**Exit rate**
Exit rate:
Exit rate is a term used in website traffic analysis, in oil and gas production, and in finance. Note that exit rate and bounce rate are distinct measures. In website traffic analysis (where it is sometimes confused with bounce rate), the exit rate of a page is the percentage of visits to that page that were the last in their session, i.e., the visits in which the visitor left the website from that specific page.
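A minimal sketch of the distinction, assuming simple per-page counters (the names `pageviews`, `exits`, `bounces`, and `entrances` are illustrative, not from any specific analytics API): exit rate divides exits by all views of the page, while bounce rate divides single-page sessions by sessions that entered on the page.

```python
def exit_rate(exits: int, pageviews: int) -> float:
    """Share of views of a page that ended the visit there."""
    return 100.0 * exits / pageviews if pageviews else 0.0

def bounce_rate(bounces: int, entrances: int) -> float:
    """Share of sessions that entered on the page and left
    without viewing any other page."""
    return 100.0 * bounces / entrances if entrances else 0.0

# Example: a page viewed 1,000 times, 400 of which ended the session;
# 500 sessions started on it, 200 of them viewing only that page.
print(exit_rate(400, 1000))   # 40.0
print(bounce_rate(200, 500))  # 40.0
```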
Exit rate:
Exit rate as an upstream petroleum-industry term refers to the rate of production of oil and/or gas as of a specified date. Often this will be the projected rate at the next year-end.
Exit rate as a financial term refers to the revenue or cost to be expected in the following fiscal period as a derivative of the performance in the current period.
Exit rate:
When used in the context of revenue, exit rate refers to the income expected in following periods as a result of sales closed in the existing period. If, over the course of a year, a company signs deals that will generate a million dollars a year in following years, then the company has an exit rate of one million dollars per year for that year.
Exit rate:
When used in the context of costs, exit rate refers to the costs expected in following periods as a result of recurring costs taken on during the existing period. If a company took on headcount and recurring costs of a million dollars per year during a given fiscal year, then that company's working budget would have a cost exit rate of a million dollars per year for that year.
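As a hedged illustration of the arithmetic (the function and figures below are hypothetical, not from the source): the exit rate annualizes only the recurring revenue or cost still in force at period end, regardless of when during the period it was taken on.

```python
def exit_rate_annual(recurring_items: list[dict]) -> float:
    """Annualized run-rate of recurring items still active at period end.
    Each item is {'annual_value': float, 'active': bool}."""
    return sum(i["annual_value"] for i in recurring_items if i["active"])

deals = [
    {"annual_value": 600_000, "active": True},   # signed in Q1
    {"annual_value": 400_000, "active": True},   # signed in Q4
    {"annual_value": 250_000, "active": False},  # churned mid-year
]
print(exit_rate_annual(deals))  # 1000000 -> a $1M/yr revenue exit rate
```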
**TRIM29**
TRIM29:
Tripartite motif-containing protein 29 is a protein that in humans is encoded by the TRIM29 gene.
Function:
The protein encoded by this gene belongs to the TRIM protein family. It has multiple zinc finger motifs and a leucine zipper motif. It has been proposed to form homo- or heterodimers which are involved in nucleic acid binding. Thus, it may act as a transcriptional regulatory factor involved in carcinogenesis and/or differentiation. It may also function in the suppression of radiosensitivity, since it is associated with the ataxia–telangiectasia phenotype.
Interactions:
TRIM29 has been shown to interact with TRIM23 and GCC1.
**Jennifer Mueller**
Jennifer Mueller:
Jennifer L. Mueller is an applied mathematician and biomedical engineer whose research concerns inverse problems and their applications, particularly to problems in medical imaging related to electrical impedance tomography. She is a professor of mathematics at Colorado State University, where she also holds a joint appointment in the school of biomedical engineering and the department of electrical and computer engineering.
Education and career:
Mueller completed a Ph.D. in 1997 at the University of Nebraska–Lincoln. Her dissertation, Inverse Problems in Singular Differential Equations, was supervised by Thomas S. Shores. After postdoctoral research at the Rensselaer Polytechnic Institute she joined the Colorado State Mathematics Department in 2000, and became a founding member of the School of Biomedical Engineering in 2007. She was promoted to full professor in 2011.
Book:
With Samuli Siltanen, Mueller is the author of the book Linear and Nonlinear Inverse Problems with Practical Applications (SIAM, 2012).
**Planetary science**
Planetary science:
Planetary science (or more rarely, planetology) is the scientific study of planets (including Earth), celestial bodies (such as moons, asteroids, comets) and planetary systems (in particular those of the Solar System) and the processes of their formation. It studies objects ranging in size from micrometeoroids to gas giants, aiming to determine their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, which originally grew from astronomy and Earth science, and now incorporates many disciplines, including planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetary science, glaciology, and exoplanetology. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology.
Planetary science:
There are interrelated observational and theoretical branches of planetary science. Observational research can involve combinations of space exploration, predominantly with robotic spacecraft missions using remote sensing, and comparative, experimental work in Earth-based laboratories. The theoretical component involves considerable computer simulation and mathematical modelling.
Planetary science:
Planetary scientists are generally located in the astronomy and physics or Earth sciences departments of universities or research centres, though there are several purely planetary science institutes worldwide. Generally, planetary scientists study one of the Earth sciences, astronomy, astrophysics, geophysics, or physics at the graduate level and concentrate their research in planetary science disciplines. There are several major conferences each year, and a wide range of peer-reviewed journals. Some planetary scientists work at private research centres and often initiate partnership research tasks.
History:
The history of planetary science may be said to have begun with the Ancient Greek philosopher Democritus, who is reported by Hippolytus as saying: "The ordered worlds are boundless and differ in size, and that in some there is neither sun nor moon, but that in others, both are greater than with us, and yet with others more in number. And that the intervals between the ordered worlds are unequal, here more and there less, and that some increase, others flourish and others decay, and here they come into being and there they are eclipsed. But that they are destroyed by colliding with one another. And that some ordered worlds are bare of animals and plants and all water."
History:
In more modern times, planetary science began in astronomy, from studies of the unresolved planets. In this sense, the original planetary astronomer would be Galileo, who discovered the four largest moons of Jupiter, the mountains on the Moon, and first observed the rings of Saturn, all objects of intense later study. Galileo's study of the lunar mountains in 1609 also began the study of extraterrestrial landscapes: his observation "that the Moon certainly does not possess a smooth and polished surface" suggested that it and other worlds might appear "just like the face of the Earth itself". Advances in telescope construction and instrumental resolution gradually allowed increased identification of the atmospheric as well as surface details of the planets. The Moon was initially the most heavily studied, owing to its proximity to the Earth and the elaborate features always visible on its surface, and technological improvements gradually produced more detailed lunar geological knowledge. In this scientific process, the main instruments were astronomical optical telescopes (and later radio telescopes) and finally robotic exploratory spacecraft, such as space probes.
History:
The Solar System has now been relatively well-studied, and a good overall understanding of the formation and evolution of this planetary system exists. However, there are large numbers of unsolved questions, and the rate of new discoveries is very high, partly due to the large number of interplanetary spacecraft currently exploring the Solar System.
Disciplines:
Planetary science comprises observational and theoretical astronomy, geology (astrogeology), atmospheric science, and an emerging subspecialty concerned with planetary oceans, called planetary oceanography.
Disciplines:
Planetary astronomy: This is both an observational and a theoretical science. Observational researchers are predominantly concerned with the study of the small bodies of the Solar System: those that are observed by telescopes, both optical and radio, so that characteristics of these bodies such as shape, spin, surface materials and weathering are determined, and the history of their formation and evolution can be understood.
Disciplines:
Theoretical planetary astronomy is concerned with dynamics: the application of the principles of celestial mechanics to the Solar System and extrasolar planetary systems. Observing exoplanets and determining their physical properties, exoplanetology, is a major area of research besides Solar System studies. Each planet of the Solar System also has its own dedicated branch of study.
Disciplines:
Planetary geology: In planetary science, the term geology is used in its broadest sense, to mean the study of the surface and interior parts of planets and moons, from their core to their magnetosphere. The best-known research topics of planetary geology deal with the planetary bodies in the near vicinity of the Earth: the Moon, and the two neighboring planets, Venus and Mars. Of these, the Moon was studied first, using methods developed earlier on the Earth. Planetary geology focuses on celestial objects that exhibit a solid surface or have significant solid physical states as part of their structure. Planetary geology applies geology, geophysics and geochemistry to planetary bodies.
Disciplines:
Planetary geomorphology: Geomorphology studies the features on planetary surfaces and reconstructs the history of their formation, inferring the physical processes that acted on the surface. Planetary geomorphology includes the study of several classes of surface features:
Impact features (multi-ringed basins, craters)
Volcanic and tectonic features (lava flows, fissures, rilles)
Glacial features
Aeolian features
Space weathering – erosional effects generated by the harsh environment of space (continuous micrometeorite bombardment, high-energy particle rain, impact gardening); for example, the thin dust cover on the surface of the lunar regolith is a result of micrometeorite bombardment.
Disciplines:
Hydrological features: the liquid involved can range from water to hydrocarbon and ammonia, depending on the location within the Solar System. This category includes the study of paleohydrological features (paleochannels, paleolakes). The history of a planetary surface can be deciphered by mapping features from top to bottom according to their deposition sequence, as first determined on terrestrial strata by Nicolas Steno. For example, stratigraphic mapping prepared the Apollo astronauts for the field geology they would encounter on their lunar missions. Overlapping sequences were identified on images taken by the Lunar Orbiter program, and these were used to prepare a lunar stratigraphic column and geological map of the Moon.
Disciplines:
Cosmochemistry, geochemistry and petrology: One of the main problems when generating hypotheses on the formation and evolution of objects in the Solar System is the lack of samples that can be analyzed in the laboratory, where a large suite of tools is available and the full body of knowledge derived from terrestrial geology can be brought to bear. Direct samples from the Moon, asteroids and Mars are present on Earth, removed from their parent bodies and delivered as meteorites. Some of these have suffered contamination from the oxidising effect of Earth's atmosphere and the infiltration of the biosphere, but those meteorites collected in the last few decades from Antarctica are almost entirely pristine.
Disciplines:
The different types of meteorites that originate from the asteroid belt cover almost all parts of the structure of differentiated bodies: meteorites even exist that come from the core-mantle boundary (pallasites). The combination of geochemistry and observational astronomy has also made it possible to trace the HED meteorites back to a specific asteroid in the main belt, 4 Vesta.
Disciplines:
The comparatively few known Martian meteorites have provided insight into the geochemical composition of the Martian crust, although the unavoidable lack of information about their points of origin on the diverse Martian surface means that they do not provide more detailed constraints on theories of the evolution of the Martian lithosphere. As of July 24, 2013, 65 Martian meteorite samples had been discovered on Earth. Many were found in either Antarctica or the Sahara Desert.
Disciplines:
During the Apollo program, 384 kilograms of lunar samples were collected and transported to the Earth, and three Soviet Luna robots also delivered regolith samples from the Moon. These samples provide the most comprehensive record of the composition of any Solar System body besides the Earth. The number of lunar meteorites has grown quickly in recent years: as of April 2008, 54 meteorites had been officially classified as lunar.
Disciplines:
Eleven of these are from the US Antarctic meteorite collection, six are from the Japanese Antarctic meteorite collection, and the other 37 are from hot desert localities in Africa, Australia, and the Middle East. The total mass of recognized lunar meteorites is close to 50 kg.
Disciplines:
Planetary geophysics and space physics: Space probes made it possible to collect data not only in the visible-light region but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: their gravity and their magnetic fields, which are studied through geophysics and space physics.
Disciplines:
Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.
Disciplines:
If a planet's magnetic field is sufficiently strong, its interaction with the solar wind forms a magnetosphere around the planet. Early space probes discovered the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field and continues downstream in the magnetotail, hundreds of Earth radii behind the planet. Inside the magnetosphere there are relatively dense regions of trapped charged particles, the Van Allen radiation belts.
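The scale of a magnetosphere follows from pressure balance between the solar wind's ram pressure and the planet's magnetic pressure. A standard estimate (the Chapman-Ferraro standoff distance; the numbers below are typical textbook values, not taken from the source) recovers roughly 10 Earth radii for Earth:

```latex
% Pressure balance at the magnetopause: solar-wind ram pressure
% equals the magnetic pressure of the (dipole) field,
% \rho v^2 = B(r)^2 / (2\mu_0), with B(r) = B_0 (R_E/r)^3.
\[
  \frac{r_{mp}}{R_E} \;=\; \left( \frac{B_0^{\,2}}{2 \mu_0\, \rho\, v^{2}} \right)^{1/6} \;\approx\; 8\text{-}10
\]
% With B_0 \approx 3\times10^{-5}\,\mathrm{T} (equatorial surface field),
% \rho \approx 10^{-20}\,\mathrm{kg\,m^{-3}}, and v \approx 400\,\mathrm{km\,s^{-1}}.
```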
Disciplines:
Planetary geophysics includes, but is not limited to, seismology and tectonophysics, geophysical fluid dynamics, mineral physics, geodynamics, mathematical geophysics, and geophysical surveying.
Disciplines:
Planetary geodesy: Planetary geodesy (also known as planetary geodetics) deals with the measurement and representation of the planets of the Solar System, their gravitational fields and geodynamic phenomena (polar motion in three-dimensional, time-varying space). The science of geodesy has elements of both astrophysics and planetary sciences. The shape of the Earth is to a large extent the result of its rotation, which causes its equatorial bulge, and of the competition between geologic processes such as the collision of plates and vulcanism, resisted by the Earth's gravity field. These principles can be applied to the solid surface of Earth (orogeny): few mountains are higher than 10 km (6 mi), and few deep-sea trenches are deeper than that, because a mountain as tall as, for example, 15 km (9 mi) would develop so much pressure at its base, due to gravity, that the rock there would become plastic, and the mountain would slump back to a height of roughly 10 km (6 mi) in a geologically insignificant time. Some or all of these geologic principles can be applied to other planets besides Earth. For instance, on Mars, whose surface gravity is much less, the largest volcano, Olympus Mons, is 27 km (17 mi) high at its peak, a height that could not be maintained on Earth. The Earth geoid is essentially the figure of the Earth abstracted from its topographic features; likewise, the Mars geoid (areoid) is essentially the figure of Mars abstracted from its topographic features. Surveying and mapping are two important fields of application of geodesy.
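The slump argument can be made quantitative with a back-of-the-envelope pressure balance (a sketch with assumed round numbers, not figures from the source): the basal pressure ρgh may not exceed the strength of crustal rock, so the maximum height scales inversely with surface gravity.

```latex
% Basal pressure of a mountain of height h and rock density \rho:
%   P = \rho g h \le \sigma_{\max} (rock strength, a few hundred MPa)
\[
  h_{\max} \;\approx\; \frac{\sigma_{\max}}{\rho\, g}
\]
% Earth: \sigma_{\max} \approx 2.6\times10^{8}\,\mathrm{Pa},
%        \rho \approx 2700\,\mathrm{kg\,m^{-3}}, g \approx 9.8\,\mathrm{m\,s^{-2}}
%        \Rightarrow h_{\max} \approx 10\,\mathrm{km}.
% Mars (same rock, g \approx 3.7\,\mathrm{m\,s^{-2}}):
%        h_{\max} \approx 10\,\mathrm{km} \times (9.8/3.7) \approx 26\,\mathrm{km},
% consistent with Olympus Mons at about 27 km.
```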
Disciplines:
Planetary atmospheric science: An atmosphere is an important transitional zone between the solid planetary surface and the rarefied ionizing and radiation belts higher up. Not all planets have atmospheres: their existence depends on the mass of the planet and on the planet's distance from the Sun (too distant, and frozen atmospheres occur). Besides the four gas giant planets, three of the four terrestrial planets (Earth, Venus, and Mars) have significant atmospheres. Two moons have significant atmospheres: Saturn's moon Titan and Neptune's moon Triton. A tenuous atmosphere exists around Mercury.
Disciplines:
The effects of the rotation rate of a planet about its axis can be seen in atmospheric streams and currents. Seen from space, these features show as bands and eddies in the cloud system and are particularly visible on Jupiter and Saturn.
Disciplines:
Planetary oceanography: the emerging study of planetary oceans, noted above.
Exoplanetology: Exoplanetology studies exoplanets, the planets existing outside our Solar System. Until recently, the means of studying exoplanets have been extremely limited, but with the current rate of innovation in research technology, exoplanetology has become a rapidly developing subfield of astronomy.
Comparative planetary science:
Planetary science frequently makes use of the method of comparison to give a greater understanding of the object of study. This can involve comparing the dense atmospheres of Earth and Saturn's moon Titan, the evolution of outer Solar System objects at different distances from the Sun, or the geomorphology of the surfaces of the terrestrial planets, to give only a few examples.
Comparative planetary science:
The main comparison that can be made is to features on the Earth, as it is much more accessible and allows a much greater range of measurements to be made. Earth analog studies are particularly common in planetary geology, geomorphology, and also in atmospheric science.
The use of terrestrial analogs was first described by Gilbert (1886).
In Fiction:
In Frank Herbert's 1965 science fiction novel Dune, the major secondary character Liet-Kynes serves as the "Imperial Planetologist" for the fictional planet Arrakis, a position he inherited from his father Pardot Kynes. In this role, a planetologist is described as having the skills of an ecologist, geologist, meteorologist, and biologist, as well as a basic understanding of human sociology. The planetologists apply this expertise to the study of entire planets. In the Dune series, planetologists are employed to understand planetary resources and to plan terraforming or other planetary-scale engineering projects. This fictional position in Dune has had an impact on the discourse surrounding planetary science itself and is referred to by one author as a "touchstone" within the related disciplines. In one example, a publication by Sybil P. Seitzinger in the journal Nature opens with a brief introduction on the fictional role in Dune, and suggests we should consider appointing individuals with skills similar to Liet-Kynes's to help with managing human activity on Earth.
Professional activity:
Journals
Professional bodies: This non-exhaustive list includes those institutions and universities with major groups of people working in planetary science. Alphabetical order is used.
Division for Planetary Sciences (DPS) of the American Astronomical Society
American Geophysical Union
Meteoritical Society
Europlanet
Government space agencies:
Canadian Space Agency (CSA)
China National Space Administration (CNSA, People's Republic of China)
Professional activity:
Centre national d'études spatiales (CNES), the French National Centre of Space Research
Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR), the German Aerospace Center
European Space Agency (ESA)
Indian Space Research Organisation (ISRO)
Israel Space Agency (ISA)
Italian Space Agency
Japan Aerospace Exploration Agency (JAXA)
NASA (National Aeronautics and Space Administration, United States of America), including JPL, GSFC, and Ames
National Space Organization (Taiwan)
Professional activity:
Russian Federal Space Agency
UK Space Agency (UKSA)
Major conferences:
Lunar and Planetary Science Conference (LPSC), organized by the Lunar and Planetary Institute in Houston. Held annually since 1970; occurs in March.
Division for Planetary Sciences (DPS) meeting held annually since 1970 at a different location each year, predominantly within the mainland US. Occurs around October.
American Geophysical Union (AGU) annual Fall meeting in December in San Francisco.
American Geophysical Union (AGU) Joint Assembly (co-sponsored with other societies) in April–May, in various locations around the world.
Meteoritical Society annual meeting, held during the Northern Hemisphere summer, generally alternating between North America and Europe.
European Planetary Science Congress (EPSC), held annually around September at a location within Europe. Smaller workshops and conferences on particular fields occur worldwide throughout the year.
**Didacticism**
Didacticism:
Didacticism is a philosophy that emphasizes instructional and informative qualities in literature, art, and design. In art, design, architecture, and landscape, didacticism is a conceptual approach driven by the urgent need to explain. When applied to ecological questions (for example), didacticism in art, design, architecture and landscape attempts to persuade the viewer of environmental priorities, thus constituting an entirely new form of explanatory discourse that presents what can be called "eco-lessons". This concept can be defined as "ecological didacticism".
Overview:
The term has its origin in the Ancient Greek word διδακτικός (didaktikos), "pertaining to instruction", and signified learning in a fascinating and intriguing manner. Didactic art was meant both to entertain and to instruct. Didactic plays, for instance, were intended to convey a moral theme or other rich truth to the audience. During the Middle Ages, Roman Catholic chants like the Veni Creator Spiritus, as well as Eucharistic hymns like the Adoro te devote and Pange lingua, were used to fix the truths of the Roman Catholic faith within prayers, preserving them and passing them down from one generation to another. In the Renaissance, the church began a syncretism between pagan and Christian didactic art, a syncretism that reflected its dominating temporal power and recalled the controversy between the pagan and Christian aristocracy in the fourth century. An example of didactic writing is Alexander Pope's An Essay on Criticism (1711), which offers a range of advice about critics and criticism. An example of didacticism in music is the chant Ut queant laxis, which was used by Guido of Arezzo to teach solfege syllables. Around the 19th century the term didactic came to also be used as a criticism of work that appears to be overburdened with instructive, factual, or otherwise educational information, to the detriment of the enjoyment of the reader (a meaning that was quite foreign to Greek thought). Edgar Allan Poe called didacticism the worst of "heresies" in his essay The Poetic Principle.
Examples:
Some instances of didactic literature include:
Instructions of Kagemni, by Kagemni I(?) (2613–2589 BC?)
Instruction of Hardjedef, by Hardjedef (between 25th century BC and 24th century BC)
The Maxims of Ptahhotep, by Ptahhotep (around 2375–2350 BC)
Works and Days, by Hesiod (c. 700 BC)
On Horsemanship, by Xenophon (c. 350 BC)
The Panchatantra, by Vishnu Sarma (c. 300 BC)
De rerum natura, by Lucretius (c. 50 BC)
Georgics, by Virgil (c. 30 BC)
Ars Poetica, by Horace (c. 18 BC)
Ars Amatoria, by Ovid (1 BC)
Thirukkural, by Thiruvalluvar (between 2nd century BC and 5th century AD)
Remedia Amoris, by Ovid (AD 1)
Medicamina Faciei Femineae, by Ovid (between 1 BC and AD 8)
Astronomica, by Marcus Manilius (c. AD 14)
Epistulae morales ad Lucilium, by Seneca the Younger (c. 65 AD)
Cynegetica, by Nemesianus (3rd century AD)
The Jataka Tales (Buddhist literature, 5th century AD)
Philosophus Autodidactus, by Ibn Tufail (12th century)
Theologus Autodidactus, by Ibn al-Nafis (1270s)
The Morall Fabillis of Esope the Phrygian (1480s)
The Puruṣaparīkṣā, by Vidyapati
The Pilgrim's Progress, by John Bunyan (1678)
Rasselas, by Samuel Johnson (1759)
The History of Little Goody Two-Shoes (anonymous, 1765)
The Adventures of Nicholas Experience, by Ignacy Krasicki (1776)
Critical and Miscellaneous Essays, by Thomas Carlyle (1838–1839)
Critical and Historical Essays, by Thomas Babington Macaulay (1843)
The Water-Babies, by Charles Kingsley (1863)
Fors Clavigera, by John Ruskin (1871–1884)
If-, by Rudyard Kipling (1910)
Siddhartha, by Hermann Hesse (1952)
Sophie's World, by Jostein Gaarder (1991)
The Wizard of Gramarye series, by Christopher Stasheff (1968–2004)
Children's Books in England: Five Centuries of Social Life, by F. J. Harvey Darton
Some examples of research that investigates didacticism in art, design, architecture and landscape:
"Du Didactisme en Architecture / On Didacticism in Architecture". (2019). In C. Cucuzzella, C. I. Hammond, S. Goubran, & C. Lalonde (Eds.), Cahiers de Recherche du LEAP (Vol. 3). Potential Architecture Books.
Examples:
Cucuzzella, C., Chupin, J.-P., & Hammond, C. (2020). "Eco-didacticism in art and architecture: Design as means for raising awareness". Cities, 102, 102728. Some examples of art, design, architecture and landscape projects present such eco-lessons.
**CopA-like RNA**
CopA-like RNA:
CopA-like RNA is a family of non-coding RNAs found on the R1 plasmid.
CopA-like RNA:
In several groups of bacterial plasmids, antisense RNAs regulate copy number through inhibition of replication initiator protein synthesis. These RNAs are characterised by a long hairpin structure interrupted by several unpaired nucleotides or bulged loops. In plasmid R1, the inhibitory complex between the antisense RNA (CopA) and its target mRNA (CopT) is characterised by a four-way junction structure and a side-by-side helical alignment.
**Stratified squamous epithelium**
Stratified squamous epithelium:
A stratified squamous epithelium consists of squamous (flattened) epithelial cells arranged in layers upon a basal membrane. Only one layer is in contact with the basement membrane; the other layers adhere to one another to maintain structural integrity. Although this epithelium is referred to as squamous, many cells within the layers may not be flattened; this is due to the convention of naming epithelia according to the cell type at the surface. In the deeper layers, the cells may be columnar or cuboidal. There are no intercellular spaces. This type of epithelium is well suited to areas in the body subject to constant abrasion, as the thickest layers can be sequentially sloughed off and replaced before the basement membrane is exposed. It forms the outermost layer of the skin and the inner lining of the mouth, esophagus and vagina. In the epidermis of skin in mammals, reptiles, and birds, the layer of keratin in the outer layer of the stratified squamous epithelial surface is named the stratum corneum. The stratum corneum is made up of squamous cells which are keratinized and dead. These are shed periodically.
Structure:
Non-keratinized: Non-keratinized surfaces must be kept moist by bodily secretions to prevent them from drying out. In these epithelia, cells of the most superficial layer are sometimes without keratin and living.
Examples of non-keratinized stratified squamous epithelium include some parts of the lining of the oral cavity, the pharynx, the conjunctiva of the eye, the upper one-third of the esophagus, the rectum, the external female genitalia, and the vagina. Even non-keratinized surfaces, consisting as they do of keratinocytes, have a minor superficial keratinized layer of varying thickness, depending on the age of the epithelium and the damage it has experienced.
Keratinized: Keratinized surfaces are protected by the protein keratin; keratin deposited on the surface makes the epithelium impermeable and dry. Examples of keratinized stratified squamous epithelium include the epidermis of the skin, such as the palm of the hand and the sole of the foot, and the masticatory mucosa.
**Low molecular-mass organic gelators**
Low molecular-mass organic gelators:
Low molecular-mass organic gelators (LMOGs) are the monomeric sub-units that form self-assembled fibrillar networks (SAFINs), which entrap solvent between their strands. SAFINs arise from the formation of strong non-covalent interactions between LMOG monomeric sub-units. As SAFINs form, the long fibers become intertwined and trap solvent molecules; once entrapped within the network, solvent molecules are immobilized by surface tension effects. The stability of a gel depends on the equilibrium between the assembled network and the dissolved gelators. One characteristic of an LMOG that demonstrates its stability is its ability to contain an organic solvent at the boiling point of that solvent, due to extensive solvent-fibrillar interactions. Gels self-assemble through non-covalent interactions such as π-stacking, hydrogen bonding, or van der Waals interactions to form volume-filling 3D networks. Self-assembly is key to gel formation and depends upon reversible bond formation.
Low molecular-mass organic gelators:
The propensity of a low-molecular-weight molecule to form LMOGs is classified by its Minimum Gelation Concentration (MGC). The MGC is the lowest possible gelator concentration needed to form a stable gel. A lower MGC is desired, to minimize the amount of gelator material needed to form gels. Super gelators have an MGC of less than 1 wt%.
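A minimal sketch of the arithmetic (the function and sample masses are illustrative, not from the source): MGC is typically quoted as the gelator's mass fraction of the whole gel, so the 1 wt% super-gelator threshold follows directly.

```python
def weight_percent(m_gelator_g: float, m_solvent_g: float) -> float:
    """Gelator concentration as wt% of the total gel mass."""
    return 100.0 * m_gelator_g / (m_gelator_g + m_solvent_g)

# Example: 8 mg of gelator immobilizing 2 g of solvent.
mgc = weight_percent(0.008, 2.0)
print(f"{mgc:.2f} wt%")           # ~0.40 wt%
print("super gelator:", mgc < 1)  # True
```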
Background and significance:
LMOGs were first reported in the 1930s, but advances in the field were more often than not discoveries of chance, as there existed little theoretical understanding of gel formation. During this time LMOGs found applications in thickening lubricants, printing inks, and napalm. Interest in the field dwindled for several decades until the mid-1990s, when Hanabusa, Shinkai, and Hamilton designed numerous LMOGs that form thermoreversible intermolecular amide-carbonyl hydrogen bonds. The LMOGs developed by Hanabusa et al. were suitable for forming hard gels, including gels with chloroform, which had resisted gelation prior to their discovery. These new LMOGs were rationally designed and represented the first time that scientists had been able to discover new LMOGs based on supramolecular principles. From these earliest studies and the screening of numerous compounds, it was determined that, for thermoreversible gels based on the amide-carbonyl hydrogen bond, amino acid structure, enantiopurity, hydrophilic-lipophilic ratio, and increasing peptide substitution greatly affected the gelling ability of various new compounds.
Background and significance:
The aforementioned principles that developed in this field's infancy have proved successful in allowing researchers to tune LMOGs for different functions. Today, LMOGs have been extensively studied for their unique properties. This newfound functional diversity has led to a wide range of possible applications for LMOGs in agriculture, drug delivery, pollutant/heavy metal remediation, luminescent devices, and chemical sensing.
Gel formation and morphology:
The majority of LMOGs can be triggered to form by manipulating the system's properties, such as the pH, the solvent, exposure to light, or the introduction of oxidizing or reducing reagents. Researchers have proposed a set of guidelines for successful gel formation:
1. It is necessary to have the presence of strong self-complementary and unidirectional intermolecular interactions that can enforce 1D self-assembly.
2. The solvent-fiber interfacial energy should be manipulated to control solubility and prevent crystallization of the LMOG.
3. Some other factor must be present that can induce fiber cross-linking and network formation.
Traditionally, gel phase transitions are strictly temperature dependent. However, it has recently been shown that non-liquid-crystalline gelators composed of (R)-18-(n-alkylamino)octadecan-7-ols (HSN-n) undergo first-order gel-to-gel phase transitions, leading to different morphologies of the gel in carbon tetrachloride (CCl4). The uniqueness of this discovery stems from the fact that it is solvent molecules entering and exiting the structure that lead to the different structural morphologies. All other previously known gel phase transitions have occurred as the result of temperature changes, and only one previous case documents this type of solvent-dependent morphological change. Even in that case, N-isopropylacrylamide hydrogels that underwent conformational changes (folding and unfolding of their polymer chains) did so only via a temperature-dependent process, which resulted in water molecules near the structure entering or exiting it.
Gel formation and morphology:
The stability of a formed gelation matrix is dependent on the equilibrium between the assembled network and the dissolved gelator assemblies. LMOGs are functionally diverse and can be composed of both polar and non-polar regions (amphiphiles).
Gel formation and morphology:
Scanning electron microscopy: Scanning electron microscopy is a useful means for researchers to determine the structural properties of a low-molecular-mass gel. These gels exhibit a wide range of structures, from fibrous strands (of various lengths) to ribbons and tubes. The structure of these gels is a key factor in their ability to gel solvents or water; their tertiary structure determines the critical gelation concentration of the gel.
Rheological measurements:
Generally, rheology is used to study the flow of matter within a substance. In order for a substance to be considered a gel, it must possess solid-like traits when characterized by rheological measurements. Rheological characterization tests materials by applying stress and measuring the material's resistance to deformation. From rheological measurements, a gel can be classified as either a "strong" or a "weak" gel. This classification emphasizes the strength of the interactions between gelator molecules in a particular gel. A "weak" gel is often not considered a true gel because it does not conform to the rheological traits of a purely solid-like material. Instead, "weak" gels are generally better classified as viscoelastic liquids.
Rheological measurements:
As a result of this distinction, these classes of gels demonstrate different elasticity, as calculated by the elastic modulus, a mathematical model for predicting the elasticity of different materials under different stressors. The shear modulus (G) of a "strong" gel exhibits a smaller dissipation of energy than that of "weak" gels, and the "strong" gel's G-values plateau for longer periods of time. Furthermore, the rheological properties of different gels can occasionally be used to compare naturally occurring biopolymer gels with synthetic LMOGs.
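In oscillatory rheology this distinction is usually stated through the storage and loss moduli (standard definitions, included as a hedged aside rather than from any one LMOG study): a material is gel-like when elastic storage dominates viscous loss across the probed frequencies.

```latex
% Complex shear modulus measured under small oscillatory strain:
\[
  G^{*}(\omega) \;=\; G'(\omega) + i\,G''(\omega),
  \qquad \tan\delta \;=\; \frac{G''}{G'}
\]
% Gel-like (solid-like) response: G' > G'' (\tan\delta < 1),
% with G' nearly frequency-independent for a "strong" gel;
% "weak" gels show larger \tan\delta and stronger frequency dependence.
```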
Interactions of gel and solvent:
Researchers have not been able to reliably predict novel LMOGs. A key aspect in predicting new gelator materials is understanding the interaction between the gel molecules and the solvent. The most common solvents for LMOGs are organic in nature and result in organogels; much rarer are hydrogels, gels that form with water as the solvent. Several attempts have been made to quantify the gel-solvent interaction using a variety of parameters (the Hildebrand parameter at the end of this list is sketched below): The single- or multi-component solubility parameter (δ) of a solvent can give insight into how well-suited the solvent will be for gelation. Depending on the gelator/solvent system, a high solubility parameter can indicate high or low thermal stability of the gel.
Interactions of gel and solvent:
The dielectric constant (ε) reflects the bulk polarity of the solvent.
The Dimroth-Reichardt parameter (ET(30)) is a measure of ionizing power of a solvent.
The Kamlet-Taft solvent parameters establish solvatochromic relationships which measure separately the hydrogen bond donor (α), hydrogen bond acceptor (β), and polarizability (π*) of solvents.
The Hildebrand parameter measures the energy it takes to create a cavity within a solvent.
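For reference, the Hildebrand parameter has a simple closed form (a textbook definition, added here as a hedged aside rather than taken from the source): it is the square root of the cohesive energy density, estimated from the enthalpy of vaporization.

```latex
% Hildebrand solubility parameter: square root of cohesive energy density
\[
  \delta \;=\; \sqrt{\frac{\Delta H_{vap} - RT}{V_m}}
\]
% \Delta H_{vap}: molar enthalpy of vaporization, V_m: molar volume,
% R: gas constant, T: absolute temperature. Typical units: MPa^{1/2}.
```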
Applications:
Agricultural industry: pheromone release devices. Multiple reservoir-type controlled-release devices (CRDs) have been developed to achieve the controlled release of highly volatile pheromones into an agricultural setting, whereby they can act as pesticides throughout the growing season. There are several drawbacks associated with current CRDs: they involve multi-step preparation protocols, exhibit low pheromone-holding capacities, are not biodegradable, and leak pheromones when compressed or broken. To address these functional issues, a sugar-alcohol-based amphiphilic super gelator, mannitol dioctanoate (M8), was developed that efficiently gelled the pheromones 2-heptanone and lauryl acetate. The miticide 2-heptanone controls the parasitic varroa mite (Varroa destructor), which is responsible for honey bee (Apis mellifera L.) colony destruction. The researchers further developed this application by constructing a reservoir-type CRD consisting of the 2-heptanone gel in a vapor-barrier-film-sealed pouch, which was then activated by boring a small hole through the vapor barrier. The CRD had a high loading capacity of 92% wt/wt, allowing the construction of small devices with high biocompatibility; and because M8 is composed of mannitol and fatty acids, it is also biodegradable. Some amino acid methyl esters, namely valine, leucine, and isoleucine methyl esters, are the sex pheromones of Phyllophaga pests. By a crystal-engineering approach, these pheromones were reacted with cinnamic acid analogs. The resulting organic salts produced gels in many non-protic solvents and can release the sex pheromone slowly. The pests can be trapped physically by keeping these gelator salts inside a trap.
Applications:
Drug delivery: Researchers have been exploring LMOGs belonging to a class of molecules called cyclohexane trisamides, due to their ability to form hydrogels. By attaching functional groups to the gelator molecule, researchers can adjust the gelation properties; the gels transition to the liquid state upon changes in temperature or pH. Taking it one step further, the researchers attached an amino acid and a model drug to the gel molecule and added an enzyme to the gel matrix. When the temperature or pH was changed, the gel molecules entered the liquid phase, where the amino acid and drug molecule could be cleaved from the gel molecule by the enzyme. Researchers believe these LMOGs may some day be used as a fast, two-step-release drug delivery system.
Applications:
Pollutant/heavy metal remediation: In 2010, researchers developed phase-selective gelators for the containment and treatment of oil spills. They developed a class of LMOGs capable of gelling diesel, gasoline, pump, mineral, and silicone oils. These LMOGs were composed of dialkanoate derivatives of the sugar alcohols mannitol and sorbitol; these derivatives were ideal as they are biodegradable, inexpensive, and non-toxic. Once the oil was taken up by the gel fibers, it could be separated from the gel by vacuum distillation, and the gelator could be recycled.
Applications:
Luminescent devices: Some gels can be used in luminescent devices such as OLEDs and fluorescent sensors. One example of an OLED-type LMOG is mono-substituted ethynyl-pyrene. This gelator forms a stable gel with DMF, toluene, or cyclohexane while maintaining its luminescence. Another important characteristic of these gels is that they maintain high charge-carrier mobility, meaning the gel can pass sufficient current in an electronic luminescent device. Furthermore, luminescent gels can be utilized as sensors. These sensors operate by forming a stable luminescent gel in the presence of different analytes. One example of a luminescent gel for sensing fluoride anions is presented by Prasad and Rajamalli; it utilizes poly(aryl ether) dendrons attached to a core aryl ether bearing anthracene. The system forms a stable yellow gel under normal gelation conditions; if fluoride is introduced, the gel undergoes a gel-to-sol transition and becomes bright red. Being able to visually detect a color change in the presence of a dilute analyte is a promising field application of LMOG materials.
Applications:
Chemical sensing: Molecular gels can be sensitized toward external stimuli such as light, heat, or chemicals. LMOGs can also be sensitized by incorporating a receptor unit or a spectroscopically active unit into the gelator molecule. A variety of quinoxalinones were recently developed that act as mercury sensors by forming a gel when these ligands complex to mercury. A nonplanar dihydropyridine derivative was induced to gel by oxidizing the molecule with nitric oxide, dissolving the oxidized ligand in DMSO/water, and then heating and cooling the mixture; this gel can therefore act as a nitric oxide sensor.
Applications:
Gel sculpture: Supramolecular gels can also be employed to prepare gel sculptures. A supramolecular gelator, dicyclohexylammonium Boc-glycinate, produced a gel in nitrobenzene that exhibited self-healing and load-bearing properties. This gel was used to create the gel sculpture "Mother and Child", which is preserved at the Indian Association for the Cultivation of Science in Kolkata, India.
**Latin American Diet Pyramid**
Latin American Diet Pyramid:
The Latin American Diet Pyramid is a nutrition guide that was developed by Oldways and scientific advisers from the Harvard School of Public Health, the Baylor College of Medicine, and the Latin American Summit Scientific Committee in 2005. It is a tradition-based diet that suggests the types and frequency of foods that should be enjoyed every day.
This pyramid is based on two distinct historical periods of the culinary evolution of the peoples of Latin America.
Latin American Diet Pyramid:
The first period describes the dietary traditions of regions inhabited primarily by three high cultures of aboriginal Latin Americans: the Aztec, the Inca, and the Maya. The second period describes the dietary traditions that emerged following the arrival of Columbus, around 1500, and continuing to the present time. The dietary patterns followed today by the people of Latin America find their roots in both of these historical culinary patterns.
Latin American Diet Pyramid:
The selection of these peoples and of these time periods as a basis for the design follows from these considerations: a consistency with patterns of other healthy population groups of the world; the availability of data describing the character of food consumption patterns of the areas at that time; and the convergence of the dietary patterns revealed by these data with our current understanding of optimal nutrition based on worldwide epidemiological studies and clinical trials. Variations of these diets have traditionally existed in other parts of Central America, South America, the Caribbean, and the southern edge of the United States. For the purposes of this research, the aforementioned regions are considered part of Latin America. They are closely related to traditional areas of maize, potato, peanut, and dry bean cultivation in the Latin American region.
Latin American Diet Pyramid:
Given these carefully defined parameters of geography and time, the phrase traditional Latin American diet is used here as a shorthand for those traditional diets of these regions and peoples during two specific time periods that are historically associated with good health.
Latin American Diet Pyramid:
The design of the Latin American Diet Pyramid is not based solely on either the weight or the percentage of energy (calories) that foods account for in the diet, but on a blend of these that is meant to give relative proportions and a general sense of frequency of servings, as well as an indication of which foods to favour in a healthy Latin American-style diet.
**Fires (military)**
Fires (military):
Fires is the related tasks and systems that provide collective and coordinated use of Army indirect fires, air and missile defense, and joint fires through the targeting process. Alternatively, it can be defined as the use of weapon systems to create a specific lethal or nonlethal effect on a target. Fires has traditionally focused on fire support systems such as artillery and close air support, but the term is increasingly used to refer to non-lethal systems, including information operations, cyberwarfare, and civilian-military relationships.
Warfighting Function:
Fires is one of the six warfighting functions defined by the US Army, which also include movement and maneuver, intelligence, sustainment, command and control, and protection. The fires warfighting function is the related tasks and systems that provide collective and coordinated use of Army indirect fires, AMD, and joint fires through the targeting process. Army fires systems deliver fires in support of offensive and defensive tasks to create specific lethal and nonlethal effects on a target. The fires warfighting function as defined by the Army includes the following tasks: Deliver fires.
Warfighting Function:
Integrate all forms of Army, joint and multinational fires.
Warfighting Function:
Conduct targeting. The Marine Corps defines the fires warfighting function as follows: "Fires harass, suppress, neutralize, or destroy in order to accomplish the targeting objective, which may be to disrupt, delay, limit, persuade, or influence. Fires include the collective and coordinated use of target acquisition systems, direct and indirect fire weapons, armed aircraft of all types, and other lethal and nonlethal means. Fires are normally used in concert with maneuver, which helps shape the battlespace, setting conditions for decisive action." Harassment, suppression, neutralization, and destruction are key words used in targeting to define the impact of the weapon system on the target. Persuade and influence are tasks related to nonlethal fires such as influence operations.
**Global Defence Force Tactics**
Global Defence Force Tactics:
Global Defence Force Tactics, known in Japan as Earth Defense Force Tactics, is a PlayStation 2 turn-based strategy game developed by thinkArts.
Gameplay:
Players assume the role of GDF Commander and control GDF units in turn-based missions against the giant bug menace. Missions take place on 2D hex-maps, with attacks depicted by brief animated cutscenes.
The game has 50 stages and 250 different weapons.
Reception:
Reception for the game was negative. In Japan, Famitsu's four reviewers gave it scores of 3, 5, 6, and 5, for a total of 19 out of 40.
**Auscultatory gap**
Auscultatory gap:
An auscultatory gap, also known as the silent gap, is a period of diminished or absent Korotkoff sounds during the manual measurement of blood pressure. It is associated with reduced peripheral blood flow caused by changes in the pulse wave. The improper interpretation of this gap may lead to blood pressure monitoring errors, such as an underestimation of systolic blood pressure and/or an overestimation of diastolic blood pressure. In order to correct for an auscultatory gap, the radial pulse should be monitored by palpation. It is therefore recommended to palpate and auscultate when manually recording a patient's blood pressure. Typically, the blood pressure obtained via palpation is around 10 mmHg lower than the pressure obtained via auscultation. In general, the examiner can avoid being confused by an auscultatory gap by always inflating a blood pressure cuff to 20-40 mmHg higher than the pressure required to occlude the brachial pulse.
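A minimal sketch of that rule of thumb (the function and numbers are illustrative, not a clinical tool): estimate the systolic pressure by palpation first, then inflate well above it so the true systolic reading cannot be hidden inside a silent gap.

```python
def target_cuff_pressure(palpated_systolic_mmHg: int,
                         margin_mmHg: int = 30) -> int:
    """Cuff inflation target: palpated occlusion pressure plus a
    20-40 mmHg safety margin, so an auscultatory gap is not mistaken
    for the systolic pressure."""
    if not 20 <= margin_mmHg <= 40:
        raise ValueError("margin should be 20-40 mmHg")
    return palpated_systolic_mmHg + margin_mmHg

# Radial pulse disappears at ~120 mmHg by palpation:
print(target_cuff_pressure(120))  # 150
```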
Cause:
There is evidence that auscultatory gaps are related to carotid atherosclerosis, and to increased arterial stiffness in hypertensive patients. This appears to be independent of age. Another cause is believed to be venous stasis within the limb that is being used for the measurement. Although these observations need to be confirmed prospectively, they suggest that auscultatory gaps may have prognostic relevance.
History:
Auscultatory gap was first officially noted in 1918.
**Webcomic**
Webcomic:
Webcomics (also known as online comics or Internet comics) are comics published on a website or mobile app. While many are published exclusively on the web, others are also published in magazines, newspapers, or comic books.
Webcomic:
Webcomics can be compared to self-published print comics in that anyone with an Internet connection can publish their own webcomic. Readership levels vary widely; many are read only by the creator's immediate friends and family, while some of the largest claim audiences well over one million readers. Webcomics range from traditional comic strips and graphic novels to avant garde comics, and cover many genres, styles, and subjects. They sometimes take on the role of a comic blog. The term web cartoonist is sometimes used to refer to someone who creates webcomics.
Medium:
There are several differences between webcomics and print comics. With webcomics the restrictions of traditional books, newspapers or magazines can be lifted, allowing artists and writers to take advantage of the web's unique capabilities.
Medium:
Styles: The creative freedom webcomics provide allows artists to work in nontraditional styles. Clip art comics and photo comics (also known as fumetti) are two types of webcomics that do not use traditional artwork. A Softer World, for example, is made by overlaying photographs with strips of typewriter-style text. As in the constrained comics tradition, a few webcomics, such as Dinosaur Comics by Ryan North, are created with most strips having art copied exactly from one (or a handful of) template comics and only the text changing. Pixel art, such as that created by Richard Stevens of Diesel Sweeties, is similar to that of sprite comics but instead uses low-resolution images created by the artist themself. However, it is also common for some artists to use traditional styles, similar to those typically published in newspapers or comic books.
Medium:
Content: Webcomics that are independently published are not subject to the content restrictions of book publishers or newspaper syndicates, enjoying an artistic freedom similar to underground and alternative comics. Some webcomics stretch the boundaries of taste, taking advantage of the fact that internet censorship is virtually nonexistent in countries like the United States. The content of webcomics can still cause problems, such as Leisure Town artist Tristan Farnon's legal trouble after creating a profane Dilbert parody, or the Catholic League's protest of artist Eric Millikin's "blasphemous treatment of Jesus."
Format: Webcomic artists use many formats throughout the world. Comic strips, generally consisting of three or four panels, have been a common format for many artists. Other webcomic artists use the format of traditional printed comic books and graphic novels, sometimes with the plan of later publishing books.
Medium:
Scott McCloud, one of the first advocates of webcomics, pioneered the idea of the "infinite canvas" where, rather than being confined to normal print dimensions, artists are free to spread out in any direction indefinitely with their comics. Such a format proved highly successful in South Korean webcomics when JunKoo Kim implemented an infinite scrolling mechanism in the platform Webtoon in 2004. In 2009, French web cartoonist Balak described Turbomedia, a format for webcomics where a reader views only one panel at a time and decides their own reading rhythm by going forward one panel at a time. Some web cartoonists, such as political cartoonist Mark Fiore or Charley Parker with Argon Zark!, incorporate animations or interactive elements into their webcomics.
History:
The first comics to be shared through the Internet were Eric Millikin's Witches and Stitches, which he started uploading on CompuServe in 1985. Services such as CompuServe and Usenet were used before the World Wide Web started to rise in popularity in 1993. Early webcomics were often derivatives of strips in college newspapers, but when the Web became widely popular in the mid-1990s, more people started creating comics exclusively for this medium. By 2000, various webcomic creators were financially successful and webcomics became more artistically recognized. Unique genres and styles became popular during this period.
History:
The 2010s also saw the rise of webtoons in South Korea, where the form has become very prominent. This decade has also seen an increasingly larger number of successful webcomics being adapted into animated series in China and Japan.
Webcomics collectives:
In March 1995, artist Bebe Williams launched one of the first webcomics collectives, Art Comics Daily. Newspaper comic strip syndicates also launched websites in the mid-1990s.
Other webcomics collectives followed, with many launching in the next decade. In March 2000, Chris Crosby, Crosby's mother Teri, and other artists founded Keenspot. In July 2000, Austin Osueke launched eigoMANGA, publishing original online manga, referred to as "webmanga".
In 2001, the subscription webcomics site Cool Beans World was launched. Contributors included UK-based comic book creators Pat Mills, Simon Bisley, John Bolton, and Kevin O'Neill, and the author Clive Barker. Serialised content included Scarlet Traces and Marshal Law.
In March 2001, Shannon Denton and Patrick Coyle launched Komikwerks.com serving free strips from comics and animation professionals. The site launched with 9 titles including Steve Conley's Astounding Space Thrills, Jason Kruse's The World of Quest, and Bernie Wrightson's The Nightmare Expeditions.
On March 2, 2002, Joey Manley founded Modern Tales, offering subscription-based webcomics. The Modern Tales spin-off serializer followed in October 2002, then came girlamatic and Graphic Smash in March and September 2003 respectively.
Webcomics collectives:
By 2005, webcomics hosting had become a business in its own right, with sites such as Webcomics Nation. Traditional comic book publishers, such as Marvel Comics and Slave Labor Graphics, did not begin making serious digital efforts until 2006 and 2007. DC Comics launched its webcomic imprint, Zuda Comics, in October 2007. The site featured user-submitted comics in a competition for a professional contract to produce webcomics. In July 2010, it was announced that DC was closing down Zuda.
Business:
Some creators of webcomics are able to do so professionally through various revenue channels. Webcomic artists may sell merchandise based on their work, such as T-shirts and toys, or they may sell print versions or compilations of their webcomic. Webcomic creators can also sell online advertisements on their websites. In the second half of the 2000s, webcomics became less financially sustainable due to the rise of social media and consumers' disinterest in certain kinds of merchandise. Crowdfunding through Kickstarter and Patreon has also become a source of income for web cartoonists. Webcomics have been used by some cartoonists as a path towards syndication in newspapers. Since the mid-1990s, Scott McCloud advocated for micropayment systems as a source of income for web cartoonists, but micropayment systems have not been popular with artists or readers.
Awards:
Many webcomics artists have received honors for their work. In 2006, Gene Luen Yang's graphic novel American Born Chinese, originally published as a webcomic on Modern Tales, was the first graphic novel to be nominated for a National Book Award. Don Hertzfeldt's animated film based on his webcomics, Everything Will Be OK, won the 2007 Sundance Film Festival Jury Award in Short Filmmaking, a prize rarely bestowed on an animated film. Many traditionally print-comics focused organizations have added award categories for comics published on the web. The Eagle Awards established a Favorite Web-based Comic category in 2000, and the Ignatz Awards followed by introducing an Outstanding Online Comic category in 2001. After having nominated webcomics in several of their traditional print-comics categories, the Eisner Awards began awarding comics in the Best Digital Comic category in 2005. In 2006 the Harvey Awards established a Best Online Comics Work category, and in 2007 the Shuster Awards began an Outstanding Canadian Web Comic Creator Award. In 2012 the National Cartoonists Society gave their first Reuben Award for "On-line comic strips." Other awards focus exclusively on webcomics. The Web Cartoonists' Choice Awards consist of a number of awards that were handed out annually from 2001 to 2008. The Dutch Clickburg Webcomic Awards (also known as the Clickies) were handed out four times between 2005 and 2010; they require the recipient to be active in the Benelux countries, with the exception of one international award.
Webcomics in print:
Though webcomics are typically published primarily on the World Wide Web, webcomic creators often decide to also print self-published books of their work. In some cases, web cartoonists may get publishing deals in which comic books are created of their work. Sometimes, these books are published by mainstream comics publishers that traditionally aim at the direct market of comic book stores. Some web cartoonists may pursue print syndication in established newspapers or magazines.
Webcomics in print:
The traditional audience bases for webcomics and print comics are vastly different, and webcomic readers do not necessarily go to bookstores. For some web cartoonists, a print release may be considered the "goal" of a webcomic series, while for others, comic books are "just another way to get the content out." Webcomics have been seen by some artists as a potential new path towards syndication in newspapers. According to Jeph Jacques (Questionable Content), "there's no real money" in syndication for webcomic artists. Some artists are not able to syndicate their work in newspapers because their comics are targeted at a specific niche audience and would not be popular with a broader readership.
Non-anglophone webcomics:
Many webcomics are published primarily in English, this being a major language in Australia, Canada, India, the United States, and the United Kingdom. Cultures surrounding non-anglophone webcomics have thrived in countries such as China, France, India, Japan, and South Korea.
Non-anglophone webcomics:
Webcomics have been a popular medium in India since the early 2000s. Indian webcomics are successful because they reach a large audience for free, and they are frequently used by the country's younger generation to spread social awareness on topics such as politics and feminism. These webcomics achieve a large amount of exposure by being spread through social media. In China, webcomics have become a popular way to criticize the communist government and politicians in the country. Many webcomics by popular artists are shared around the country thanks to social networks such as Sina Weibo and WeChat. Many titles are often censored or taken down by the government.
**Light-dependent reactions**
Light-dependent reactions:
Light-dependent reactions are the photochemical reactions involved in photosynthesis, the main process by which plants acquire energy. There are two light-dependent reactions: the first occurs at photosystem II (PSII) and the second occurs at photosystem I (PSI). PSII absorbs a photon to produce a so-called high-energy electron, which transfers via an electron transport chain to cytochrome b6f and then to PSI. The then-reduced PSI absorbs another photon, producing a more highly reducing electron, which converts NADP+ to NADPH. In oxygenic photosynthesis, the first electron donor is water, creating oxygen (O2) as a by-product. In anoxygenic photosynthesis, various electron donors are used.
Light-dependent reactions:
Cytochrome b6f and ATP synthase work together to produce ATP (photophosphorylation) in two distinct ways. In non-cyclic photophosphorylation, cytochrome b6f uses electrons from PSII and energy from PSI to pump protons from the stroma to the lumen. The resulting proton gradient across the thylakoid membrane creates a proton-motive force, used by ATP synthase to form ATP. In cyclic photophosphorylation, cytochrome b6f uses electrons and energy from PSI to create more ATP and to stop the production of NADPH. Cyclic phosphorylation is important to create ATP and maintain NADPH in the right proportion for the light-independent reactions.
Light-dependent reactions:
The net reaction of all light-dependent reactions in oxygenic photosynthesis is: 2H2O + 2NADP+ + 3ADP + 3Pi → O2 + 2H+ + 2NADPH + 3ATP. PSI and PSII are light-harvesting complexes. If a special pigment molecule in a photosynthetic reaction center absorbs a photon, an electron in this pigment attains the excited state and then is transferred to another molecule in the reaction center. This reaction, called photoinduced charge separation, is the start of the electron flow and transforms light energy into chemical forms.
Light dependent reactions:
In chemistry, many reactions depend on the absorption of photons to provide the energy needed to overcome the activation energy barrier and hence can be labelled light-dependent. Such reactions range from the silver halide reactions used in photographic film to the creation and destruction of ozone in the upper atmosphere. This article discusses a specific subset of these, the series of light-dependent reactions related to photosynthesis in living organisms.
The reaction center:
The reaction center is in the thylakoid membrane. It transfers absorbed light energy to a dimer of chlorophyll pigment molecules near the periplasmic (or thylakoid lumen) side of the membrane. This dimer is called a special pair because of its fundamental role in photosynthesis. This special pair is slightly different in PSI and PSII reaction centers. In PSII, it absorbs photons with a wavelength of 680 nm, and is therefore called P680. In PSI, it absorbs photons at 700 nm and is called P700. In bacteria, the special pair is called P760, P840, P870, or P960. "P" here means pigment, and the number following it is the wavelength of light absorbed.
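Since the special pairs are named for their absorption maxima, the corresponding photon energies follow directly from E = hc/λ. A minimal Python sketch (rounded physical constants; the two wavelengths are those named above):

```python
# Photon energy at the absorption maxima of the two special pairs.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

for name, wavelength_nm in [("P680", 680), ("P700", 700)]:
    E = h * c / (wavelength_nm * 1e-9)       # E = hc / lambda
    print(f"{name}: {E / eV:.2f} eV per photon")
# P680: 1.82 eV, P700: 1.77 eV -- the longer-wavelength pair absorbs
# slightly less energetic photons.
```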
The reaction center:
Electrons in pigment molecules can exist at specific energy levels. Under normal circumstances, they are at the lowest possible energy level, the ground state. However, absorption of light of the right photon energy can lift them to a higher energy level. Any light that has too little or too much energy cannot be absorbed and is reflected. The electron in the higher energy level is unstable and will quickly return to its normal lower energy level. To do this, it must release the absorbed energy. This can happen in various ways. The extra energy can be converted into molecular motion and lost as heat, or re-emitted by the electron as light (fluorescence). The energy, but not the electron itself, may be passed onto another molecule; this is called resonance energy transfer. If an electron of the special pair in the reaction center becomes excited, it cannot transfer this energy to another pigment using resonance energy transfer. Under normal circumstances, the electron would return to the ground state, but because the reaction center is arranged so that a suitable electron acceptor is nearby, the excited electron is taken up by the acceptor. The loss of the electron gives the special pair a positive charge and, as an ionization process, further boosts its energy. The formation of a positive charge on the special pair and a negative charge on the acceptor is referred to as photoinduced charge separation. The electron can be transferred to another molecule. As the ionized pigment returns to the ground state, it takes up an electron and gives off energy to the oxygen evolving complex so it can split water into electrons, protons, and molecular oxygen (after receiving energy from the pigment four times). Plant pigments usually utilize the last two of these reactions to convert the sun's energy into their own.
The reaction center:
This initial charge separation occurs in less than 10 picoseconds (10⁻¹¹ seconds). In their high-energy states, the special pigment and the acceptor could undergo charge recombination; that is, the electron on the acceptor could move back to neutralize the positive charge on the special pair. Its return to the special pair would waste a valuable high-energy electron and simply convert the absorbed light energy into heat. In the case of PSII, this backflow of electrons can produce reactive oxygen species leading to photoinhibition. Three factors in the structure of the reaction center work together to suppress charge recombination nearly completely: Another electron acceptor is less than 1 nanometer away from the first acceptor, and so the electron is rapidly transferred farther away from the special pair.
The reaction center:
An electron donor is less than 1 nm away from the special pair, and so the positive charge is neutralized by the transfer of another electron.
The reaction center:
The electron transfer back from the electron acceptor to the positively charged special pair is especially slow. The rate of an electron transfer reaction increases with its thermodynamic favorability up to a point and then decreases. The back transfer is so favorable that it takes place in the inverted region, where electron-transfer rates become slower. Thus, electron transfer proceeds efficiently from the first electron acceptor to the next, creating an electron transport chain that ends when it has reached NADPH.
In chloroplasts:
The photosynthesis process in chloroplasts begins when an electron of P680 of PSII attains a higher energy level. This energy is used to reduce a chain of electron acceptors with successively higher redox potentials. This chain of electron acceptors is known as an electron transport chain. When this chain reaches PSI, an electron is again excited, producing another strongly reducing species. The electron transport chain of photosynthesis is often drawn in a diagram called the Z-scheme, because the redox diagram from P680 to P700 resembles the letter Z. The final product of PSII is plastoquinol, a mobile electron carrier in the membrane. Plastoquinol transfers the electron from PSII to the proton pump, cytochrome b6f. The ultimate electron donor of PSII is water. Cytochrome b6f passes the electrons on to PSI through plastocyanin molecules. PSI can continue the electron transfer in two different ways. It can transfer the electrons either to plastoquinol again, creating a cyclic electron flow, or to an enzyme called FNR (ferredoxin—NADP+ reductase), creating a non-cyclic electron flow. PSI releases FNR into the stroma, where it reduces NADP+ to NADPH.
In chloroplasts:
Activities of the electron transport chain, especially from cytochrome b6f, lead to pumping of protons from the stroma to the lumen. The resulting transmembrane proton gradient is used to make ATP via ATP synthase.
In chloroplasts:
The overall process of the photosynthetic electron transport chain in chloroplasts is: H2O → PSII → plastoquinol → cytochrome b6f → plastocyanin → PSI → NADPH. Photosystem II PSII is extremely complex, a highly organized transmembrane structure that contains a water-splitting complex, chlorophylls and carotenoid pigments, a reaction center (P680), pheophytin (a pigment similar to chlorophyll), and two quinones. It uses the energy of sunlight to transfer electrons from water to a mobile electron carrier in the membrane called plastoquinone; plastoquinol, in turn, transfers electrons to cyt b6f, which feeds them into PSI.
In chloroplasts:
The water-splitting complex The step H2O → P680 is performed by an imperfectly understood structure embedded within PSII called the water-splitting complex or oxygen-evolving complex (OEC). It catalyzes a reaction that splits water into electrons, protons and oxygen, using energy from P680+. The actual steps of the above reaction possibly occur in the following way (Kok's diagram of S-states): (I) 2H2O (monoxide), (II) OH. H2O (hydroxide), (III) H2O2 (peroxide), (IV) HO2 (superoxide), (V) O2 (dioxygen) (Dolai's mechanism). The electrons are transferred to special chlorophyll molecules (embedded in PSII) that are promoted to a higher-energy state by the energy of photons.
In chloroplasts:
The reaction center The excitation P680 → P680* of the reaction center pigment P680 occurs here. These special chlorophyll molecules embedded in PSII absorb the energy of photons, with maximal absorption at 680 nm. Electrons within these molecules are promoted to a higher-energy state. This is one of two core processes in photosynthesis, and it occurs with astonishing efficiency (greater than 90%) because, in addition to direct excitation by light at 680 nm, the energy of light first harvested by antenna proteins at other wavelengths in the light-harvesting system is also transferred to these special chlorophyll molecules.
In chloroplasts:
This is followed by the electron transfer P680* → pheophytin, and then on to plastoquinone, which occurs within the reaction center of PSII. The electrons are transferred to plastoquinone, which takes up two protons, generating plastoquinol; the plastoquinol is released into the membrane as a mobile electron carrier. This is the second core process in photosynthesis. The initial stages occur within picoseconds, with an efficiency of 100%. The seemingly impossible efficiency is due to the precise positioning of molecules within the reaction center. This is a solid-state process, not a typical chemical reaction. It occurs within an essentially crystalline environment created by the macromolecular structure of PSII. The usual rules of chemistry (which involve random collisions and random energy distributions) do not apply in solid-state environments.
In chloroplasts:
Link of water-splitting complex and chlorophyll excitation When the excited chlorophyll P680* passes the electron to pheophytin, it converts to high-energy P680+, which can oxidize the tyrosineZ (or YZ) molecule by ripping off one of its hydrogen atoms. The high-energy oxidized tyrosine gives off its energy and returns to the ground state by taking up a proton and removing an electron from the oxygen-evolving complex and ultimately from water. Kok's S-state diagram shows the reactions of water splitting in the oxygen-evolving complex.
In chloroplasts:
Summary PSII is a transmembrane structure found in all chloroplasts. It splits water into electrons, protons and molecular oxygen. The electrons are transferred to plastoquinol, which carries them to a proton pump. The oxygen is released into the atmosphere.
The emergence of such an incredibly complex structure, a macromolecule that converts the energy of sunlight into chemical energy and thus potentially useful work with efficiencies that are impossible in ordinary experience, seems almost magical at first glance. Thus, it is of considerable interest that, in essence, the same structure is found in purple bacteria.
In chloroplasts:
Cytochrome b6f PSII and PSI are connected by a transmembrane proton pump, cytochrome b6f complex (plastoquinol—plastocyanin reductase; EC 1.10.99.1). Electrons from PSII are carried by plastoquinol to cyt b6f, where they are removed in a stepwise fashion (re-forming plastoquinone) and transferred to a water-soluble electron carrier called plastocyanin. This redox process is coupled to the pumping of four protons across the membrane. The resulting proton gradient (together with the proton gradient produced by the water-splitting complex in PSII) is used to make ATP via ATP synthase.
In chloroplasts:
The structure and function of cytochrome b6f (in chloroplasts) is very similar to cytochrome bc1 (Complex III in mitochondria). Both are transmembrane structures that remove electrons from a mobile, lipid-soluble electron carrier (plastoquinone in chloroplasts; ubiquinone in mitochondria) and transfer them to a mobile, water-soluble electron carrier (plastocyanin in chloroplasts; cytochrome c in mitochondria). Both are proton pumps that produce a transmembrane proton gradient. In fact, cytochrome b6 and subunit IV are homologous to mitochondrial cytochrome b and the Rieske iron-sulfur proteins of the two complexes are homologous. However, cytochrome f and cytochrome c1 are not homologous.
In chloroplasts:
Photosystem I PSI accepts electrons from plastocyanin and transfers them either to NADPH (noncyclic electron transport) or back to cytochrome b6f (cyclic electron transport). In the noncyclic path, electrons flow plastocyanin → P700 → P700* → FNR → NADPH; in the cyclic path, they return from P700* via phylloquinone to b6f and then back through plastocyanin to P700. PSI, like PSII, is a complex, highly organized transmembrane structure that contains antenna chlorophylls, a reaction center (P700), phylloquinone, and a number of iron-sulfur proteins that serve as intermediate redox carriers.
In chloroplasts:
The light-harvesting system of PSI uses multiple copies of the same transmembrane proteins used by PSII. The energy of absorbed light (in the form of delocalized, high-energy electrons) is funneled into the reaction center, where it excites special chlorophyll molecules (P700, with maximum light absorption at 700 nm) to a higher energy level. The process occurs with astonishingly high efficiency.
In chloroplasts:
Electrons are removed from excited chlorophyll molecules and transferred through a series of intermediate carriers to ferredoxin, a water-soluble electron carrier. As in PSII, this is a solid-state process that operates with 100% efficiency.
In chloroplasts:
There are two different pathways of electron transport in PSI. In noncyclic electron transport, ferredoxin carries the electron to the enzyme ferredoxin NADP+ reductase (FNR) that reduces NADP+ to NADPH. In cyclic electron transport, electrons from ferredoxin are transferred (via plastoquinol) to a proton pump, cytochrome b6f. They are then returned (via plastocyanin) to P700. NADPH and ATP are used to synthesize organic molecules from CO2. The ratio of NADPH to ATP production can be adjusted by adjusting the balance between cyclic and noncyclic electron transport.
In chloroplasts:
It is noteworthy that PSI closely resembles photosynthetic structures found in green sulfur bacteria, just as PSII resembles structures found in purple bacteria.
In bacteria:
PSII, PSI, and cytochrome b6f are found in chloroplasts. All plants and all photosynthetic algae contain chloroplasts, which produce NADPH and ATP by the mechanisms described above. In essence, the same transmembrane structures are also found in cyanobacteria.
Unlike plants and algae, cyanobacteria are prokaryotes. They do not contain chloroplasts; rather, they bear a striking resemblance to chloroplasts themselves. This suggests that organisms resembling cyanobacteria were the evolutionary precursors of chloroplasts. One imagines primitive eukaryotic cells taking up cyanobacteria as intracellular symbionts in a process known as endosymbiosis.
In bacteria:
Cyanobacteria Cyanobacteria contain both PSI and PSII. Their light-harvesting system is different from that found in plants (they use phycobilins, rather than chlorophylls, as antenna pigments), but their electron transport chain, H2O → PSII → plastoquinol → b6f → cytochrome c6 → PSI → ferredoxin → NADPH (with a cyclic branch returning from PSI via plastoquinol to b6f), is in essence the same as the electron transport chain in chloroplasts. The mobile water-soluble electron carrier is cytochrome c6 in cyanobacteria; its role is filled by plastocyanin in plants. Cyanobacteria can also synthesize ATP by oxidative phosphorylation, in the manner of other bacteria. The electron transport chain is NADH dehydrogenase → plastoquinol → b6f → cyt c6 → cyt aa3 → O2, where the mobile electron carriers are plastoquinol and cytochrome c6, while the proton pumps are NADH dehydrogenase, cyt b6f and cytochrome aa3 (a member of the COX3 family).
In bacteria:
Cyanobacteria are the only bacteria that produce oxygen during photosynthesis. Earth's primordial atmosphere was anoxic. Organisms like cyanobacteria produced our present-day oxygen-containing atmosphere.
The other two major groups of photosynthetic bacteria, purple bacteria and green sulfur bacteria, contain only a single photosystem and do not produce oxygen.
In bacteria:
Purple bacteria Purple bacteria contain a single photosystem that is structurally related to PSII in cyanobacteria and chloroplasts: P870 → P870* → ubiquinone → cyt bc1 → cyt c2 → P870. This is a cyclic process in which electrons are removed from an excited chlorophyll molecule (bacteriochlorophyll; P870), passed through an electron transport chain to a proton pump (cytochrome bc1 complex, similar to the chloroplastic one), and then returned to the chlorophyll molecule. The result is a proton gradient that is used to make ATP via ATP synthase. As in cyanobacteria and chloroplasts, this is a solid-state process that depends on the precise orientation of various functional groups within a complex transmembrane macromolecular structure.
In bacteria:
To make NADPH, purple bacteria use an external electron donor (hydrogen, hydrogen sulfide, sulfur, sulfite, or organic molecules such as succinate and lactate) to feed electrons into a reverse electron transport chain.
In bacteria:
Green sulfur bacteria Green sulfur bacteria contain a photosystem that is analogous to PSI in chloroplasts: P840 → P840* → ferredoxin → NADH, with a cyclic branch returning from P840* via menaquinol to bc1 and cyt c553, and back to P840. There are two pathways of electron transfer. In cyclic electron transfer, electrons are removed from an excited chlorophyll molecule, passed through an electron transport chain to a proton pump, and then returned to the chlorophyll. The mobile electron carriers are, as usual, a lipid-soluble quinone and a water-soluble cytochrome. The resulting proton gradient is used to make ATP.
In bacteria:
In noncyclic electron transfer, electrons are removed from an excited chlorophyll molecule and used to reduce NAD+ to NADH. The electrons removed from P840 must be replaced. This is accomplished by removing electrons from H2S, which is oxidized to sulfur (hence the name "green sulfur bacteria").
Purple bacteria and green sulfur bacteria occupy relatively minor ecological niches in the present-day biosphere. They are of interest because of their importance in Precambrian ecologies, and because their methods of photosynthesis were the likely evolutionary precursors of those in modern plants.
History:
The first ideas about light being used in photosynthesis were proposed by Jan Ingenhousz in 1779, who recognized that it was sunlight falling on plants that was required, although Joseph Priestley had noted the production of oxygen without the association with light in 1772. Cornelis Van Niel proposed in 1931 that photosynthesis is one case of a general mechanism in which a photon of light is used to photodecompose a hydrogen donor, the hydrogen then being used to reduce CO2.
History:
Then in 1939, Robin Hill demonstrated that isolated chloroplasts could make oxygen but not fix CO2, showing that the light and dark reactions occur in different places. Although they are referred to as light and dark reactions, both of them take place only in the presence of light. This led later to the discovery of photosystems I and II. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Jones calculus**
Jones calculus:
In optics, polarized light can be described using the Jones calculus, invented by R. C. Jones in 1941. Polarized light is represented by a Jones vector, and linear optical elements are represented by Jones matrices. When light crosses an optical element, the resulting polarization of the emerging light is found by taking the product of the Jones matrix of the optical element and the Jones vector of the incident light.
Jones calculus:
Note that Jones calculus is only applicable to light that is already fully polarized. Light which is randomly polarized, partially polarized, or incoherent must be treated using Mueller calculus.
Jones vector:
The Jones vector describes the polarization of light in free space or another homogeneous isotropic non-attenuating medium, where the light can be properly described as transverse waves. Suppose that a monochromatic plane wave of light is travelling in the positive z-direction, with angular frequency ω and wave vector k = (0,0,k), where the wavenumber k = ω/c. Then the electric and magnetic fields E and H are orthogonal to k at each point; they both lie in the plane "transverse" to the direction of motion. Furthermore, H is determined from E by 90-degree rotation and a fixed multiplier depending on the wave impedance of the medium. So the polarization of the light can be determined by studying E. The complex amplitude of E is written

$$\begin{pmatrix} E_x(t) \\ E_y(t) \\ 0 \end{pmatrix} = \begin{pmatrix} E_{0x} e^{i(kz - \omega t + \phi_x)} \\ E_{0y} e^{i(kz - \omega t + \phi_y)} \\ 0 \end{pmatrix} = \begin{pmatrix} E_{0x} e^{i\phi_x} \\ E_{0y} e^{i\phi_y} \\ 0 \end{pmatrix} e^{i(kz - \omega t)}.$$
Jones vector:
Note that the physical E field is the real part of this vector; the complex multiplier carries the phase information. Here $i$ is the imaginary unit, with $i^2 = -1$. The Jones vector is

$$\begin{pmatrix} E_{0x} e^{i\phi_x} \\ E_{0y} e^{i\phi_y} \end{pmatrix}.$$
Thus, the Jones vector represents the amplitude and phase of the electric field in the x and y directions.
Jones vector:
The sum of the squares of the absolute values of the two components of Jones vectors is proportional to the intensity of light. It is common to normalize it to 1 at the starting point of calculation for simplification. It is also common to constrain the first component of the Jones vectors to be a real number. This discards the overall phase information that would be needed for calculation of interference with other beams.
Jones vector:
Note that all Jones vectors and matrices in this article employ the convention that the phase of the light wave is given by $\phi = kz - \omega t$, a convention used by Hecht. Under this convention, an increase in $\phi_x$ (or $\phi_y$) indicates retardation (delay) in phase, while a decrease indicates advance in phase. For example, a Jones vector component of $i$ ($= e^{i\pi/2}$) indicates retardation by $\pi/2$ (or 90 degrees) compared to 1 ($= e^{i0}$). Collett uses the opposite definition for the phase ($\phi = \omega t - kz$). Also, Collett and Jones follow different conventions for the definitions of handedness of circular polarization: Jones' convention is called "from the point of view of the receiver", while Collett's convention is called "from the point of view of the source". The reader should be wary of the choice of convention when consulting references on the Jones calculus.
Jones vector:
The six common examples of normalized Jones vectors describe horizontal (|H⟩), vertical (|V⟩), diagonal (|D⟩, +45°), antidiagonal (|A⟩, −45°), right-hand circular (|R⟩), and left-hand circular (|L⟩) polarization; their standard forms appear in the sketch below.
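The following NumPy sketch (a non-authoritative illustration) constructs these six vectors under the $\phi = kz - \omega t$ convention above; since the handedness of circular polarization is convention-dependent, as noted earlier, treat the R/L labels accordingly:

```python
import numpy as np

# The six standard normalized Jones vectors (phase convention phi = kz - wt;
# circular handedness labels depend on the convention in use).
H = np.array([1, 0], dtype=complex)                 # horizontal
V = np.array([0, 1], dtype=complex)                 # vertical
D = np.array([1, 1], dtype=complex) / np.sqrt(2)    # diagonal (+45 degrees)
A = np.array([1, -1], dtype=complex) / np.sqrt(2)   # antidiagonal (-45 degrees)
R = np.array([1, -1j], dtype=complex) / np.sqrt(2)  # right-hand circular
L = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # left-hand circular

for name, v in {"H": H, "V": V, "D": D, "A": A, "R": R, "L": L}.items():
    # The squared magnitudes sum to the (normalized) intensity.
    print(name, np.vdot(v, v).real)   # 1.0 for every vector
```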
Jones vector:
A general polarization state corresponds to a point on the surface of the Poincaré sphere (also known as the Bloch sphere) and is written as a ket |ψ⟩. When employing the Poincaré sphere, the basis kets (|0⟩ and |1⟩) must be assigned to opposing (antipodal) pairs of the kets listed above. For example, one might assign |0⟩ = |H⟩ and |1⟩ = |V⟩. These assignments are arbitrary. Opposing pairs are |H⟩ and |V⟩, |D⟩ and |A⟩, and |R⟩ and |L⟩. The polarization of any point not equal to |R⟩ or |L⟩ and not on the circle that passes through |H⟩, |D⟩, |V⟩, |A⟩ is known as elliptical polarization.
Jones matrices:
The Jones matrices are operators that act on the Jones vectors defined above. These matrices are implemented by various optical elements such as lenses, beam splitters, mirrors, etc. Each polarizer's matrix represents projection onto a one-dimensional complex subspace of the Jones vectors. Standard examples of Jones matrices for polarizers appear in the sketch below.
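As a stand-in for the original table, here is a short NumPy sketch with the standard matrices for a few common polarizers, applied to a diagonal input to recover Malus's law for this case (the circular-polarizer handedness again depends on the convention in use):

```python
import numpy as np

# Standard Jones matrices for common polarizers.
H_pol = np.array([[1, 0], [0, 0]], dtype=complex)          # linear, horizontal
V_pol = np.array([[0, 0], [0, 1]], dtype=complex)          # linear, vertical
P45   = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)    # linear, +45 degrees
R_pol = 0.5 * np.array([[1, 1j], [-1j, 1]], dtype=complex) # circular (right-hand here)

D = np.array([1, 1], dtype=complex) / np.sqrt(2)  # +45 degree linear input light
out = H_pol @ D                                   # Jones vector of the emerging light
print(np.vdot(out, out).real)  # 0.5: Malus's law, I = I0 * cos^2(45 deg)
```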
Phase retarders:
A phase retarder is an optical element that produces a phase difference between two orthogonal polarization components of a monochromatic polarized beam of light. Mathematically, using kets to represent Jones vectors, this means that the action of a phase retarder is to transform light with polarization

$$|P\rangle = c_1 |1\rangle + c_2 |2\rangle$$

to

$$|P'\rangle = c_1 e^{i\eta/2} |1\rangle + c_2 e^{-i\eta/2} |2\rangle,$$

where $|1\rangle, |2\rangle$ are orthogonal polarization components (i.e. $\langle 1|2\rangle = 0$) that are determined by the physical nature of the phase retarder. In general, the orthogonal components could be any two basis vectors. For example, the action of the circular phase retarder is such that

$$|1\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -i \end{pmatrix}, \qquad |2\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ i \end{pmatrix}.$$

However, linear phase retarders, for which $|1\rangle, |2\rangle$ are linear polarizations, are more commonly encountered in discussion and in practice. In fact, sometimes the term "phase retarder" is used to refer specifically to linear phase retarders.
Phase retarders:
Linear phase retarders are usually made out of birefringent uniaxial crystals such as calcite, MgF2 or quartz. Plates made of these materials for this purpose are referred to as waveplates. Uniaxial crystals have one crystal axis that is different from the other two crystal axes (i.e., ni ≠ nj = nk). This unique axis is called the extraordinary axis and is also referred to as the optic axis. An optic axis can be the fast or the slow axis for the crystal, depending on the crystal at hand. Light travels with a higher phase velocity along the axis that has the smallest refractive index, and this axis is called the fast axis. Similarly, the axis with the largest refractive index is called the slow axis, since the phase velocity of light is lowest along it. "Negative" uniaxial crystals (e.g., calcite CaCO3, sapphire Al2O3) have ne < no, so for these crystals the extraordinary axis (optic axis) is the fast axis, whereas for "positive" uniaxial crystals (e.g., quartz SiO2, magnesium fluoride MgF2, rutile TiO2), ne > no and thus the extraordinary axis (optic axis) is the slow axis. Other commercially available linear phase retarders exist and are used in more specialized applications. The Fresnel rhomb is one such alternative.
Phase retarders:
Any linear phase retarder with its fast axis defined as the x- or y-axis has zero off-diagonal terms and thus can be conveniently expressed as

$$\begin{pmatrix} e^{i\phi_x} & 0 \\ 0 & e^{i\phi_y} \end{pmatrix},$$

where $\phi_x$ and $\phi_y$ are the phase offsets of the electric fields in the x and y directions respectively. In the phase convention $\phi = kz - \omega t$, define the relative phase between the two waves as $\epsilon = \phi_y - \phi_x$. Then a positive $\epsilon$ (i.e. $\phi_y > \phi_x$) means that $E_y$ doesn't attain the same value as $E_x$ until a later time, i.e. $E_x$ leads $E_y$. Similarly, if $\epsilon < 0$, then $E_y$ leads $E_x$. For example, if the fast axis of a quarter waveplate is horizontal, then the phase velocity along the horizontal direction is ahead of the vertical direction, i.e., $E_x$ leads $E_y$. Thus $\phi_x < \phi_y$, which for a quarter waveplate yields $\phi_y = \phi_x + \pi/2$. In the opposite convention $\phi = \omega t - kz$, define the relative phase as $\epsilon = \phi_x - \phi_y$. Then $\epsilon > 0$ means that $E_y$ doesn't attain the same value as $E_x$ until a later time, i.e. $E_x$ leads $E_y$.

The Jones matrix for an arbitrary birefringent material is the most general form of a polarization transformation in the Jones calculus; it can represent any polarization transformation. In terms of the relative retardation $\eta$, the fast-axis orientation $\theta$, and the circularity $\phi$ (all defined below), one can show that it takes the form

$$\begin{pmatrix} e^{-i\eta/2} \cos^2\theta + e^{i\eta/2} \sin^2\theta & \left(e^{-i\eta/2} - e^{i\eta/2}\right) e^{-i\phi} \sin\theta \cos\theta \\ \left(e^{-i\eta/2} - e^{i\eta/2}\right) e^{i\phi} \sin\theta \cos\theta & e^{-i\eta/2} \sin^2\theta + e^{i\eta/2} \cos^2\theta \end{pmatrix}.$$

The above matrix is a general parametrization for the elements of SU(2), using the convention

$$\mathrm{SU}(2) = \left\{ \begin{pmatrix} \alpha & -\bar{\beta} \\ \beta & \bar{\alpha} \end{pmatrix} : \alpha, \beta \in \mathbb{C},\ |\alpha|^2 + |\beta|^2 = 1 \right\},$$

where the overline denotes complex conjugation.
Phase retarders:
Finally, recognizing that the set of unitary transformations on $\mathbb{C}^2$ can be expressed as

$$\left\{ e^{i\gamma} \begin{pmatrix} \alpha & -\bar{\beta} \\ \beta & \bar{\alpha} \end{pmatrix} : \alpha, \beta \in \mathbb{C},\ |\alpha|^2 + |\beta|^2 = 1,\ \gamma \in [0, 2\pi] \right\},$$

it becomes clear that the Jones matrix for an arbitrary birefringent material represents any unitary transformation, up to a phase factor $e^{i\gamma}$. Therefore, for appropriate choices of $\eta$, $\theta$, and $\phi$, a transformation between any two Jones vectors can be found, up to a phase factor $e^{i\gamma}$. However, in the Jones calculus, such phase factors do not change the represented polarization of a Jones vector, so they are either considered arbitrary or imposed ad hoc to conform to a set convention.
Phase retarders:
The special expressions for the phase retarders can be obtained by taking suitable parameter values in the general expression for a birefringent material. In the general expression: the relative phase retardation induced between the fast axis and the slow axis is given by $\eta = \phi_y - \phi_x$; $\theta$ is the orientation of the fast axis with respect to the x-axis; and $\phi$ is the circularity. Note that for linear retarders, $\phi = 0$, and for circular retarders, $\phi = \pm\pi/2$ and $\theta = \pi/4$. In general, for elliptical retarders, $\phi$ takes on values between $-\pi/2$ and $\pi/2$.
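A minimal NumPy sketch (a non-authoritative illustration under the $\eta = \phi_y - \phi_x$ convention above) that builds the diagonal retarder matrix and checks that a quarter-wave plate with horizontal fast axis converts +45° linear light into circularly polarized light:

```python
import numpy as np

def retarder_xy(eta):
    """Linear retarder with fast axis along x: diag(e^{-i eta/2}, e^{+i eta/2}),
    where eta = phi_y - phi_x is the relative retardation defined above."""
    return np.diag([np.exp(-1j * eta / 2), np.exp(1j * eta / 2)])

qwp = retarder_xy(np.pi / 2)                        # quarter-wave plate
D = np.array([1, 1], dtype=complex) / np.sqrt(2)    # +45 degree linear input
L = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # circular vector from the list above
out = qwp @ D
print(round(abs(np.vdot(L, out)), 6))  # 1.0: the output is circular, up to overall phase
```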
Axially rotated elements:
Assume an optical element has its optic axis perpendicular to the surface vector for the plane of incidence and is rotated about this surface vector by angle θ/2 (i.e., the principal plane through which the optic axis passes makes angle θ/2 with respect to the plane of polarization of the electric field of the incident TE wave). Recall that a half-wave plate rotates polarization by twice the angle between the incident polarization and the optic axis (principal plane). Therefore, the Jones matrix for the rotated polarization state, M(θ), is

$$M(\theta) = R(-\theta)\, M\, R(\theta), \qquad \text{where } R(\theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$
Axially rotated elements:
This agrees with the expression for a half-wave plate in the table above. These rotations are identical to the beam unitary splitter transformation in optical physics given by

$$R(\theta) = \begin{pmatrix} r & t' \\ t & r' \end{pmatrix},$$

where the primed and unprimed coefficients represent beams incident from opposite sides of the beam splitter. The reflected and transmitted components acquire phases $\theta_r$ and $\theta_t$, respectively. The requirements for a valid representation of the element are

$$\theta_t - \theta_r + \theta_{t'} - \theta_{r'} = \pm\pi \qquad \text{and} \qquad r^* t' + t^* r' = 0.$$
Axially rotated elements:
Both of these representations are unitary matrices fitting these requirements; and as such, are both valid.
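Continuing the earlier sketch with the same conventions (again an illustrative, non-authoritative implementation), the rotation rule M(θ) = R(−θ)MR(θ) reproduces the doubled-angle rotation of a half-wave plate noted above:

```python
import numpy as np

def rot(theta):
    """Rotation matrix R(theta) from the rule M(theta) = R(-theta) M R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]], dtype=complex)

def rotated_retarder(eta, theta):
    """Linear retarder with retardance eta and fast axis at angle theta."""
    M = np.diag([np.exp(-1j * eta / 2), np.exp(1j * eta / 2)])
    return rot(-theta) @ M @ rot(theta)

hwp = rotated_retarder(np.pi, np.pi / 8)  # half-wave plate, fast axis at 22.5 degrees
H = np.array([1, 0], dtype=complex)       # horizontal input
print(np.round(hwp @ H, 6))  # proportional to (1, 1)/sqrt(2): rotated by 2 * 22.5 = 45 degrees
```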
Arbitrarily rotated elements:
This would involve a three-dimensional rotation matrix. See Russell A. Chipman and Garam Yun for work done on this. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kawashima procedure**
Kawashima procedure:
The Kawashima procedure is used for congenital heart disease with a single effective ventricle and an interrupted inferior vena cava (IVC). It was first performed in 1978 and reported in 1984.
Procedure:
Technically it is very similar to the bidirectional Glenn procedure, used to direct half the body's venous blood flow into the lungs. However, in patients with an interrupted IVC, most of the blood from the lower body actually joins the blood from the upper body before returning to the heart via the superior vena cava (SVC). Therefore, the redirection of SVC blood to the lungs (as in the Glenn) results in much more than half the venous blood flow being diverted. After the Kawashima, the only de-oxygenated blood returning to the heart is from the abdominal organs (via the hepatic veins). As a result, there is much less hypoxia than after the Glenn, and the heart pumps less additional blood. However, the hypoxia can worsen over time (because of the development of microscopic AVMs in the lungs, which allow blood to pass through without being oxygenated), and therefore these children may still need a complete Fontan procedure in the end. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Internet Storage Name Service**
Internet Storage Name Service:
In computing, the proposed Internet Storage Name Service (iSNS) protocol allows automated discovery, management and configuration of iSCSI and Fibre Channel devices (using iFCP gateways) on a TCP/IP network.
Features:
iSNS provides management services similar to those found in Fibre Channel networks, allowing a standard IP network to operate in much the same way that a Fibre Channel storage area network does. Because iSNS is able to emulate Fibre Channel fabric services and manage both iSCSI and Fibre Channel devices, an iSNS server can be used as a consolidated configuration point for an entire storage network. However, the use of iSNS is optional for iSCSI while it is required for iFCP. Additionally, an iSNS implementation is not required by the standard to provide support for both of these protocols.
Components:
The iSNS standard defines four components:

The iSNS Protocol: iSNSP is a protocol that specifies how iSNS clients and servers communicate. It is intended to be used by various platforms, including switches and targets as well as server hosts.

iSNS Clients: iSNS clients are part of iSNSP-aware storage devices. iSNS clients initiate transactions with iSNS servers using the iSNSP, register device attribute information in a common Discovery Domain (DD), download information about other registered clients, and receive asynchronous notification of events that occur in their DD(s).

iSNS Servers: iSNS servers respond to iSNS protocol queries and requests made by iSNS clients using the iSNSP. iSNS servers initiate iSNSP State Change Notifications and store properly authenticated information submitted by a registration request in an iSNS database.

iSNS Databases: iSNS databases are the information repositories for iSNS server(s). They maintain information about iSNS client attributes; while implementations will vary, a directory-enabled implementation of iSNS, for example, might store client attributes in an LDAP directory.
Services:
An iSNS implementation provides four primary services: name registration and storage resource discovery; discovery domains and login control; state-change notification; and bidirectional mappings between Fibre Channel and iSCSI devices.

Name registration and storage resource discovery: iSNS implementations allow all entities in a storage network to register and query an iSNS database. Both targets and initiators can register with the iSNS database, and each entity can inquire about other initiators and targets. For example, a client initiator can obtain information about target devices from an iSNS server.
Services:
Discovery domains and login control: Administrators can use discovery domains to divide storage nodes into manageable, non-exclusive groups. By grouping storage nodes, administrators are able to limit the login process of each host to the most appropriate subset of targets registered with the iSNS, which allows the storage network to scale by reducing the number of unnecessary logins and by limiting the amount of time each host spends establishing login relationships.
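To make the scoping idea concrete, here is a small, hypothetical Python sketch that models discovery domains as plain sets; every device name is invented for the example, and this is a conceptual illustration, not the iSNSP wire protocol:

```python
# Hypothetical discovery-domain model: an initiator only "sees" targets
# that share at least one discovery domain (DD) with it.
discovery_domains = {
    "dd-finance": {"host-a", "array-1"},
    "dd-backup":  {"host-a", "host-b", "tape-1"},
    "dd-dev":     {"host-c", "array-2"},
}

targets = {"array-1", "array-2", "tape-1"}  # registered target nodes

def visible_targets(initiator: str) -> set:
    """Targets registered in at least one DD shared with the initiator."""
    visible = set()
    for members in discovery_domains.values():
        if initiator in members:
            visible |= members & targets
    return visible

print(visible_targets("host-a"))  # {'array-1', 'tape-1'} -- array-2 stays hidden
```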
Services:
Each target is able to use login control to delegate its access control and authorization policies to an iSNS server. Such delegation is intended to promote centralized management.
State-change notification: The state-change notification (SCN) service allows an iSNS server to issue notifications about each event that affects storage nodes on the managed network. Each iSNS client may register for notifications on behalf of its storage nodes, and each client is expected to respond according to its own requirements and implementation.
Services:
Bidirectional mappings between Fibre Channel and iSCSI devices: Because the iSNS database stores naming and discovery information about both Fibre Channel and iSCSI devices, iSNS servers are able to store mappings of Fibre Channel devices to proxy iSCSI device images on the IP network. These mappings may also be made in the opposite direction, allowing iSNS servers to store mappings from iSCSI devices to proxy World Wide Names (WWNs). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Asahi Health**
Asahi Health:
Asahi (or Asahi Health) is a Finnish health exercise based on the eastern traditions of T'ai chi ch'uan, qigong, yiquan and yoga, approached from a western scientific viewpoint. Asahi is designed to suit everybody, regardless of physical condition, age or education level. Asahi is taught and performed in instructed groups, but it can also be performed alone as a form of daily self-treatment, and it is well suited to short breaks; the exercise is equally effective in a group or alone, as one pleases.
The History of Asahi:
Asahi was created in Finland in 2004 by the professional sports instructors and martial artists Timo Klemola, Ilpo Jalamo, Keijo Mikkonen and Yrjö Mähönen. They all held classical body-development techniques such as karate, T'ai chi ch'uan, yiquan and yoga in high regard, but these styles, rewarding as they are, seemed to attract only a small minority of the Finnish population. These classical styles are quite complex and can therefore demand a high starting level. They also use concepts such as qi and prana, which may seem mystical to western people.
The History of Asahi:
The purpose of Asahi was to take the best of these techniques, put it in the most simplified form, give it an overall scientific grounding, and turn it into an easily approachable health exercise for everybody, with no starting level at all. Asahi is designed to treat and prevent shoulder and back problems, fractures caused by falls, and stress-related psychosomatic problems.
The Principles of Asahi:
Asahi is a series of slow movements, completed in silence. It looks harmonious and beautiful, a bit like qigong.
The Principles of Asahi:
The six basic principles of Asahi are: 1. the linking of movement and breath; 2. practicing vertically erect body alignment; 3. whole-body movement; 4. listening to the slow motion; 5. cultivating the mind with mental images; 6. the exercise as a continual, flowing experience. The Asahi movements are soft and performed in the rhythm of breathing. The series is simple and easy to learn. The movements also have a practical function, for example picking up a ball from the floor or improving one's balance by standing on one foot. Advanced levels are designed for long-term trainees, yet they are equally simple to learn.
Distribution:
Asahi can be practiced in most major areas of Finland. Asahi Health Ltd has also been accepted as an Education Partner of the Federation of International Sports, Aerobics and Fitness, the first body-mind product to be recognized and recommended by this organization. The exercises can be done in a class guided by a teacher, or through video instruction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Comparison of email clients**
Comparison of email clients:
The following tables compare general and technical features of notable email client programs.
General:
Basic general information about the clients: creator/company, O/S, licence, & interface. Clients listed on a light purple background are no longer in active development.
Release history:
A brief digest of the release histories.
Operating system support:
The operating systems on which the clients can run natively (without emulation).
Protocol support:
Communication and access protocol support: what email and related protocols and standards are supported by each client.
Integration protocol support
Authentication support
SSL and TLS support
Features:
Information on what features each of the clients support.
General features: For all of these clients, "HTML support" does not mean that they can process the full range of HTML that a web browser can handle. Almost all email readers limit HTML features, either for security reasons or because of the nature of the interface. CSS and JavaScript can be especially problematic.
Messages features
Database, folders and customization
Templates, scripts and programming languages
Internationalization:
The Bat! supports Email Address Internationalization (EAI). As of October 2016, email clients supporting SMTPUTF8 included Outlook 2016, Mail for iOS, and Mail for Android. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Transverse measure**
Transverse measure:
In mathematics, a measure on a real vector space is said to be transverse to a given set if it assigns measure zero to every translate of that set, while assigning finite and positive (i.e. non-zero) measure to some compact set.
Definition:
Let V be a real vector space together with a metric space structure with respect to which it is complete. A Borel measure μ is said to be transverse to a Borel-measurable subset S of V if there exists a compact subset K of V with 0 < μ(K) < +∞, and μ(v + S) = 0 for all v ∈ V, where v + S = { v + s ∈ V | s ∈ S } is the translate of S by v. The first requirement ensures that, for example, the trivial measure is not considered to be a transverse measure.
Example:
As an example, take V to be the Euclidean plane R2 with its usual Euclidean norm/metric structure. Define a measure μ on R2 by setting μ(E) to be the one-dimensional Lebesgue measure of the intersection of E with the first coordinate axis: μ(E) = λ¹({ x ∈ R | (x, 0) ∈ E ⊆ R2 }).
Example:
An example of a compact set K with positive and finite μ-measure is K = B1(0), the closed unit ball about the origin, which has μ(K) = 2. Now take the set S to be the second coordinate axis. Any translate (v1, v2) + S of S will meet the first coordinate axis in precisely one point, (v1, 0). Since a single point has Lebesgue measure zero, μ((v1, v2) + S) = 0, and so μ is transverse to S. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
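A small numeric illustration of this example, restricted to axis-aligned rectangles (a simplifying assumption made only for the sketch; a translate of the second coordinate axis is treated as a degenerate rectangle):

```python
def mu(a, b, c, d):
    """mu(E) for the rectangle E = [a, b] x [c, d]: the Lebesgue length of
    E's slice along the first coordinate axis (zero unless E straddles y = 0)."""
    return (b - a) if c <= 0 <= d else 0.0

# A compact set with positive, finite measure: the unit square about the origin.
print(mu(-1, 1, -1, 1))  # 2.0

# Any translate of S (the second coordinate axis) is a vertical line,
# i.e. a degenerate rectangle of zero width, so it gets measure zero.
v1 = 0.7
print(mu(v1, v1, float("-inf"), float("inf")))  # 0.0
```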
**DISC2**
DISC2:
In molecular biology, disrupted in schizophrenia 2 (non-protein coding), also known as DISC2, is a long non-coding RNA molecule. In humans, the DISC2 gene that produces the DISC2 RNA molecule is located on chromosome 1, at the breakpoint associated with the chromosomal translocation found in Schizophrenia. It is antisense to the DISC1 gene and may regulate the expression of DISC1. DISC2 may also contribute to other psychiatric disorders. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Renewable thermal energy**
Renewable thermal energy:
Renewable thermal energy is the technology of gathering thermal energy from a renewable energy source for immediate use or for storage in a thermal battery for later use.
Renewable thermal energy:
The most popular source of renewable thermal energy is the sun; solar energy is harvested by solar collectors to heat water, buildings, pools and various processes. Another example of renewable thermal is a geothermal or ground-source heat pump (GHP) system, where thermal energy stored in the ground during the summer is extracted to heat a building in another season. Such a system is "renewable" because the source of excess heat energy is a reliably recurring process that occurs each summer season.
History of Renewable Thermal Systems:
Solar energy was used for centuries to heat dwellings and to produce hot water before low-cost natural gas was discovered. It gained renewed attention during and after the oil embargo of 1973, as engineers investigated ways to produce thermal energy from renewable sources instead of fossil fuels.
History of Renewable Thermal Systems:
The history of utilizing the ground as a heat source is more recent, and the practice has gained prominence in recent years, especially in rural areas where natural gas heating may not be available. The outer crust of the Earth is a thermal battery that maintains a median temperature equal to the average air temperature at that location. This "average ground temperature" is a balance of solar gain from the sun, thermal gain from the core of the earth, and heat loss due to conduction, evaporation, and radiation. The graphic at the right shows a map of the average ground temperature at locations within the United States.
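The damping of the seasonal temperature swing with depth can be made concrete with the standard one-dimensional heat-diffusion model for soil. The following Python sketch uses illustrative, assumed values (the mean temperature, surface swing, and soil diffusivity below are not from this article):

```python
import numpy as np

T_mean = 11.0    # assumed average ground temperature, deg C
A_surface = 12.0 # assumed annual surface temperature swing, deg C
alpha = 0.07     # assumed soil thermal diffusivity, m^2/day
omega = 2 * np.pi / 365.0        # annual angular frequency, 1/day
d = np.sqrt(2 * alpha / omega)   # damping depth, ~2.9 m here

for z in [0.0, 2.0, 5.0]:
    swing = A_surface * np.exp(-z / d)  # amplitude decays exponentially with depth
    print(f"depth {z} m: swing ~ +/-{swing:.1f} deg C around {T_mean} deg C")
# The swing shrinks rapidly with depth, which is why the ground behaves as a
# thermal battery sitting near the mean annual air temperature.
```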
Types:
Solar-based renewable thermal
Ground-based renewable thermal
Seasonal thermal energy storage
Policy by geography:
New York State The state of New York took a big step in September 2015 when it created a new office titled Director of Renewable Thermal. The NY Director of Renewable Thermal oversees a team that helps companies develop and implement renewable, low-carbon cooling and heating systems. New York State considers this initiative a critical component of NYSERDA's strategy to enable net-zero energy buildings, which produce the same amount of energy as they consume. It will also further advance New York's progress toward creating self-sustaining energy markets for clean, renewable technologies. Renewable thermal has been a core resource in many states' Renewable Portfolio Standards. The report says: "State Renewable Portfolio Standard (RPS) programs have historically focused on electricity generation. However, some states have started incorporating renewable thermal power for heat generation into their RPS as a way to support the development and market growth of solar thermal, biomass thermal, geothermal, and other renewable thermal technologies." The plan notes that "renewable thermal energy has many of the same benefits as other renewable technologies, including improved air quality, economic development and job creation, and the promotion of regional energy security." An industry publication described on-site combustion as "responsible for 35 percent of fossil fuel greenhouse gas emissions in New York State." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Accident on the Rampe de Laffrey**
Accident on the Rampe de Laffrey:
The rampe de Laffrey (sometimes called the descente de Laffrey or the côte de Laffrey) is a section of France's Route nationale 85, itself part of the route Napoléon. It connects the communes of Laffrey and Vizille in the department of Isère, about 15 kilometers southeast of Grenoble.
Accident on the Rampe de Laffrey:
This steep and mostly relatively straight section of road ends in a sharp turn, and it is known for the high number of fatal accidents that have occurred in this final curve. Four of them, in 1946, 1973, 1975, and 2007, involved buses of pilgrims returning from Notre Dame de la Salette, and are among the deadliest in French history.
Design of the ramp:
The slope begins in the center of the village of Laffrey at a height of 910 meters, right on the edge of the Matheysine plateau. It descends more than 600 meters along the mountainside, passing through the territories of Saint-Pierre-de-Mésage and Notre-Dame-de-Mésage, then veers sharply to the right at the bridge over the Romanche as it enters the town of Vizille, at a height of 300 meters. The road is only slightly sinuous: it contains many broad turns near its beginning while remaining relatively straight near its end. Its steep slope is its most notable feature, averaging 12% along its lower portion and reaching 16 to 18% in some places; it finishes with a 110° turn before the bridge over the Romanche. Because of its unusual design, Berliet used it until 1970 for testing its trucks. It was also used for motorcycle races beginning in 1960, and at one time was considered a possibility for an Alpine stage of the Tour de France.
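The recurring brake-failure pattern in the accidents listed below is easy to quantify roughly: a descending vehicle must dissipate its full potential energy as heat, almost entirely in the friction brakes if engine braking or a retarder is not used. A back-of-the-envelope Python sketch (the 610 m drop follows from the elevations above; the coach mass is an assumed, illustrative figure):

```python
g = 9.81            # gravitational acceleration, m/s^2
drop_m = 910 - 300  # total descent from Laffrey to Vizille, metres
mass_kg = 18_000    # assumed mass of a loaded coach (illustrative, not from the article)

energy_mj = mass_kg * g * drop_m / 1e6  # potential energy to dissipate, megajoules
print(f"~{energy_mj:.0f} MJ of heat")   # ~108 MJ
# Sustained dissipation on this scale overheats friction brakes and causes
# fade, consistent with the lost-brakes pattern in the accident reports.
```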
Accidents:
Many accidents have taken place along this stretch of road; at least 150 deaths have been recorded at the site between 1946 and 2007, primarily of pilgrims returning from Our Lady of La Salette. It is the deadliest roadway section anywhere in France.
In 1946, a bus transporting pilgrims from Our Lady of La Salette crashed into a ravine, killing 18; today a memorial to the dead stands along the roadside near Saint-Pierre-de-Mésage.
In 1956, a Dutch bus suffered the same fate at the same place; seven were killed.
In 1968, a truck flew off the road, killing its two occupants.
In 1970, another bus transporting pilgrims flew over several walls before coming to a stop; five passengers, from Nord, were killed. The cause of this accident was later determined to be excessive speed.
Another bus full of pilgrims, returning again from Our Lady of La Salette, crashed near the base of the road in 1973; 43 Belgians were killed.
In 1974, a truck without brakes hit a car, killing four.
Another bus returning from Our Lady of La Salette crashed near the base of the road in 1975, in the same location as the previous bus; 29 were killed.
In 2007, yet another bus full of pilgrims, this time from Poland, crashed on the road, killing 26.
Accidents:
1946 French bus accident The 1946 accident was the first in a string of fatal accidents along this stretch of road, which has been claimed as one of the deadliest in France. 18 people were killed when a bus lost the use of its brakes, flying off the road and into a ravine along the Romanche; the bus was transporting pilgrims from Beaujolais on a return journey from Our Lady of La Salette, where they had been celebrating the Marian Year. A memorial to the dead was later erected near the town of Saint-Pierre-de-Mésage, close to the accident site.
Accidents:
1973 Belgian bus accident The July 18, 1973 accident remains, as of 2007, the worst ever to have occurred along that stretch of roadway. A bus carrying Belgian pilgrims from Braine-le-Comte was returning from a visit to the shrine of Our Lady of La Salette; it missed a curve at the base of the road, near its intersection with the bridge over the Romanche, and overturned. Forty-three people were killed and six injured in the resulting crash. After the crash, the mayor of Laffrey condemned the route as being particularly dangerous, as it had already claimed over one hundred lives over the previous quarter-century. The accident is sometimes referred to as the accident de Vizille because it occurred very close to the entrance of the town of that name; the crash site, however, is actually located within the boundaries of the commune of Notre-Dame-de-Mésage. Today a memorial to the victims stands at the site of the accident; it gives the number of dead as forty-five.
Accidents:
1975 French bus accident In an April 2, 1975 accident, 29 pilgrims from Sully-sur-Loire in Loiret were killed when their bus lost its brakes at the bottom of the road, causing it to fly over a ravine at a speed estimated at 120 kilometers an hour; it then crashed into a garden and overturned. As a result of this crash, more careful regulation of electronic braking systems was instituted across France, as were annual inspections of heavy vehicles.
Accidents:
2007 Polish bus accident In circumstances similar to those of the two previous accidents, on July 22, 2007, at 9:30 am, a Polish bus carrying fifty people apparently lost the use of its brakes at the base of the hill. It missed the final curve of the descent and overturned into a ravine near the Romanche, where it immediately burst into flames. The accident occurred at almost the same spot at which a Belgian bus suffered the same fate in 1973, killing 43. Provisional figures as of August 9 put the death toll at 26, with 24 injured, 9 of them seriously, and 3 in intensive care. The accident provoked an outpouring of public support both in France and in Poland. French Prime Minister François Fillon and Jean-Louis Borloo, then the Minister of Ecology and in charge of transport, immediately went to the scene of the accident. Polish president Lech Kaczyński quickly came to Grenoble, where he was met by French president Nicolas Sarkozy; together, the two men visited the bedsides of several victims who had been transported to various local hospitals.
Accidents:
The bus, loaded with pilgrims from Szczecin, Świnoujście, Warsaw and Stargard Szczeciński, had left Poland on July 10 for the start of a tour of Marian sanctuaries in southern Europe. It had made stops at Our Lady of Fatima in Portugal and at Our Lady of Lourdes, and finished at the sanctuary of Our Lady of La Salette. It crashed on its return trip to Poland; 47 pilgrims, a chaperone, and two drivers, all Polish, were on board.
Accidents:
Inquiry An inquiry immediately established that the bus should never have been driving along the Rampe in the first place, as use of the road is severely restricted, and forbidden to heavy vehicles without local authorization. This authorization was rarely given, and only for specially equipped vehicles on local transport routes. Eyewitness testimony from survivors indicated that the driver, who, at 22, was the younger of the two assigned to the tour and had had his driver's license for only 10 months, had voluntarily ignored the itinerary. He had chosen instead to follow a shorter route indicated on his GPS, in the process passing no fewer than 14 signs indicating that passage on the road was forbidden to heavy vehicles. Both the other driver and the chaperone were severely injured but survived the crash.
Accidents:
The bus involved in the accident was a Scania that had entered into circulation in July 2000. According to the Polish operator of the tour, it had passed a technical inspection in Germany three weeks prior to the accident without question. It is not known if the bus was equipped with a speed-reduction system; similar recently built vehicles have been designed with a backup system, either electromagnetic or hydraulic, in addition to the usual brakes.
Accidents:
Motorists who were following the bus during its descent indicated that the brake lights appeared to be working normally. Other drivers, however, indicated that they had seen sparks coming from the undercarriage, suggesting that the brakes may have indeed failed. In addition, a survivor said in her testimony that the driver had warned passengers during the descent, crying out that the brakes had gone; she also said that just before this she had heard something crack under the bus.
Accidents:
Aftermath At a press conference on July 25, Prime Minister Fillon announced that additional restrictions along the roadway would immediately be planned. Flashing signs were to be installed, as were speed bumps at the level of the existing signs. At the end of September, special gantries were also to have been placed, preventing the entry of vehicles over a certain height. A barrier was also planned, operated by a magnetic card designed to recognize only certain service vehicles authorized to use the road. By January 2008, no gantries had been placed at the site, and only a few of the signs were ready for use. Furthermore, there was a good deal of evidence that unauthorized vehicles still used the road. The gantries were inaugurated in July 2008. On October 8, 2007, President Kaczyński presented a special decoration to 32 people who had participated in rescue efforts after the accident. The ceremony took place at the Polish embassy in Paris.
Accidents:
Regulations and signage up to July 2007
After a pair of accidents in the 1970s, the route was heavily reworked to make it safer for light vehicles and load-bearing vehicles, but modifications to safely accommodate buses were considered too expensive and difficult. The road was widened, and several sections near the summit were expanded to three lanes. Vehicles over eight tons and buses, except those serving regular local routes, were banned from using the road without specific authorization from the local prefect. Those local and regular services are allowed only on specially designed vehicles with speed-reducers. Buses and trucks coming from the regular route are requested to exit at La Mure and to take secondary road 529 past the massif of Conest towards Grenoble. Many violations of this rule have been noted, though. To discourage violators, a sign depicting a skull with flickering eyes was formerly installed at the top of the road; however, it was soon removed after being considered in poor taste and politically incorrect.
Accidents:
Changes after the accident of July 22, 2007
On July 25, 2007, as a result of the most recent accident on the ramp, French Prime Minister François Fillon held a press conference to announce a series of measures to prevent such a heavy vehicle from attempting the descent again. Flashing signs were installed within days, as were speed bumps at the level of the road signs, designed to ensure driver attention to these signs. In July 2008, height gauges were also set up to physically prevent access for vehicles over a certain height. Authorised vehicles, such as local buses equipped with an improved braking system, are issued with a magnetic card allowing them to bypass the height gauge.
Appearances in Tour de France:
The section was first included in the Tour de France in 1951 and has since featured 8 times, most recently in 2010. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Object Manager (Windows)**
Object Manager (Windows):
Object Manager (internally called Ob) is a subsystem implemented as part of the Windows Executive which manages Windows resources. Resources, which are surfaced as logical objects, each reside in a namespace for categorization. Resources can be physical devices, files or folders on volumes, Registry entries or even running processes. All objects representing resources have an Object Type property and other metadata about the resource. Object Manager is a shared resource, and all subsystems that deal with the resources have to pass through the Object Manager.
Architecture:
Object Manager is the centralized resource broker in the Windows NT line of operating systems, which keeps track of the resources allocated to processes. It is resource-agnostic and can manage any type of resource, including device and file handles. All resources are represented as objects, each belonging to a logical namespace for categorization and having a type that represents the kind of resource and exposes its capabilities and functionality via properties. An object is kept available until all processes are done with it; Object Manager maintains the record of which objects are currently in use via reference counting, as well as the ownership information. Any system call that changes the state of resource allocation to processes goes via the Object Manager.
Architecture:
Objects can either be Kernel objects or Executive objects. Kernel objects represent primitive resources such as physical devices, or services such as synchronization, which are required to implement any other type of OS service. Kernel objects are not exposed to user mode code, but are restricted to kernel code. Applications and services running outside the kernel use Executive objects, which are exposed by the Windows Executive, along with its components such as the memory manager, scheduler and I/O subsystem. Executive objects encapsulate one or more kernel objects and expose not only the kernel and kernel-mediated resources, but also an expanded set of services beyond those the kernel provides. Applications themselves can wrap one or more Executive objects and surface objects that offer certain services. Executive objects are also used by the environment subsystems (such as the Win32 subsystem, the OS/2 subsystem, the POSIX subsystem, etc.) to implement the functionality of the respective environments.
Architecture:
Whenever an object is created or opened, a reference to the instance, called a handle, is created. Object Manager indexes objects both by their names and by their handles, but referencing an object by its handle is faster because name translation can be skipped. Handles are associated with processes (via an entry in the process's handle table listing the handles it owns) and can be transferred between processes as well. A process must own a handle to an object before using it, and can own a maximum of 16,000,000 handles at one time. During creation, a process gains handles to a default set of objects. While different types of handles exist - file handles, event handles, process handles - they serve only to identify the type of the target object, not to distinguish the operations that can be performed through them, which provides consistency in how various object types are handled programmatically. Handle creation and resolution of objects from handles are solely mediated by Object Manager, so no resource usage goes unnoticed by it.
Architecture:
Windows NT exposes a number of types of Executive objects.
Object structure
Each object managed by the Object Manager has a header and a body; the header contains state information used by Object Manager, whereas the body contains the object-specific data and the services it exposes. An object header contains certain data, exposed as Properties, such as Object Name (which identifies the object), Object Directory (the category the object belongs to), Security Descriptors (the access rights for the object), Quota Charges (the resource usage information for the object), Open handle count (the number of times a handle, an identifier to the object, has been opened), Open handle list (the list of processes which have a live reference to the object), its Reference count (the number of live references to the object), and the object's Type (an object that identifies the structure of the object body).
Architecture:
A Type object contains properties unique to the type of the object as well as static methods that implement the services offered by the object. Objects managed by Object Manager must at least provide a predefined set of services: Close (which closes a handle to an object), Duplicate (create another handle to the object with which another process can gain shared access to the object), Query object (gather information about its attributes and properties), Query security (get the security descriptor of the object), Set security (change the security access), and Wait (to synchronize with one or more objects via certain events). Type objects also have some common attributes, including the type name, whether they are to be allocated in non-paged memory, access rights, and synchronization information. All instances of the same type share the same type object, and the type object is instantiated only once. A new object type can be created by endowing an object with Properties to expose its state and methods to expose the services it offers.
Architecture:
Object name is used to give a descriptive identity to an object, to aid in object lookup. Object Manager maintains the list of names already assigned to objects being managed, and maps the names to the instances. Since most object accesses occur via handles, it is not always necessary to look up the name to resolve the object reference. Lookup is only performed when an object is created (to make sure the new object has a unique name), or when a process explicitly accesses an object by its name. Object directories are used to categorize objects according to their types. Predefined directories include \?? (device names), \BaseNamedObjects (mutexes, events, semaphores, waitable timers, and section objects), \Callback (callback functions), \Device, \Driver, \FileSystem, \KnownDlls, \Nls (language tables), \ObjectTypes (type objects), \RPC Control (RPC ports), \Security (security subsystem objects), and \Windows (windowing subsystem objects). Objects also belong to a namespace. Each user session is assigned a different namespace: objects shared between all sessions are in the GLOBAL namespace, and session-specific objects are in the corresponding session namespaces.
OBJECT_ATTRIBUTES structure:
The Attributes member can be zero, or a combination of the following flags: OBJ_INHERIT, OBJ_PERMANENT, OBJ_EXCLUSIVE, OBJ_CASE_INSENSITIVE, OBJ_OPENIF, OBJ_OPENLINK, OBJ_KERNEL_HANDLE.
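For illustration, here is a minimal ctypes sketch of this structure. The field layout and flag values follow the publicly documented winternl.h/ntdef.h definitions; the snippet only builds the structure in memory and deliberately does not call any native API.

```python
import ctypes

class UNICODE_STRING(ctypes.Structure):
    # Counted Unicode string used by native NT APIs for object names.
    _fields_ = [("Length",        ctypes.c_ushort),
                ("MaximumLength", ctypes.c_ushort),
                ("Buffer",        ctypes.c_wchar_p)]

class OBJECT_ATTRIBUTES(ctypes.Structure):
    # Layout as documented for winternl.h; HANDLE modelled as a void pointer.
    _fields_ = [("Length",                   ctypes.c_ulong),
                ("RootDirectory",            ctypes.c_void_p),
                ("ObjectName",               ctypes.POINTER(UNICODE_STRING)),
                ("Attributes",               ctypes.c_ulong),
                ("SecurityDescriptor",       ctypes.c_void_p),
                ("SecurityQualityOfService", ctypes.c_void_p)]

# Attribute flag values as published in ntdef.h.
OBJ_INHERIT          = 0x00000002
OBJ_PERMANENT        = 0x00000010
OBJ_EXCLUSIVE        = 0x00000020
OBJ_CASE_INSENSITIVE = 0x00000040
OBJ_OPENIF           = 0x00000080
OBJ_OPENLINK         = 0x00000100
OBJ_KERNEL_HANDLE    = 0x00000200

attrs = OBJECT_ATTRIBUTES()
attrs.Length = ctypes.sizeof(OBJECT_ATTRIBUTES)
attrs.Attributes = OBJ_CASE_INSENSITIVE | OBJ_OPENIF
```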
Usage:
Object Manager paths are available to many Windows API file functions, although Win32 names like \\?\ and \\.\ for the local namespaces suffice for most uses. Using the former in Win32 user-mode functions translates directly to \??, but using \?? directly is still different, as this NT form does not turn off pathname expansion. Tools that serve as explorers of the Object Manager namespaces are available; these include the 32-bit WinObj from Sysinternals and the 64-bit WinObjEx64. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Zygomaticofacial nerve**
Zygomaticofacial nerve:
The zygomaticofacial nerve (or zygomaticofacial branch of zygomatic nerve or malar branch of zygomatic nerve) is a cutaneous (sensory) branch of the maxillary nerve (CN V2) that arises within the orbit. The zygomaticofacial nerve penetrates the inferolateral angle of the orbit, emerging into the face through the zygomaticofacial foramen, then penetrates the orbicularis oculi muscle to reach and innervate the skin of the prominence of the cheek.
Anatomy:
Communications
The zygomaticofacial nerve forms a nerve plexus with the zygomatic branches of the facial nerve (CN VII) and the inferior palpebral branches of the maxillary nerve (V2).
Variation
The nerve may sometimes be absent. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Harp trap**
Harp trap:
A harp trap is a device used to capture bats without subjecting them to the entanglement, and subsequent disentangling, associated with traps like mist nets and hand nets. It capitalizes on bats' flight characteristic of turning perpendicular to the ground to pass between obstacles, in this case the trap's strings; in this attitude they cannot maintain their angle of flight, and they drop unharmed into a collection chamber. Invented in 1958 by US Public Health Service veterinarian Denny Constantine, the harp trap has been modified for different applications and efficiencies by users, including Merlin Tuttle's double harp trap in 1974, Charles Francis' 4-frame harp trap in 1989, and other modifications improving collapsibility and portability. The harp trap is a significant tool for measuring aspects of bat ecology, most notably for obtaining information about bat populations and movement for public health and conservation management purposes. Even though visually apparent when set out in the open, harp traps are effective if placed where natural features funnel bats toward the trap. They can be set across flyways in heavily wooded areas, over small bodies of water, and at roost entrances, and can be left unattended for periods of time, allowing multiple sites to be worked simultaneously. They can be more efficient for surveying bats than mist nets, capturing higher numbers of species and individuals. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Procyanidin dimer**
Procyanidin dimer:
Proanthocyanidin dimers are a specific type of proanthocyanidin, a class of flavonoids. They are oligomers of flavan-3-ols.
Types include dimeric B-type proanthocyanidins and dimeric A-type proanthocyanidins. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Loveline**
Loveline:
Loveline is a syndicated radio call-in program in North America, offering medical and relationship advice to listeners, often with the assistance of guests, typically actors and musicians. Its host through most of its run was Dr. Drew Pinsky, who was paired with a radio personality.
Loveline was broadcast live, Sundays through Thursdays at 10pm–midnight PT (Mondays through Fridays at 1am–3am ET). Its flagship station was KROQ-FM in Los Angeles. Syndication was usually on rock, alternative, and adult talk radio stations. Loveline can also be heard online through the websites of affiliate stations.
The radio show was discontinued in April 2016. After a hiatus, the show was rebooted as a podcast with Amber Rose and clinical psychologist and sex therapist, Chris Donaghue, as hosts. The renewed podcast ran from September 8, 2016, until March 17, 2018.
Loveline:
On November 1, 2018, Loveline was revived on LGBTQ network Channel Q with Dr. Chris Donaghue as the solo host, heard Monday through Thursday from 7 to 9 p.m. Eastern Time. Channel Q is owned by Audacy, Inc. and heard on the company's namesake streaming service and on the HD Radio subchannels of about 20 FM stations in New York City, Los Angeles, Chicago and other large media markets.
Loveline:
During its peak of popularity, Loveline also doubled as a weekly live audience television program on MTV, also called Loveline. It was presented by Pinsky, Adam Carolla and a third co-host.
History:
Loveline began in 1983 as a Sunday night dating and relationships segment on Los Angeles radio station KROQ-FM, hosted by DJ Jim "Poorman" Trenton, DJ "Swedish" Egil Aalvik, and Scott Mason.
History:
In 1984, Trenton added a segment called "Ask a Surgeon," hosted by his friend Drew Pinsky, who at the time, was a fourth-year medical student at the University of Southern California. The medical segment was pre-dated by an occasional legal segment in which a lawyer, known as "Lawyer Lee" would be present to answer legal questions. As Loveline developed and increased its audience, Pinsky became a public figure in his own right, and the show began referring to him informally as "Dr. Drew".
History:
After a traumatic break up, Mason announced that he would no longer be hosting the show. After they stopped doing the "Lawyer Lee" segment and "Swedish" Egil left the show, Trenton continued co-hosting the show with Dr. Drew. In February 1992, the show expanded from Sunday nights to five nights a week, Sunday through Thursday. In August 1993 Trenton was replaced by former MTV VJ Riki Rachtman.
History:
Pinsky and Rachtman were joined by Adam Carolla in October 1995, as the show was first being syndicated nationally. The trio hosted together for several months, but Carolla and Rachtman often competed for airtime, leading Rachtman to resign in January 1996. Carolla and Pinsky would go on to host the show together until Carolla's departure in November 2005.
History:
The popularity and reach of Loveline increased dramatically in the ten years during which it was hosted by Pinsky and Carolla. The two had a natural chemistry, in which Carolla's jocular tone emphasized Pinsky's reasoned expertise. Together, they refined the format of the show, and capitalized on their growing popularity with speaking tours, a television show on MTV from 1996–2000 (also titled Loveline), a book, and cameo appearances on television series and movies. In November 2005, Carolla left Loveline to prepare for a new morning radio show, The Adam Carolla Show, which began airing in January 2006.
History:
After Carolla's departure, he was substituted on a temporary basis by numerous celebrity guests, some of whom announced their desire to take the job permanently. During his first appearance on Carolla's new morning show, Pinsky revealed that the shortlist of candidates included Carson Daly, Joel McHale, Danny Bonaduce, Steve-O and Daniel Tosh. On July 23, 2006, KROQ-FM disc jockey Stryker was hired as Pinsky's co-host.
History:
On April 22, 2009, Stryker announced that due to financial cutbacks at Westwood One, he would be leaving the show and it would be his last appearance that night. After Stryker's departure, a number of celebrities guest co-hosted opposite Drew. On March 11, 2010, it was announced that Mike "Psycho Mike" Catherwood from The Kevin and Bean Show would co-host Loveline with Dr. Drew.
History:
After a long stint as a guest host, Simone Bienne was formally brought on as a co-host in December 2011. This followed Westwood One's merger with Dial Global. She was introduced to the show by Dr. Drew through Lifechangers, and is the first female co-host of the radio show. As of November 2012 she is no longer a host.
History:
On December 7, 2012, Adam Carolla rejoined Dr. Drew for a Loveline-style "Reunion Tour" of the US to promote their new podcast, The Adam & Dr. Drew Show. On January 5, 2015, Catherwood and Pinsky launched a new program, Dr. Drew Midday Live with Mike Catherwood, on KABC in Los Angeles. On March 16, 2016, Catherwood announced that he would be leaving the show to focus more on raising his daughter. His final episode was March 31, 2016. A month later, on April 21, Dr. Drew announced Loveline would wrap up the following week, after the April 28 episode. Adam Carolla re-joined him as co-host for the final show.
History:
On September 8, 2016, the show was rebooted as a weekly podcast, with Amber Rose and Dr. Chris Donaghue serving as hosts. Ann Ingold was named producer. The final episode of the podcast was released on March 8, 2018.
On November 1, 2018, Loveline was once again rebooted, this time on LGBTQ+ formatted talk radio network Channel Q. It is hosted by Dr. Chris Donaghue, and airs Monday through Thursday from 7PM to 9PM (Pacific).
Format:
Loveline follows the call-in question-and-answer model with the primary goal of helping youth and young adults with relationship, sexuality, and drug addiction problems through the expertise of Pinsky, an internist and addiction medicine specialist, and the humorous context and insight provided by a comedic host. Adam Carolla explained his role as a "sheep in wolf's clothing". Furthermore, the comedy is often necessary to keep spirits high, as the show frequently handles callers who are dealing with serious issues such as drug addiction, sexual abuse, and domestic violence.
Format:
The show will occasionally answer calls of a general medical nature, especially on slow nights or if they seem peculiar. Also, listeners are encouraged to participate in Loveline's many games.
Personalities:
Regular hosts
Drew Pinsky (December 1984 – April 28, 2016)
Jim "The Poorman" Trenton (1983 – August 1993)
"Swedish" Egil Aalvik (1983–1990)
Scott Mason (1983–1987)
Attorney Lee "Harvey" Alpert (1986–1989)
Riki Rachtman (August 1993 – January 17, 1996)
Adam Carolla (October 1995 – November 3, 2005)
Stryker (July 23, 2006 – April 22, 2009)
Michael Catherwood (March 21, 2010 – March 31, 2016)
Simone Bienne (December 6, 2011 – November 11, 2012)
Amber Rose (September 8, 2016 – March 17, 2018)
Chris Donaghue (September 8, 2016 – March 17, 2018)
Recurring fill-ins
For Pinsky (in the case of medical physicians) or Psycho Mike (in the case of the usual comedic co-host):
Dr. Gary Alter: "Dr. Alter" ("Dr. Whack 'n' Sack, Dr. Alter-men")
Nicole Alvarez: DJ on KROQ-FM
David Alan Grier: a popular and frequent guest, sometimes referred to as the "Third Host" of Loveline, or DAG.
Personalities:
Dr. Ohad Ben-Yehuda: "Dr. Ben" (OB/GYN, Infertility, High Risk Obstetrics)
Dr. Marcel Daniels: "Dr. Marcel"
Dr. Bruce Heischober: "Dr. Bruce" (Ichabod Bruce, Dr. Spaz)
Dr. Bruce Hensel: "Dr. Bruce"
Dr. Reef Karim: "Dr. Reef"
Dr. Robert Rey: "Dr. 90210", a plastic surgeon from Beverly Hills
Trina Dolenz: former host of VH1's Tool Academy and couples therapist
Emily Morse: "Sex with Emily"
Producers
Ann Wilkins-Ingold (1988 – April 28, 2016)
Lauren (Junior Producer) (2002 – December 20, 2007)
Radio engineers
The show has had many engineers throughout the years who have developed their own on-air presence, whether through conversations with hosts and guests or through specific "radio drops" they have produced, usually from clips of previous shows.
Personalities:
Mike Dooley (October 1995 – June 20, 1999) ("Dooley," "The One-Nut Wonder," produced "The Drew Shuffle" and "The Drew Boogie")
Anderson Cowan (June 21, 1999 – April 28, 2016) ("The Magic Fingered One," "The Liberace of the Potentiometers," produced "Millionaire", PAB, co-host of "The After Disaster")
Damion Stephens (2000–2002)
Chris Perez (2003–2005)
Michelle (2004 – November 2005) (left for The Adam Carolla Show)
Media tie-ins and cultural influence:
A TV version of Loveline, also called Loveline, ran on MTV from 1996 to 2000; it was produced by Stone Stanley Entertainment. It followed the same general format as the radio program but featured a live audience and a female co-host alongside Pinsky and Carolla. The female co-host role was filled over the course of the series by MTV VJ Idalis, actresses Kris McGaha, Catherine McCord, Diane Farr and comedian Laura Kightlinger. Loveline TV was filmed at Hollywood Center Studios. The Dr. Drew and Adam Book: A Survival Guide to Life and Love, an advice book written in a tone similar to the radio show, was released in 1998.
Media tie-ins and cultural influence:
The series has also spawned a number of Loveline-inspired games that have been mentioned on the show. A thinly-veiled reference to Loveline can be seen in the 1988 film Heathers, in a scene featuring a radio call-in advice program called Hot Probs hosted by none other than Jim Trenton, the then-host of Loveline. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Heterophony**
Heterophony:
In music, heterophony is a type of texture characterized by the simultaneous variation of a single melodic line. Such a texture can be regarded as a kind of complex monophony in which there is only one basic melody, but realized at the same time in multiple voices, each of which plays the melody differently, either in a different rhythm or tempo, or with various embellishments and elaborations. The term was initially introduced into systematic musicology to denote a subcategory of polyphonic music, though it is now regarded as a textural category in its own right.
Characteristics:
Heterophony is often a characteristic feature of non-Western traditional musics, for example Ottoman classical music, Arabic classical music, Japanese Gagaku, the gamelan music of Indonesia, kulintang ensembles of the Philippines, and the traditional music of Thailand. In European traditions, there are also some examples of heterophony. One such example is the dissonant heterophony of the Dinaric Ganga or "Ojkavica" traditions from southern Bosnia, Croatia and Montenegro, which is attributed to ancient Illyrian tradition. Another remarkably vigorous European tradition of heterophonic music exists in the form of Outer Hebridean Gaelic psalmody. David Morton describes the texture in Thai music: Thai music is nonharmonic, melodic, or linear, and as is the case with all musics of this genre, its fundamental organization is horizontal...
Characteristics:
Thai music in its horizontal complex is made up of a main melody played simultaneously with variants of it which progress in relatively slower and faster rhythmic units... Individual lines of melody and variants sound in unison or octaves only at specific structural points, and the simultaneity of different pitches does not follow the Western system of organized chord progressions. Between the structural points where the pitches coincide (unison or octaves) each individual line follows the style idiomatic for the instrument playing it. The vertical complex at any given intermediary point follows no set progression; the linear adherence to style regulates. Thus several pitches that often create a highly complex simultaneous structure may occur at any point between the structural pitches. The music "breathes" by contracting to one pitch, then expanding to a wide variety of pitches, then contracting again to another structural pitch, and so on throughout. Though these complexes of pitches between structural points may strike the Western listener as arbitrary and inconsequential, the individual lines are highly consequential and logical linearly. The pattern of pitches occurring at these structural points is the basis of the modal aspect of Thai music.
Characteristics:
He goes on to suggest the term polyphonic stratification, rather than heterophony: The technique of combining simultaneously one main melody and its variants is often incorrectly described as heterophony: polyphonic stratification seems a more precise description, since each of the 'layers' is not just a close approximation of the main melody, but also has distinct characteristics and a style of its own.
Examples:
Heterophony is somewhat rare in Western classical music prior to the twentieth century, though examples can be found in some works of J.S. Bach, as well as Mozart and Mahler. In the 20th century, Benjamin Britten used heterophony to great effect in many pieces, including parts of the War Requiem and especially in the instrumental interludes of his three church parables: Curlew River, The Burning Fiery Furnace and The Prodigal Son. Peter Evans explains it as follows: "So unexpectedly stark were the sounds Britten drew from this group, and in particular so little dependent of his familiar harmonic propulsion, that listeners were ready to trace direct exotic influences in many features of the score." Other examples include Pierre Boulez's Rituel, Répons, and …explosante-fixe…. Heterophony is a key element in the music of Canadian composer Jose Evangelista. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Game Boy Printer**
Game Boy Printer:
The Game Boy Printer, known as the Pocket Printer in Japan, is a thermal printer accessory released by Nintendo in 1998 which ceased production in early 2003. The Game Boy Printer is compatible with all the Game Boy systems except the Game Boy Micro and is designed to be used in conjunction with the Game Boy Camera. It also prints images from compatible late-generation Game Boy and Game Boy Color games (listed below). It runs on six AA batteries and uses a proprietary 38mm wide thermal paper with adhesive backing, originally sold in white, red, yellow and blue colors. In Japan, a bright yellow Pokémon version of the Game Boy Printer was released, featuring a feed button in the style of a Poké Ball.
Game Boy Printer Thermal Paper:
Released alongside the Game Boy Printer in 1998, Nintendo-manufactured thermal paper refill rolls were produced in white, cream, blue, yellow, and red colour variants, all of which had an integrated adhesive backing. They had a roll width of 38mm and a roll diameter of 30mm, with a central 12mm-diameter red cardboard spindle. A typical roll had 390–400 cm of length. After powering the printer on, a clip at the rear of the protruding translucent grey refill housing is depressed, allowing the housing to be lifted away. The thermal paper roll is inserted upside-down, unravelled end facing down, with this end slotted into a thin slot. The maroon 'FEED' button is then pressed and held down, which engages the uptake motor and pulls the paper through to the exit slot adjacent to the printer logo. This slot has an integrated serrator, which allows finished prints to be ripped off the main paper feed in a zig-zag fashion. Forcibly pulling the paper opposite to the feed direction causes permanent damage to the gearing within the feed mechanism.
Game Boy Printer Thermal Paper:
When a picture was printed from the Game Boy Camera, it printed with a 5mm margin above and below the picture, the picture itself being 23mm high, for a total of 33mm of paper per picture. Although on-box refill advertisements boasted up to 180 pictures per roll, in actuality a typical roll could only print between 118 and 121 pictures.
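The 118–121 figure follows directly from the dimensions quoted above, as this quick arithmetic check shows:

```python
# 23 mm picture plus a 5 mm margin above and below = 33 mm of paper each
per_picture_mm = 23 + 2 * 5

# A typical roll is 390-400 cm (3900-4000 mm) long
for roll_mm in (3900, 4000):
    print(roll_mm // per_picture_mm)   # prints 118, then 121
```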
Game Boy Printer Thermal Paper:
Sold on the official Nintendo e-Shop (as triple packs of blue, cream and white rolls) until 2007, Game Boy branded official replacement thermal paper is now difficult to source. Even brand-new, sealed, unopened official rolls that were stored correctly and whose seals have not failed degrade relatively quickly once opened. Most, however, have suffered degradation whilst in storage due to a chemical reaction between the thermal paper and the adhesive backing layer. Due to the proprietary nature of the adhesive backing, replacement thermal paper that can be adhered to surfaces once printed upon (including brands such as 'MAXStick') is prohibitively expensive.
Game Boy Printer Thermal Paper:
Instead, the thermal paper rolls can be successfully substituted with a 38mm x 4m alternative, with or without ('core-less') spindle cores, without repercussions on the printer. Such rolls are also compatible with some hand-held printing calculators, such as the Canon TP-8, Texas Instruments 5000–2008, Sharp 8180, and Casio FX-802. Alternatively, wider rolls (such as 57mm x 30mm x 12.7mm) can be cut or trimmed to 38mm, and function without issue.
Game Boy Printer Thermal Paper:
Note, however, that due to the inherent limitations of thermal paper, photographs printed on it will fade over time until the paper is virtually blank; how quickly depends heavily on the thermal paper variant used, and can range from a few months to a few years. Paper in this state can usually be re-used, as long as the length of the strip is long enough to be manually fed into the takeup.
Game Boy Printer Thermal Paper:
It is unknown whether original Game Boy Printer paper contains the chemicals Bisphenol-A (BPA) or its analog Bisphenol-S (BPS). Previously very widely used in plastics and thermal receipt paper due to their heat resistance and stability, these are currently being phased out of thermal paper coatings due to their in-vivo accrual (via direct dermal absorption) and resultant oestrogen-mimicking and endocrine disruption. Modern thermal paper roll replacements, or their manufacturers, usually clearly state if they are Bisphenol free [BP-Free].
Game Boy Printer Protocol:
The communication between the Game Boy and the Game Boy Printer is via a simple serial link, carrying a serial clock (provided by the Game Boy for the printer), serial data output (from Game Boy to printer), and serial data input (to Game Boy from printer). The Game Boy sends a packet to the printer, to which the printer responds with an acknowledgement as well as a status code.
Game Boy Printer Protocol:
Packet Format
Communication takes place via the Game Boy sending the printer a simple packet, whose structure is described below. In general, everything from the first "sync_word" through the checksum is the Game Boy communicating to the printer; the last two bytes of the packet are for the printer to acknowledge and report its current status code.
Command may be either Initialize (0x01), Data (0x04), Print (0x02), or Inquiry (0x0F).
The payload's byte count is given by the value of the `DATA_LENGTH` field.
Compression field is a compression indicator; 0x00 means no compression.
Checksum is a simple sum of the bytes in the command, data length, and data payload fields.
Status byte is a bit-field indicating various aspects of the printer's own status (e.g. whether it is still printing).
Commands
Initialize (0x01), typical payload size 0: this packet is sent without a data payload. It signals the printer to clear its settings and prepare for the first data payload.
Data (0x04), typical payload size 640: the data packet transfers the image data to the printer's data buffer. The typical size of the data payload is 640 bytes, since it can store two printable rows of 20 standard Game Boy tiles (2-bit color in an 8×8 pixel grid), each tile taking 16 bytes.
Print (0x02), typical payload size 4: commands the printer to start printing. The four payload bytes carry settings for the print.
Inquiry (0x0F), typical payload size 0: used for checking the printer status byte, for example to check whether there is enough data in the printer buffer to print smoothly, or whether the printer is currently printing.
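As a concrete illustration, the following sketch assembles a packet from the fields described above. It is a reconstruction, not Nintendo reference code: the 0x88 0x33 sync word, the little-endian byte order, and the inclusion of the compression byte in the checksum follow common reverse-engineering write-ups of the protocol rather than this text.

```python
SYNC_WORD = bytes([0x88, 0x33])  # sync bytes reported in reverse-engineering write-ups

def build_packet(command, payload=b"", compression=0x00):
    """Assemble one Game Boy -> printer packet.

    The trailing ack and status bytes are supplied by the printer during
    the exchange, so they are not part of what the Game Boy sends here.
    """
    length = len(payload)
    body = bytes([command, compression, length & 0xFF, length >> 8]) + payload
    checksum = sum(body) & 0xFFFF  # simple 16-bit sum over command..payload
    return SYNC_WORD + body + bytes([checksum & 0xFF, checksum >> 8])

# Example: an Initialize (0x01) packet, which carries no payload.
assert build_packet(0x01).hex() == "8833010000000100"
```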
Printer Status Reply Byte
Usage today:
Mad Catz and Xchanger sold a kit that enabled users to connect a Game Boy to a PC and print images using the PC's printer. Hobbyists outside the UK can also make their own cable for uploading images to their computer; a Game Boy Printer emulator is needed for the Game Boy to interface with the PC once linked via cable. The Game Boy Printer Paper has also been discontinued, and rolls of the genuine article that still produce a reliable image are becoming more difficult to find. Regular thermal paper, such as the kind used for POS terminals, can be cut to the proper width and used successfully with the Game Boy Printer. The system will print a test message reading "Hello" if it is turned on while the feed button is held; according to the manual, this is used to test whether the printer is functioning properly. To get around using six AA batteries (1.5 volts each), a single 9V battery can be used if wired properly, because the printer requires 9V DC.
Further Information:
Reverse engineering
'Ben Heck Reverse Engineers Game Boy Printer': https://www.youtube.com/watch?v=43FfJvd-YP4 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hyperon**
Hyperon:
In particle physics, a hyperon is any baryon containing one or more strange quarks, but no charm, bottom, or top quark. This form of matter may exist in a stable form within the core of some neutron stars. Hyperons are sometimes generically represented by the symbol Y.
History and research:
The first research into hyperons happened in the 1950s and spurred physicists on to the creation of an organized classification of particles.
History and research:
The term was coined by French physicist Louis Leprince-Ringuet in 1953, and announced for the first time at the cosmic ray conference at Bagnères de Bigorre in July of that year, agreed upon by Leprince-Ringuet, Bruno Rossi, C.F. Powell, William B. Fretter and Bernard Peters. Today, research in this area is carried out on data taken at many facilities around the world, including CERN, Fermilab, SLAC, JLAB, Brookhaven National Laboratory, KEK, GSI and others. Physics topics include searches for CP violation, measurements of spin, studies of excited states (commonly referred to as spectroscopy), and hunts for exotic forms such as pentaquarks and dibaryons.
Properties and behavior:
Being baryons, all hyperons are fermions. That is, they have half-integer spin and obey Fermi–Dirac statistics. Hyperons all interact via the strong nuclear force, making them types of hadron. They are composed of three light quarks, at least one of which is a strange quark, which makes them strange baryons.
Properties and behavior:
Excited hyperon resonances, and ground-state hyperons with a '*' in their notation, decay via the strong interaction. For the Ω⁻ as well as the lighter ground-state hyperons this decay mode is not possible, given the particle masses and the conservation of flavor and isospin required in strong interactions. Instead, these decay weakly, with parity not conserved. An exception is the Σ⁰, which decays electromagnetically into the Λ on account of carrying the same flavor quantum numbers. The type of interaction through which these decays occur determines the average lifetime, which is why weakly decaying hyperons are significantly longer-lived than those that decay through strong or electromagnetic interactions.
List:
Notes: Since strangeness is conserved by the strong interactions, some ground-state hyperons cannot decay strongly. However, they do participate in strong interactions.
Λ0 may also decay on rare occasions via these processes: Λ0 → p+ + e− + ν̄e, or Λ0 → p+ + μ− + ν̄μ. Ξ0 and Ξ− are also known as "cascade" hyperons, since they go through a two-step cascading decay into a nucleon.
List:
The Ω− has a baryon number of +1 and hypercharge of −2, giving it strangeness of −3. Multiple flavor-changing weak decays are needed for it to decay into a proton or neutron. Murray Gell-Mann's and Yuval Ne'eman's SU(3) model (sometimes called the Eightfold Way) predicted this hyperon's existence and mass, and that it would only undergo weak decay processes. Experimental evidence for its existence was discovered in 1964 at Brookhaven National Laboratory. Further examples of its formation and observation using particle accelerators confirmed the SU(3) model. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**TransducerML**
TransducerML:
TransducerML (Transducer Markup Language) or TML is a retired Open Geospatial Consortium standard developed to describe any transducer (sensor or transmitter) in terms of a common model, characterizing not only the data itself but also, via XML-formatted metadata, the system producing that data.
Process:
TML captures when and where a sensor measurement or transmitter actuation occurs. Its system description describes not only individual data sources but also systems of components, including the specific types of components, the logical and physical relationships between the components, and the data produced or consumed by each of the components. Information captured includes manufacturer information, model numbers of specific items, serial numbers, how two devices may relate to each other both logically and physically (for example, a GPS system may provide location information for a camera and the GPS antenna may be located a certain distance away from the camera center), and the type of data being produced from those particular devices. Time stamps for each data measurement and other identifying information is also captured, making the TML system description particularly well suited for carrying data required for automated system discovery and to support data retrieval.
Process:
Metadata relating to archiving, indexing and cataloguing is an integral part of TML, since a TML data stream is designed to be self-contained and self-sufficient. Any information about the system, as well as information required to later parse and process the data, is captured in the TML system description. In addition to information about the system that produced the data, precise information about the data itself is captured. Data types, data sizes, ordering and arrangement, calibration information, units of measurement, precise time-tagging of individual groups of data, information about uncertainty, coordinate reference frames (where applicable) and physical phenomena relating to the data are among the details which are captured and retained. The TML system description therefore automatically tags all fields, which can later be stored in a registry for discovery.
Process:
TML system description fields include descriptions of the physical system, the data system and the data product. The data itself forms the other component of a TML data stream. The physical system description includes information such as model and serial number information about specific transducers and components of a system, system calibration information, system capabilities, installation information, owners and operators, and other information directly applicable to searches related to general data exchange independent of operating conditions. The data system description contains information about the specific transducers and components such as their behavior, responses to physical phenomena, sensitivity, and other operating parameters. The data product description addresses the specific data stream, such as data types, layouts, encoding, and other information necessary for the consumer of a TML data stream to interpret the stream.
Uses:
Using TML metadata enables a common metadata archive to be developed, which then permits discovery, search and retrieval based on a common technique. Regardless of the source of the data and its native complexity, metadata about the data-generation system is readily at hand, and can be searched to discover specific systems of interest based on a number of criteria. A key benefit of TML is that it enables temporal correlation of measurements by using a high-resolution clock tied to each individual data source, and models logical and physical relationships between multiple transducers in a system. Data from all elements of a system are integrated into a real-time data stream to substantially reduce the time required for processing and representation of that data, whether it pertains to metadata or to the primary data itself.
Uses:
Another key benefit to TML is that by bringing both data and metadata from multiple time-varying sources of data into a single stream in a common format, data and metadata archiving, retrieval, analysis and processing can be more easily performed across disparate hardware and software systems. The time tagging of both the data and metadata allows precise determination of the state of a system, and therefore whether its data is of interest, regardless of whether that system remains static or has elements removed, replaced or added. This permits searching for data at a finer granularity than previously possible, while still supporting higher-level data discovery if a user so desires, since the use of individual fields within a TML system description is optional.
Uses:
TML can handle data from simple stationary in-situ transducers up to high-bandwidth dynamic remote devices such as a synthetic aperture radar system. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Horizon Discovery**
Horizon Discovery:
Horizon Discovery Group plc (LSE: HZD) ("Horizon") is a gene editing company that designs and engineers genetically modified cells and then applies them in research and clinical applications in human health.
Horizon offers human disease models and reagents derived from genetically-engineered cells that its customers may use to gain knowledge of the genetic drivers of disease; develop novel drugs or cell therapies targeted at these genetic drivers; and develop companion diagnostics that predict patient response in the clinic.
Horizon is headquartered in Cambridge, UK, and is listed on the London Stock Exchange’s AIM market under the ticker “HZD”.
Horizon signed agreements in December 2019 and January 2020 with Mammoth Biosciences to combine Mammoth's intellectual property in CRISPR with Horizon's expertise in Chinese hamster ovary cells. PerkinElmer acquired Horizon Discovery for $383 million (£296 million) in November 2020. Following the split of PerkinElmer, Horizon Discovery has been part of Revvity, Inc. since May 2023.
Gene Editing:
Gene editing is the process by which specific changes are made to the sequence of a gene within the context of a host cell. By editing the code of a patient-derived cell to introduce or repair a genetic change believed to drive disease, a patient’s disease can be reproduced in a laboratory setting, letting researchers ask important biological questions of potential drugs or cell therapies earlier in the drug discovery process.
Gene Editing:
Through its gene editing platform, Horizon is able to alter genes in most human or mammalian cell lines. Horizon now offers over 23,000 cell line pairs that model the mutations found in genetically based diseases.
These ‘patients-in-a-test-tube’ may be used to identify the effect of individual or compound genetic mutations on drug activity, patient responsiveness, and resistance, which may lead to prediction of which patient sub-groups will respond to currently available and future drug treatments.
Once built, engineered cells can act as product manufacturing engines, yielding related cell and reagent products that can be used as research tools or molecular diagnostic reference standards or as a means to generate advanced in vivo models. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Steel (web browser)**
Steel (web browser):
Steel is a discontinued freeware web browser developed by Michael Kolb under the name kolbysoft. It is a fork of the default browser for Android, taking its WebKit-based layout engine and providing what is intended to be an easier and more "touch friendly" user interface.
Steel was one of the first Android applications to support automatic rotation based on the hardware's accelerometer and a virtual keyboard. This feature is now more common among Android applications.
In 2010 Skyfire purchased kolbysoft and the Steel browser.
Features:
Steel's user interface (UI) is intended to be more "touch friendly" than that of Android's default browser, and thus emphasizes ease of use on a touch screen. Back, forward, zoom, and bookmark-related buttons are all on the bottom toolbar. A URL-entry box is on the top toolbar, and beside it is a refresh/stop button, which displays if a page is fully loaded or still loading, respectively. Both toolbars are only shown if "pulled out" by two semi-transparent handles at the top and bottom of the display, and after a short period of not being used will hide themselves again. Until 0.0.4, Android's status bar containing system information was only shown when the top toolbar was out. Starting in 0.0.4, it is either visible or not depending on whether the browser is set to run in fullscreen mode.
Features:
Hardware controls
Steel will switch between portrait and landscape modes based on which way the device running it is rotated. By contrast, the Android default browser at the time of release required the user to "Flip Orientation" in a menu or, on the T-Mobile G1, open the phone's keyboard.
In an attempt to avoid opening the aforementioned keyboard when possible, Steel has a virtual keyboard which appears when a user selects a text box or the URL entry box in the toolbar. It is modeled after that of the iPhone, and as of version 0.0.4 causes the device to vibrate when a key is successfully pressed.
Reception:
Steel's first public release received a 3-star rating from AppVee, which praised its user interface and accelerometer support but pointed out that, at that stage of development, it was not an application to rely on fully. Shortly after the release of 0.0.3, which added multiple features including the virtual keyboard, on December 13, 2008, Steel became the second most popular communication app in the Android Market, with an average rating of 4 (out of 5) stars from users. In May 2009 an Android Tapp review gave the Steel Browser a 4.5/5 rating, saying that it had "hands down a better UI for the browser." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sympatry**
Sympatry:
In biology, two related species or populations are considered sympatric when they exist in the same geographic area and thus frequently encounter one another. An initially interbreeding population that splits into two or more distinct species sharing a common range exemplifies sympatric speciation. Such speciation may be a product of reproductive isolation – which prevents hybrid offspring from being viable or able to reproduce, thereby reducing gene flow – that results in genetic divergence. Sympatric speciation may, but need not, arise through secondary contact, which refers to speciation or divergence in allopatry followed by range expansions leading to an area of sympatry. Sympatric species or taxa in secondary contact may or may not interbreed.
Types of populations:
Four main types of population pairs exist in nature. Sympatric populations (or species) contrast with parapatric populations, which contact one another in adjacent but not shared ranges and do not interbreed; peripatric species, which are separated only by areas in which neither organism occurs; and allopatric species, which occur in entirely distinct ranges that are neither adjacent nor overlapping. Allopatric populations isolated from one another by geographical factors (e.g., mountain ranges or bodies of water) may experience genetic—and, ultimately, phenotypic—changes in response to their varying environments. These may drive allopatric speciation, which is arguably the dominant mode of speciation.
Evolving definitions and controversy:
The lack of geographic isolation as a definitive barrier between sympatric species has yielded controversy among ecologists, biologists, and zoologists regarding the validity of the term. As such, researchers have long debated the conditions under which sympatry truly applies, especially with respect to parasitism. Because parasitic organisms often inhabit multiple hosts during a life cycle, evolutionary biologist Ernst Mayr stated that internal parasites existing within different hosts demonstrate allopatry, not sympatry. Today, however, many biologists consider parasites and their hosts to be sympatric (see examples below). Conversely, zoologist Michael J. D. White considered two populations sympatric if genetic interbreeding was viable within the habitat overlap. This may be further specified as sympatry occurring within one deme; that is, reproductive individuals must be able to locate one another in the same population in order to be sympatric.
Evolving definitions and controversy:
Others question the ability of sympatry to result in complete speciation: until recently, many researchers considered it nonexistent, doubting that selection alone could create disparate, but not geographically separated, species. In 2003, biologist Karen McCoy suggested that sympatry can act as a mode of speciation only when "the probability of mating between two individuals depend[s] [solely] on their genotypes, [and the genes are] dispersed throughout the range of the population during the period of reproduction". In essence, sympatric speciation does require very strong forces of natural selection to be acting on heritable traits, as there is no geographic isolation to aid in the splitting process. Yet, recent research has begun to indicate that sympatric speciation is not as uncommon as was once assumed.
Syntopy:
Syntopy is a special case of sympatry. It means the joint occurrence of two species in the same habitat at the same time. Just as with the broader term sympatry, "syntopy" is used especially for close species that might hybridise or even be sister species. Sympatric species occur together in the same region, but do not necessarily share the same localities the way syntopic species do. Areas of syntopy are of interest because they allow the study of how similar species may coexist without outcompeting each other.
Syntopy:
As an example, the two bat species Myotis auriculus and M. evotis were found to be syntopic in North America. In contrast, the marbled newt and the northern crested newt have a large sympatric range in western France, but differ in their habitat preferences and only rarely occur syntopically in the same breeding ponds.
Sympatric speciation:
The lack of geographic constraint in isolating sympatric populations implies that the emerging species avoid interbreeding via other mechanisms. Before speciation is complete, two diverging populations may still produce viable offspring. As speciation progresses, isolating mechanisms – such as gametic incompatibility that renders fertilization of the egg impossible – are selected for in order to increase the reproductive divide between the two populations.
Sympatric speciation:
Species discrimination
Sympatric groups frequently show a greater ability to discriminate between their own species and other closely related species than do allopatric groups. This is shown in the study of hybrid zones. It is also apparent in the differences in levels of prezygotic isolation (by factors that prevent formation of a viable zygote) in both sympatric and allopatric populations. There are two main theories regarding this process: 1) differential fusion, which suggests that only populations with a keen ability to discriminate between species will persist in sympatry; and 2) character displacement, which implies that distinguishing characteristics will be heightened in areas where the species co-occur in order to facilitate discrimination.
Sympatric speciation:
Reinforcement
Reinforcement is the process by which natural selection reinforces reproductive isolation. In sympatry, reinforcement increases species discrimination and sexual adaptation in order to avoid maladaptive hybridization and encourage speciation. If hybrid offspring are either sterile or less fit than non-hybrid offspring, mating between members of two different species will be selected against. Natural selection decreases the probability of such hybridization by selecting for the ability to identify mates of one's own species from those of another species.
Sympatric speciation:
Reproductive character displacement
Reproductive character displacement strengthens the reproductive barriers between sympatric species by encouraging the divergence of traits that are crucial to reproduction. Divergence is frequently distinguished by assortative mating between individuals of the two species. For example, divergence in the mating signals of two species will limit hybridization by reducing one's ability to identify an individual of the second species as a potential mate. Support for the reproductive character displacement hypothesis comes from observations of sympatric species in overlapping habitats in nature. Increased prezygotic isolation, which is associated with reproductive character displacement, has been observed in cicadas of genus Magicicada, stickleback fish, and the flowering plants of the genus Phlox.
Sympatric speciation:
Differential fusion
An alternative explanation for species discrimination in sympatry is differential fusion. This hypothesis states that, of the many species that have historically come into contact with one another, the only ones that persist in sympatry (and thus are seen today) are species with strong mating discrimination. On the other hand, species lacking strong mating discrimination are assumed to have fused while in contact, forming one distinct species.
Sympatric speciation:
Differential fusion is less widely recognized than character displacement, and several of its implications are refuted by experimental evidence. For example, differential fusion implies greater postzygotic isolation among sympatric species, as this functions to prevent fusion between the species. However, Coyne and Orr found equal levels of postzygotic isolation among sympatric and allopatric species pairs in closely related Drosophila. Nevertheless, differential fusion remains a possible, though not complete, contributor to species discrimination.
Examples:
Sympatry has been increasingly evidenced in current research. Because of this, sympatric speciation – which was once highly debated among researchers – is progressively gaining credibility as a viable form of speciation.
Examples:
Orca: partial sympatry
Several distinct types of killer whale (Orcinus orca), which are characterized by an array of morphological and behavioral differences, live in sympatry throughout the North Atlantic, North Pacific and Antarctic oceans. In the North Pacific, three whale populations, called "transient", "resident", and "offshore", demonstrate partial sympatry, crossing paths with relative frequency. The results of recent genetic analyses using mtDNA indicate that this is due to secondary contact, in which the three types encountered one another following the bidirectional migration of "offshore" and "resident" whales between the North Atlantic and North Pacific. Partial sympatry in these whales is, therefore, not the result of speciation. Furthermore, killer whale populations that consist of all three types have been documented in the Atlantic, evidencing that interbreeding occurs among them. Thus, secondary contact does not always result in total reproductive isolation, as has often been predicted.
Examples:
Great spotted cuckoo and magpie: brood parasitism
The parasitic great spotted cuckoo (Clamator glandarius) and its magpie host, both native to Southern Europe, are completely sympatric species. However, the duration of their sympatry varies with location. For example, great spotted cuckoos and their magpie hosts in Hoya de Gaudix, southern Spain, have lived in sympatry since the early 1960s, while species in other locations have more recently become sympatric. Great spotted cuckoos, when in South Africa, are sympatric with at least 8 species of starling and two crow species, the pied crow and the Cape crow. The great spotted cuckoo exhibits brood parasitism by laying a mimicked version of the magpie egg in the magpie's nest. Since cuckoo eggs hatch before magpie eggs, magpie hatchlings must compete with cuckoo hatchlings for resources provided by the magpie mother. This relationship between the cuckoo and the magpie in various locations can be characterized as either recently sympatric or anciently sympatric. The results of an experiment by Soler and Moller (1990) showed that in areas of ancient sympatry (species in cohabitation for many generations), magpies were more likely to reject most of the cuckoo eggs, as these magpies had developed counter-adaptations that aid in identification of egg type. In areas of recent sympatry, magpies rejected comparatively fewer cuckoo eggs. Thus, sympatry can cause coevolution, by which both species undergo genetic changes due to the selective pressures that one species exerts on the other.
Examples:
Acromyrmex ant: isolation of fungal gardens
Leafcutter ants protect and nourish various species of fungus as a source of food in a system known as ant-fungus mutualism. Leafcutter ants belonging to the genus Acromyrmex are known for their mutualistic relationship with Basidiomycete fungi. Ant colonies are closely associated with their fungus colonies, and may have co-evolved with a consistent vertical lineage of fungi in individual colonies. Ant populations defend against the horizontal transmission of foreign fungi to their fungal colony, as this transmission may lead to competitive stress on the local fungal garden. Invaders are identified and removed by the ant colony, inhibiting competition and fungal interbreeding. This active isolation of individual populations helps maintain the genetic purity of the fungal colony, and this mechanism may lead to sympatric speciation within a shared habitat.
**Homeomorphism (graph theory)**
Homeomorphism (graph theory):
In graph theory, two graphs G and G′ are homeomorphic if there is a graph isomorphism from some subdivision of G to some subdivision of G′ . If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in illustrations), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if they are homeomorphic in the topological sense.
Subdivision and smoothing:
In general, a subdivision of a graph G (sometimes known as an expansion) is a graph resulting from the subdivision of edges in G. The subdivision of some edge e with endpoints {u, v} yields a graph containing one new vertex w, and with an edge set replacing e by two new edges, {u, w} and {w, v}.
Subdivision and smoothing:
For example, the edge e with endpoints {u, v} can be subdivided into two edges, e1 and e2, which meet at a new vertex w. The reverse operation, smoothing out or smoothing a vertex w with regard to the pair of edges (e1, e2) incident on w, removes both edges containing w and replaces (e1, e2) with a new edge that connects the other endpoints of the pair. Here, it is emphasized that only degree-2 (i.e., 2-valent) vertices can be smoothed.
Subdivision and smoothing:
For example, the simple connected graph with two edges, e1 = {u, w} and e2 = {w, v}, has a vertex (namely w) that can be smoothed away, resulting in the single edge {u, v}. Determining whether, for graphs G and H, H is homeomorphic to a subgraph of G is an NP-complete problem.
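Subdivision and smoothing are easy to express programmatically. The following is a minimal sketch in Python using the networkx library (assumed available); the helper names subdivide_edge and smooth_vertex are ours, not part of the networkx API.

```python
import networkx as nx

def subdivide_edge(G, u, v, w):
    """Subdivide edge {u, v}: remove it and route through a new vertex w."""
    G.remove_edge(u, v)
    G.add_edge(u, w)
    G.add_edge(w, v)

def smooth_vertex(G, w):
    """Smooth a degree-2 vertex w: replace its two incident edges by one."""
    a, b = G.neighbors(w)          # exactly two neighbors, since deg(w) = 2
    G.remove_node(w)
    G.add_edge(a, b)

G = nx.Graph([("u", "v")])
subdivide_edge(G, "u", "v", "w")   # path u - w - v
smooth_vertex(G, "w")              # back to the single edge {u, v}
print(sorted(G.edges()))           # [('u', 'v')]
```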
Barycentric subdivisions:
The barycentric subdivision subdivides each edge of the graph. This is a special subdivision, as it always results in a bipartite graph. This procedure can be repeated, so that the nth barycentric subdivision is the barycentric subdivision of the (n − 1)st barycentric subdivision of the graph. The second such subdivision is always a simple graph.
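A sketch of the barycentric subdivision in the same style (again assuming networkx; barycentric_subdivision is our own helper name), which also checks the bipartiteness property stated above:

```python
import networkx as nx

def barycentric_subdivision(G):
    """Insert a midpoint vertex into every edge of G."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        w = ("mid", u, v)          # a fresh vertex labeling this edge
        H.add_edge(u, w)
        H.add_edge(w, v)
    return H

B1 = barycentric_subdivision(nx.complete_graph(5))   # subdivide K5 once
print(nx.is_bipartite(B1))                           # True
B2 = barycentric_subdivision(B1)                     # second subdivision
print(nx.is_bipartite(B2))                           # True again
```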
Embedding on a surface:
It is evident that subdividing a graph preserves planarity. Kuratowski's theorem states that a finite graph is planar if and only if it contains no subgraph homeomorphic to K5 (the complete graph on five vertices) or K3,3 (the complete bipartite graph on six vertices, three of which connect to each of the other three). In fact, a graph homeomorphic to K5 or K3,3 is called a Kuratowski subgraph.
Embedding on a surface:
A generalization, following from the Robertson–Seymour theorem, asserts that for each integer g, there is a finite obstruction set of graphs L(g) = {Gi(g)} such that a graph H is embeddable on a surface of genus g if and only if H contains no homeomorphic copy of any of the Gi(g). For example, L(0) = {K5, K3,3} consists of the Kuratowski subgraphs.
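Planarity itself is easy to test programmatically, and subdividing cannot change the verdict. A minimal sketch (Python with the networkx library, which provides check_planarity; the subdivision construction is ours):

```python
import networkx as nx

K5 = nx.complete_graph(5)
print(nx.check_planarity(K5)[0])          # False: K5 is not planar

# A subdivision of K5 (each edge replaced by a two-edge path) is
# homeomorphic to K5, so by Kuratowski's theorem it stays non-planar.
S = nx.Graph()
for u, v in K5.edges():
    S.add_edge(u, ("mid", u, v))
    S.add_edge(("mid", u, v), v)
print(nx.check_planarity(S)[0])           # False as well
```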
Example:
In the following example, graph G and graph H are homeomorphic.
If G′ is the graph created by subdivision of the outer edges of G and H′ is the graph created by subdivision of the inner edge of H, then G′ and H′ have a similar graph drawing. Therefore, there exists an isomorphism between G′ and H′, meaning G and H are homeomorphic.
**Jaszczak phantom**
Jaszczak phantom:
A Jaszczak phantom (pronounced "JAY-zak"), also known as the Data Spectrum ECT phantom, is an imaging phantom used for validating scanner geometry, 3D contrast, uniformity, resolution, attenuation and scatter correction, or alignment tasks in nuclear medicine. It is commonly used in academic centers and hospitals to characterize a SPECT or some gamma camera systems for quality control purposes. It is used by clinical and academic facilities for accreditation by the American College of Radiology. The phantom was developed by Ronald J. Jaszczak of Duke University, who filed for a patent on it in 1982. It is a cylinder containing fillable inserts that is often used with a radionuclide such as Technetium-99m or Fluorine-18. Although the phantom can be used for acceptance testing, the National Electrical Manufacturers Association recommends that a 30-million-count acquisition and section reconstruction of the phantom be performed quarterly. In 1981 Ronald J. Jaszczak founded Data Spectrum Corporation, which manufactures the Jaszczak phantom and several other nuclear imaging tools, such as the Hoffman Brain phantom.
Structure and composition:
Jaszczak phantoms consist of a main cylinder or tank made of acrylic plastic with several inserts. The circular phantom comes in two varieties: flanged and flangeless. The latter is recommended by the American College of Radiology for accreditation of nuclear medicine departments. All Jaszczak phantoms have six solid spheres and six sets of 'cold' rods. In flanged models, the sizes of the spheres vary. The number of rods in each set depends on the size of the rod in that set as different models of the phantom have rods of different sizes. In flangeless models, the diameters of the spheres are 9.5, 12.7, 15.9, 19.1, 25.4 and 31.8 mm, while the rod diameters are 4.8, 6.4, 7.9, 9.5, 11.1 and 12.7 mm. Both solid spheres and rod inserts mimic cold lesions in a hot background. Spheres are used to measure the image contrast while the rods are used to investigate the image resolution in SPECT systems.
**Zinc-activated ion channel**
Zinc-activated ion channel:
Zinc-activated ion channel (ZAC) is a human protein encoded by the ZACN gene. ZAC forms a cation-permeable ligand-gated ion channel of the "Cys-loop" superfamily. The ZAC gene is present in humans and dogs, but no ortholog is thought to exist in the rat or mouse genomes. ZAC mRNA is expressed in prostate, thyroid, trachea, lung, brain (adult and fetal), spinal cord, skeletal muscle, heart, placenta, pancreas, liver, kidney and stomach. The endogenous ligand for ZAC is thought to be Zn2+, although ZAC has also been found to activate spontaneously. The function of spontaneous ZAC activation is unknown.
**English Language Skills Assessment**
English Language Skills Assessment:
The English Language Skills Assessment (ELSA) is a group of tests designed to measure the English language proficiency of subjects. The test is designed for non-native speakers, with different levels of testing available from beginner to advanced. The tests can be utilized to track progress among those studying English or to measure proficiency for employment or education where English language skills are required. The tests are intended for an international audience and are available in British English or American English. The tests are utilized by such educational organizations as the Australian Council for Educational Research to help predict student success and are compulsory at The University of the South Pacific. They are used by international businesses such as BASF, Unilever and DaimlerChrysler. Their usage is mandatory in Germany and Poland as part of re-training programs for the unemployed.
FELSA:
A variant of ELSA, the Foundational English Language Skills Assessment (FELSA), has been developed for all age groups, with a special focus on speakers who correspond to level A1 or A2 of the Common European Framework of Reference for Languages, who may have slight conversational English language familiarity but would not ordinarily be able to succeed in school, business or travel in English.
**Ecosystem of the North Pacific Subtropical Gyre**
Ecosystem of the North Pacific Subtropical Gyre:
The North Pacific Subtropical Gyre (NPSG) is the largest contiguous ecosystem on earth. In oceanography, a subtropical gyre is a ring-like system of ocean currents rotating clockwise in the Northern Hemisphere and counterclockwise in the Southern Hemisphere caused by the Coriolis Effect. They generally form in large open ocean areas that lie between land masses.
Ecosystem of the North Pacific Subtropical Gyre:
The NPSG is the largest of the gyres, as well as the largest ecosystem on our planet. Like other subtropical gyres, it has a high-pressure zone in its center, and circulation around the center is clockwise around this high-pressure zone. Subtropical gyres make up 40% of the Earth's surface and play critical roles in carbon fixation and nutrient cycling. This particular gyre covers most of the Pacific Ocean and comprises four prevailing ocean currents: the North Pacific Current to the north, the California Current to the east, the North Equatorial Current to the south, and the Kuroshio Current to the west. Its large size and distance from shore have caused the NPSG to be poorly sampled and thus poorly understood.
Ecosystem of the North Pacific Subtropical Gyre:
The life processes in open-ocean ecosystems are a sink for the atmosphere's increasing CO2. Gyres make up a large proportion, approximately 75%, of what we refer to as the open ocean, i.e. the area of the ocean that does not consist of coastal areas. They are considered oligotrophic, or nutrient-poor, because they are far from terrestrial runoff. These regions were once thought to be homogeneous and static habitats. However, there is increasing evidence that the NPSG exhibits substantial physical, chemical, and biological variability on a variety of time scales. Specifically, the NPSG exhibits seasonal and interannual variations in primary productivity (simply defined as the production of new plant material), which is important for the uptake of CO2.
Ecosystem of the North Pacific Subtropical Gyre:
The NPSG is not only a sink for CO2 in the atmosphere, but also other pollutants. As a direct result of this circular pattern, gyres act like giant whirlpools and become traps for anthropogenic pollutants, such as marine debris. The NPSG has become recognized for the large quantity of plastic debris floating just below the surface in the center of the gyre. This area has recently received a lot of media attention and is commonly referred to as the Great Pacific Garbage Patch.
History of discovery:
The NPSG is not often sampled because of its distance from the coast and its shortage of marine life. These vast and deep ocean waters, far from the influence of land, have historically been considered the oceanic equivalent of terrestrial deserts, with low standing stocks of biomass and low production rates. This perspective is derived from a dearth of comprehensive investigation of central gyre habitats. Over the past two decades these views have been challenged with a newfound understanding of the dynamics of the NPSG.
History of discovery:
During the early days of marine exploration, HMS Challenger (1872–1876), on its leg from Yokohama to Honolulu, collected plant and animal specimens as well as numerous seawater samples. The goals of this expedition were to determine the chemical composition of seawater and the organic matter in suspension and to study the distribution and abundance of various communities of organisms. The motivation for studying open ocean ecosystems has changed over time, whereas today more modern studies focus on biodiversity and the effects of climate on ecosystem dynamics. Today, the Hawaii Ocean Time-series (HOT) program has assembled the largest and most comprehensive ecological data set for the NPSG and is scheduled to continue to the next millennium. Programs like HOT have debunked the hypothesis that this ecosystem is static and homogenous, finding that the NPSG exhibits dynamic seasonal patterns separating it from other open ocean systems.
Physical characteristics:
The NPSG is the largest of the open ocean habitats and is considered to be the Earth's largest contiguous biome. This great anticyclonic circulation feature extends from 15°N to 35°N latitude and from 135°E to 135°W longitude. Its surface area spans approximately 2 × 10⁷ km². Its western portion, west of 180° longitude, has greater physical variability than the eastern portion. This variability, where different weather patterns affect subregions differently, is due to the large dimensions of this gyre. The large variability is caused by discrete eddies, near-inertial motions, and internal tides. Climate patterns such as the North Pacific Gyre Oscillation (NPGO), El Niño/Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO) affect the interannual variability in primary productivity in the NPSG (Di Lorenzo et al., 2008). These conditions can have profound effects on biological processes within this habitat: they have the ability to shift sea surface temperature (SST), chlorophyll patterns, nutrient patterns, oxygen concentrations, mixed layer depths, and thus the carrying capacity (the amount of life this habitat can carry) of the NPSG.
Nutrient cycling:
Low nutrient concentrations and thus a low density of living organisms characterize the surface waters of the NPSG. The low biomass results in clear water, allowing photosynthesis to occur to a substantial depth. The NPSG is classically described as a two-layered system. The upper, nutrient-limited layer accounts for most of the primary production, supported primarily by recycled nutrients. The lower layer has nutrients more readily available, but photosynthesis is light-limited. In open-ocean systems, biological production depends on intense nutrient recycling within the euphotic (sunlit) zone, with only a small fraction supported by the input of "new" nutrients. Previously there was a perception that the NPSG was a marine desert and that "new" nutrients were not commonly added to this system. The outlook has changed as scientists have begun to better understand this habitat. Although fairly high rates of primary production are maintained through rapid recycling of nutrients, physical processes such as internal waves and tides, cyclonic mesoscale eddies, wind-driven Ekman pumping, and atmospheric storms may carry in new nutrients. Nutrients that do not get used up at the surface will eventually sink down and nourish the seafloor habitat. The deep benthic habitats of the ocean gyres have been thought to typically consist of some of the most food-poor regions on the planet. One of the sources of nutrients to this deep ocean habitat is marine snow. Marine snow consists of detritus, dead organic matter, which falls from the surface waters where productivity is highest and exports carbon and nitrogen from the surface mixed layer to the deep ocean. Data on the abundance of marine snow on the deep ocean floor are lacking in this large ecosystem. However, Pilskaln et al. found that in the NPSG, marine snow was at a higher abundance than expected, surprisingly comparable to a deep coastal upwelling system.
Nutrient cycling:
This higher nutrient flux may be partly due to Rhizosolenia mats, which also play an important role in contributing to marine snow in subtropical gyres. These are generally multi-species associations of Rhizosolenia species of diatoms. These larger phytoplankton may reach up to tens of centimeters in size. The mats are particularly abundant in the NPSG. Their abundance in this ecosystem suggests a higher flux of nutrients in the NPSG than was predicted by classic theories.
Nutrient cycling:
While N is transported deeper by this mechanism, the surface waters are potentially cut off from this source, yet nitrogen must be available for life at the surface. To account for this lack of nitrogen at the surface, there are organisms in the NPSG that are capable of nitrogen fixation. Trichodesmium is one such nitrogen-fixing species, found in many surface plankton blooms. Nitrogen fixation is the process by which inert N2 is taken from the atmosphere and converted into a nitrogen compound that is available to organisms for use. In many oligotrophic marine ecosystems, nitrogen fixation is a common source of nitrogen.
Nutrient cycling:
Vertically migrating zooplankton can also actively transport nutrients to different zones of the water column. Zooplankton feed in the surface waters at night, and then by day release fecal pellets to the midwaters, which can transport C, N, and P to the deeper waters. In the NPSG the zooplankton community is not static but fluctuates seasonally and is dominated by copepods, euphausiids, and chaetognaths. Recently, classic theories about the lack of nutrients in the NPSG have been disproven, and new theories suggest that the ecosystem is actually dynamic, characterized by strong seasonal, interannual, and even decadal variability. It has also been deemed highly sensitive to climate change; scientists have observed increases in water column stratification and decreased inorganic nutrient availability. These changes are proposed as driving mechanisms shifting the phytoplankton community structure from eukaryotic to prokaryotic populations, as these simpler organisms can withstand a lower nutrient supply. Zooplankton and phytoplankton represent less than 10% of living organisms in this region, and it is now well documented that the NPSG is a "microbial ecosystem".
Microbial community:
Microbial organisms make up the majority of the primary producers in the NPSG. They are autotrophic, meaning they capture their own “food” from sunlight and chemicals, including CO2. These organisms comprise the base of the food chain, and thus their presence in an ecosystem is fundamental. In the NPSG, primary productivity is often described as low.
Microbial community:
Before 1978, scientists hypothesized that diatoms dominated plankton populations in the NPSG. The primary consumers were expected to be relatively large mesozooplankton. It is now well known that most of the algae in the NPSG are actually bacteria (unicellular organisms), dominated by cyanobacteria, or blue-green algae. These simple organisms make up the majority of the standing stock of photosynthesizing marine life in this ecosystem. Scientists have also recently discovered Archaea (also a single-celled microorganism, but more similar to a eukaryote than bacteria) genes in the NPSG, suggesting that additional diversity exists in this habitat. Many microorganisms may exist in this gyre because small body size has a competitive advantage in the ocean for resource (light and nutrients) acquisition. In the contemporary view of the NPSG, the microbial food web is always present, whereas the larger eukaryote-grazer food chain is seasonal and ephemeral.
Eukaryotic plankton community:
Eukaryotic plankton in the gyre is dependent on "new" nutrients coming in from physical weather patterns. The classic two-layered model discussed in previous sections considers the upper layer to be equivalent to a "spinning wheel," with little export of nutrients because they are constantly recycled. This model does not allow for the input of new nutrients, which is problematic because it would make any rapid increase, or bloom, of phytoplankton impossible. Despite ever-present nutrient limitation in the upper portion, plankton biomass and rates of primary production have considerable temporal variability and do produce blooms in the NPSG. This interannual variability has been attributed to alterations in upper ocean nutrient supply stemming from physical variations due to ENSO and the PDO. Based on new data, it now appears that present rates of primary production in these low nutrient regions are much greater than had been considered, and can vary significantly on time scales ranging from daily to interdecadal. In the spring, rapid increases in surface phytoplankton are occasionally observed in association with cyclonic mesoscale eddies or intense atmospheric disturbances, both physical processes that bring in new nutrients. In the summer, blooms are seen more regularly and are typically dominated by diatoms and cyanobacteria. These regular summer blooms may be caused by variations in the PDO. Summer blooms have been observed in these waters for as long as research vessels have been frequenting them. All of these blooms have been seen in the eastern part of the NPSG, with none reported west of 160° W. One hypothesis to explain this phenomenon is that the gyre as a whole is characterized by low phosphate, but that the bloom region of the eastern NPSG has considerably higher phosphate concentrations than the western part. Variations in primary production in the NPSG can significantly affect nutrient cycling, food-web dynamics, and global elemental fluxes. The size distribution of pelagic primary producers determines both the composition and magnitude of the exported nutrients to the deeper waters. This in turn affects the communities that live in the deeper waters of this system.
Mesopelagic community:
The mesopelagic zone is sometimes referred to as the twilight zone; it extends from 200 m to around 1,000 m. In the deeper layers of the NPSG, species higher up the food chain migrate vertically or horizontally within, or in and out of, the gyre. Based on analyses of the zooplankton community, the Central North Pacific has a high species diversity (a high number of species) and high equitability (relatively equal numbers of each). There is also a low degree of seasonal variability in zooplankton densities. Studies of mesopelagic fishes of central subtropical waters are scarce. The few studies that do exist found that mesopelagic fish species are not uniformly distributed throughout the subtropical Pacific Ocean. Their geographic ranges conform to patterns shown by zooplankton. Some of the species found are restricted to these low-productivity central gyres. Some of the families of fish that are highly represented are Myctophids, Gonostomatids, Photichthyids, Sternoptychids, and Melamphaids. Our understanding of the mesopelagic community of the NPSG suffers from an insufficiency of data due to the difficulty of accessing the deeper zones of this system.
Benthic community:
The deepest community in the NPSG is the benthic community. At the depths of the gyre lies a sea floor of fine-grained clay sediments. This sediment is home to a community of organisms, which generally receive their nutrients as a “rain” of productivity sinking from above. At depth under the gyre lies one of the most food-poor areas on the planet, which therefore supports very low densities and biomass of benthic infauna, or animals residing in the sediment. In the sediment itself, nutrients generally decline with depth, including carbon, chlorophyll, and nitrogen. Density of the benthic infauna is consistent with this nutrient pattern. Infauna are typically found in the shallower layers of sediments where the sediment-water interface lies and generally decrease in number with increasing depth in the sediment. Bacteria in the sediment show this pattern as well as macrofauna (infaunal organisms >0.5mm), which are dominated by agglutinating (soft-bodied) foraminifera and nematodes. Other prominent macrofauna found in the sediment are calcareous foraminifera, copepods, polychaetes, and bivalves. These benthic organisms rely heavily on the supply of nutrients that settle to the sea floor. Any change in primary production at the surface could pose a major threat to these organisms, as well as cause other potential negative outcomes to other parts of the NPSG.
Future and importance of the NPSG:
Until recently the NPSG was considered to be a static part of a vast global marine desert. Recent discoveries have proved that this system is dynamic and contains physical, chemical, and biological variability on a variety of time scales. With the current changing climate, patterns in the atmosphere are shifting and causing changes in primary production in the NPSG. Variations in primary productivity can affect the ocean carbon cycle and potentially atmospheric CO2 and climate, because such variations can change the amount of carbon that is stored in the subsurface layers of the oceans. Because the NPSG is the largest contiguous biome on earth, it is not only important to a community of organisms, but also the rest of the planet.
Future and importance of the NPSG:
The NPSG has received copious attention because of another issue it currently faces. The eddy effects of the gyre serve to retain pollutants in its center. If a pollutant gets trapped in a current headed toward a gyre, it will stay there indefinitely, or for as long as the life of the pollutant. One such pollutant that is persistent and common in the NPSG is plastic debris. The NPSG forces debris into its central area. This phenomenon has recently given the gyre the nickname "The Pacific Garbage Patch." The mean abundance and weight of plastic pieces in this area are currently the largest observed in the Pacific Ocean. Popular estimates of the extent of this plastic "soup" range from the size of Texas to the size of the continental United States. With increasing interest in pollution and climate change, the NPSG has gained more attention. It is important that our knowledge of this system continue to grow, for these reasons as well as for the understanding of the world's largest ecosystem.
**Fly-killing device**
Fly-killing device:
A fly-killing device is used for pest control of flying insects, such as houseflies, wasps, moths, gnats, and mosquitoes.
Flyswatter:
A flyswatter (or fly-swat, fly swatter) usually consists of a small rectangular or round sheet of a lightweight, flexible, vented material (usually thin metallic, rubber, or plastic mesh) around 10 cm (4 in) across, attached to a handle about 30 to 60 cm (1 to 2 ft) long made of a lightweight material such as wire, wood, plastic, or metal. The venting or perforations minimize the disruption of air currents, which are detected by an insect and allow escape, and also reduces air resistance, making it easier to hit a fast-moving target.
Flyswatter:
A flyswatter is ideally lightweight and stiff, allowing quick acceleration to overcome the fast reaction time of the fly (six to ten times faster than a human), while also minimizing damage caused by hitting other objects. The flyswatter usually works by mechanically crushing the fly against a hard surface, after the user has waited for the fly to land somewhere. However, users can also injure or stun an airborne insect mid-flight by whipping the swatter through the air at an extreme speed.
Flyswatter:
History:
Keeping insects at bay with short horsetail staffs and fans is an ancient practice, dating back to the Egyptian pharaohs. The earliest flyswatters were in fact nothing more than some sort of striking surface attached to the end of a long stick. An early patent on a commercial flyswatter was issued in 1900 to Robert R. Montgomery, who called it a fly-killer.
Flyswatter:
Montgomery sold his patent to John L. Bennett, a wealthy inventor and industrialist, who made further improvements on the design. The origin of the name "flyswatter" comes from Dr. Samuel Crumbine, a member of the Kansas board of health, who wanted to raise public awareness of the health issues caused by flies. He was inspired by a chant at a local Topeka softball game: "swat the ball". In a health bulletin published soon afterwards, he exhorted Kansans to "swat the fly". In response, a schoolteacher named Frank H. Rose created the "fly bat", a device consisting of a yardstick attached to a piece of screen, which Crumbine named "the flyswatter".
Fly gun:
The fly gun (or flygun), a derivative of the flyswatter, uses a spring-loaded plastic projectile to mechanically "swat" flies. Mounted on the projectile is a perforated circular disk, which, according to advertising copy, "won't splat the fly". Several similar products are sold, mostly as toys or novelty items, although some maintain their use as traditional fly swatters. Another gun-like design consists of a pair of mesh sheets spring-loaded to "clap" together when a trigger is pulled, squashing the fly between them. In contrast to the traditional flyswatter, such a design can only be used on an insect in mid-air.
Fly bottle:
A fly bottle or glass flytrap is a passive trap for flying insects. In the Far East, it is a large bottle of clear glass with a black metal top with a hole in the middle. An odorous bait, such as pieces of meat, is placed in the bottom of the bottle. Flies enter the bottle in search of food and are then unable to escape, because their phototaxis behavior leads them anywhere in the bottle except to the darker top where the entry hole is. A European fly bottle is more conical, with small feet that raise it to 1.25 cm (0.5 in), and with a trough about 2.5 cm (1 in) wide and deep that runs inside the bottle all around the central opening at the bottom of the container. In use, the bottle is stood on a plate and some sugar is sprinkled on the plate to attract flies, which eventually fly up into the bottle. The trough is filled with beer or vinegar, into which the flies fall and drown. In the past, the trough was sometimes filled with a dangerous mixture of milk, water, and arsenic or mercury chloride. Variants of these bottles are the agricultural fly traps used to fight the Mediterranean fruit fly and the olive fly, which have been in use since the 1930s. They are smaller, without feet, and the glass is thicker for rough outdoor usage, often involving suspension in a tree or bush. Modern versions of this device are often made of plastic and can be purchased in some hardware stores. They can also be improvised from disposable plastic drink bottles.
Disposable fly traps:
Disposable fly traps are small “use and throw away” fly traps. The traps are disposable plastic bags containing some attractant, generally made of flavoring agents that are non-toxic. Water and direct sunlight are used to activate the attractant, which emits a smell to lure the flies. Insects enter the trap and drown in the water inside.
Glue board:
A glue board is a capture device with a strong adhesive. A small card covered in sticky adhesive is situated in an enclosure so that when the flies come into contact with it, they stick to it and die. A reusable glue board may be renewed through the use of vegetable oil, and then the removal of the oil with dishwashing detergent and a rinse of water. Alternatively, the card is disposed of and completely replaced periodically.
Flypaper:
Flypaper (also known as fly paper, fly sticker, fly strip, fly ribbon, or fly tape) attracts flies to adhesive so that they can be trapped. The exposed adhesive strip makes it more stick-prone than an enclosed glue board. To avoid accidental entanglement with humans, the strips are often hung in relatively inaccessible spaces, such as near ceilings. One type of fly strip is packaged in a small cardboard tube with a pin on the top. It is used by pulling the pin off the top (usually covered with wax), removing the adhesive "fly strip" and using the pin to attach it to a ceiling, with the tube dangling below as a small weight. Flypaper is not reused, but is replaced when it loses effectiveness.
Flypaper:
Flypaper is often impregnated with a slightly odorous chemical to attract more flies. The attractiveness of flypaper to other insects (such as mosquitoes and biting midges) is sometimes enhanced by shining a small portable electric light on the sticky surface.
Bug vacuum:
A bug vacuum (bug vac or aspirator) is a type of small but powerful portable vacuum cleaner, usually with internal batteries. The motor starts quickly and generates strong suction, trapping the flying insect inside the device. The insect may be captured on an adhesive internal surface, or simply held inside the device until it dehydrates and dies.
Bug vacuum:
Some bug vacuums feature non-lethal designs which keep trapped insects inside but do not otherwise harm them, allowing their later release. These devices are popular with entomologists and persons who wish to avoid killing insects. A related device powered by mouth suction, called a pooter, is used by entomologists and students to capture small organisms for study.
Fan-based trap:
This design uses a continuously running electric fan to suck in flying insects (especially mosquitos and gnats, which are weak fliers), which are then trapped by a fine mesh grid or bag. Unable to escape the constant airflow, the insects quickly dehydrate and die. Some variant designs use carbon dioxide, ultraviolet light, or chemical scent to attract insects to the trap. Other designs rely on the natural carbon dioxide or scents emitted by people, pets, or livestock to attract pests, and simply collect flying insects as they wander close enough to be sucked in. In addition, the continuous breeze produced by a common electric fan has been found to discourage mosquitos from landing and biting, even without trapping or killing the insects.
Bug zapper:
A bug zapper electric grid (fly zapper) kills insects by electrocution from high voltage on adjacent metallic grids. Bug zappers are generally small appliances intended for use in a fixed location, as distinguished from hand held electric flyswatters.
Electric flyswatter:
An electric flyswatter (sometimes called a mosquito bat, racket zapper, or zap racket) is a battery-powered, handheld bug zapper that resembles a tennis racket, invented by Tsao-i Shih in 1996. The handle contains a battery-powered high-voltage generator. The circuit is a minimalist self-oscillating voltage booster that is small, low-cost, and composed of very few components, and it continues to operate even when the battery is depleted to a fraction of its original voltage, a so-called Joule thief circuit. The flyswatter generates a voltage of between 500 and 3,000 volts (V) when a button switch is held down; the voltage is applied between two grid or mesh electrodes. When the body of a fly bridges the gap between the electrodes, a current passes through the fly. A capacitor attached to the electrodes discharges during the spark, and this initial discharge usually stuns or kills the fly. If the button is kept depressed, the continuous current will rapidly kill and incinerate a small fly.
Electric flyswatter:
In some swatters, an inner expanded metal or wire grid mesh is sandwiched between two outer arrays of rods, designed so that fingers are not able to poke through and bridge the electrodes, while small insects can. Other swatters have an array of rods, with high voltage between any rod and its neighbor.
Electric flyswatter:
Most electric flyswatters conform to electrical safety standards for humans. The first is a limit on the net charge stored in the capacitor: a discharge of less than 45 microcoulombs (µC) is considered safe, even in the unlikely scenario that the current from a flyswatter would flow from one arm to the other, partly through the heart. For example, the capacitor of a 1000 V flyswatter should be less than 45 nanofarads (nF); the arithmetic is shown below. Due to this precaution for human safety, the initial shock is usually inadequate to kill larger insects, but will still stun them for long enough that they can be disposed of.
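The 45 nF figure follows directly from the charge–voltage relation of a capacitor; this is a worked check using the numbers above, not an additional specification:

$$C = \frac{Q}{V} = \frac{45\ \mu\text{C}}{1000\ \text{V}} = 45\ \text{nF}.$$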
Electric flyswatter:
The second is a limit on the current after the initial discharge: the maximal continuous current of most flyswatters is less than 5 milliamperes (mA). This current is safe even when flowing from one arm to the other arm of a human. An advantage over conventional flyswatters is that the electrical models do not have to crush the fly against a surface to kill it, avoiding the smeared mess this can create. Electric swatters kill insects when airborne, not resting on a surface; insects on a surface will start flying as the swatter approaches, allowing the swatter to strike them.
**Ultraviolet light therapy**
Ultraviolet light therapy:
Ultraviolet light therapy or ultraviolet phototherapy is a treatment for psoriasis, atopic skin disorder, vitiligo and other skin diseases. There are two main treatments: UVB, which is the most common, and PUVA.
There are four types of UVB lamps: fluorescent broad-band UVB, emitting 280–330 nanometers; fluorescent narrow-band, emitting 312 nanometers; excimer, emitting 308 nanometers; and LED, emitting 290–300 nanometers.
PUVA means UVA + psoralen: it combines the photosensitizer psoralen with irradiation of the skin by UVA ultraviolet light from fluorescent bulbs or LED lamps.
Tanning beds are used both in dermatology practices for the treatment of cosmetic skin conditions (such as psoriasis, acne, eczema and vitiligo) and in indoor tanning salons for cosmetic tanning.
Ultraviolet light therapy:
Typical treatment regimens involve short exposure to UVB rays 3 to 5 times a week at a hospital or clinic, and repeated sessions may be required before results are noticeable. Almost all of the conditions that respond to UVB light are chronic problems, so continuous treatment is required to keep them in check. Home UVB systems are a common solution for those whose conditions respond to treatment. Home systems permit patients to treat themselves every other day (the ideal treatment regimen for most) without frequent, costly trips to the office or clinic, especially when the treatment area is small and the price of the lamp is low.
Side effects:
Side effects may include itching and redness of the skin due to UVB exposure, and possibly sunburn if patients do not minimize exposure to natural UV rays during treatment days. Cataracts can frequently develop if the eyes are not protected from UV light exposure. To date, there is no established link between an increase in a patient's risk of skin cancer and the proper use of narrow-band UVB phototherapy. "Proper use" is generally defined as staying within the "sub-erythemic dose" (SED), the maximum amount of UVB the skin can receive without burning.
Side effects:
Certain fungal growths under the toenail can be treated using a specific wavelength of UV delivered from a high-power LED (light-emitting diode) and can be safer than traditional systemic drugs.
**Marsquake**
Marsquake:
A marsquake is a quake which, much like an earthquake, would be a shaking of the surface or interior of the planet Mars as a result of the sudden release of energy in the planet's interior, such as the result of plate tectonics, from which most quakes on Earth originate, or possibly from hotspots such as Olympus Mons or the Tharsis Montes. The detection and analysis of marsquakes could be informative for probing the interior structure of Mars, as well as for identifying whether any of Mars's many volcanoes continue to be volcanically active. Quakes have been observed and well-documented on the Moon, and there is evidence of past quakes on Venus. Marsquakes were first detected, but not confirmed, by the Viking mission in 1976. Marsquakes were detected and confirmed by the InSight mission in 2019, and using InSight data and analysis, the Viking marsquakes were confirmed in 2023. Compelling evidence has been found that Mars was seismically more active in the past, with clear magnetic striping over a large region of southern Mars. Magnetic striping on Earth is often a sign of a region of particularly thin crust splitting and spreading, forming new land in the slowly separating rifts, a prime example of this being the Mid-Atlantic Ridge. However, no clear spreading ridge has been found in this region, suggesting that another, possibly non-seismic, explanation may be needed.
Marsquake:
The 4,000 km (2,500 mi) long canyon system, Valles Marineris, has been suggested to be the remnant of an ancient Martian strike-slip fault. The first confirmed seismic event emanating from Valles Marineris, a quake with a magnitude of 4.2, was detected by InSight on 25 August 2021, proving it to be an active fault.
Detectability:
The first attempts to detect seismic activity on Mars were made by the Viking program, whose two landers, Viking 1 and 2, arrived in 1976 with seismometers mounted on top of each lander. The seismometer on the Viking 1 lander failed. The Viking 2 seismometer collected 2,100 hours (89 days) of data over the 560 sols of lander operation. Viking 2 recorded two possible marsquakes, on Sol 53 (daytime, during a windy period) and Sol 80 (nighttime, during a low-wind period). Due to the inability to separate ground motion from wind-driven lander vibrations, and the lack of other corroborating possible marsquakes, the Sol 53 and Sol 80 events could not be confirmed during the Viking mission. It was, however, possible to rule out frequent and large marsquakes at that time. The low detection rate, and evaluation of periods when the wind speed was low at the Viking 2 site, allowed limits to be placed on seismic activity on Mars.
Detectability:
In 2013, the planned InSight mission (see below) led to increased interest in the Viking data set, and further analysis may reveal one of the largest collections of Mars dust devil detections. In 2023, a re-evaluation of Viking 2 data, using InSight data and analysis together with Viking wind data, confirmed that the two Viking events on Sols 53 and 80 were marsquakes. The InSight Mars lander, launched in May 2018, landed on Mars on 26 November 2018 and deployed a seismometer called the Seismic Experiment for Interior Structure (SEIS) on 19 December 2018 to search for marsquakes and analyze Mars's internal structure. Even if no seismic events are detected, the seismometer is expected to be sensitive enough to detect possibly several dozen meteors causing airbursts in Mars's atmosphere per year, as well as meteorite impacts. It will also investigate how the Martian crust and mantle respond to the effects of meteorite impacts, which gives clues to the planet's inner structure. A faint seismic signal, believed to be a small marsquake, was measured and recorded by the InSight lander on 6 April 2019. The lander's seismometer detected ground vibrations while three distinct kinds of sounds were recorded, according to NASA. Three other events were recorded on 14 March, 10 April, and 11 April, but these signals were even smaller and more ambiguous in origin, making it difficult to determine their cause. On 4 May 2022, a large marsquake, estimated at magnitude 5, was detected by the seismometer on the InSight lander.
Candidate natural seismic events:
Despite the drawbacks of significant wind interference, on Sol 80 of the Viking 2 lander's mission (roughly November 23, 1976), the on-board seismometer detected an unusual acceleration event during a period of relatively low wind speed. Based on the features of the signal, and assuming Mars's crust behaves similarly to Earth's crust near the lander's testing site in Southern California, the event was estimated to have a Richter magnitude of 2.7 at a distance of roughly 110 kilometers. However, the wind speed was only measured 20 minutes before and 45 minutes after the event, at 2.6 and 3.6 meters per second respectively. While a sudden wind gust of 16 m/s would have been required to produce the event, such a gust could not be completely ruled out. The Sol 80 event was later identified as a marsquake. The earlier Sol 53 event initially received much interest as a possible marsquake, but was correlated with wind and not considered further. Following the re-evaluation of the Sol 80 event as a marsquake, a re-evaluation of the Sol 53 event showed that it was unique among all Viking daytime recorded waveforms and the only one with a rapid rise time and a duration similar to Sol 80. Consequently, Sol 53 was identified as a marsquake. Comparing Viking with InSight events, recorded with technologies 43 years apart, is challenging, but comparison of the two Viking events with some InSight events showed reasonable waveform similarity. On Sol 128 of the InSight lander mission, the Seismic Experiment for Interior Structure (SEIS) detected one magnitude 1–2 seismic event, on April 6, 2019. Three other unconfirmed candidate seismic events were detected on March 14, April 10 and April 11, 2019. The quake is similar to moonquakes detected during the Apollo program. It could have been caused by activity internal to the planet or by a meteorite striking the surface. The epicenter was believed to be within 100 km of the lander. As of 30 September 2019, SEIS had reported 450 events of various types.
Experimental (artificial) seismic events:
NASA's Mars Perseverance rover will act as a seismic source of known temporal and spatial localization as it lands on the surface. The InSight lander will evaluate whether the signals produced by this event are detectable from 3,452 km away.
**Synthalin**
Synthalin:
Synthalin was an oral anti-diabetic drug. Discovered in 1926, it was marketed in Europe by Schering AG of Berlin as a synthetic drug with insulin-like properties that could be taken orally. However, it was toxic to the liver and kidney and was withdrawn from the market in the early 1940s.
History:
The folk remedy French lilac (Galega officinalis) was used to treat the symptoms of diabetes, and towards the end of the nineteenth century it was discovered to contain guanidine. This had a hypoglycaemic effect but was very toxic to the liver. Karl Slotta at the Chemistry Institute of the University of Vienna synthesized derived compounds that had a polymethylene chain with a guanidine group at each end. These diguanides were less toxic and more potent than guanidine. In 1926, E. Frank, working in Oskar Minkowski's clinic in Wroclaw, performed a clinical trial on one of these agents. It was subsequently marketed as Synthalin by Schering AG for treating mild cases of diabetes.
History:
Adverse reports on the toxicity of Synthalin prompted the development of Synthalin B (which had a slightly longer polymethylene chain and was claimed to be safer) and the former product was re-branded Synthalin A. However liver toxicity continued to be a problem, leading to discontinuation in the 1930s, though Synthalin B continued to be used in Germany until the mid-1940s.
History:
Anti-trypanosome:
After it was discovered that trypanosomes require a plentiful supply of glucose in order to reproduce, researchers tested Synthalin and related compounds to see if they could be effective treatments. Synthalin was effective, at doses lower than would interfere with blood sugar in the patient. Further modifications to the chemical structure led to the diamidine class of drugs, of which pentamidine is still used against trypanosomiasis. Pentamidine is also effective against a range of protozoa such as Pneumocystis jirovecii, which causes pneumocystis pneumonia in AIDS patients.
**Phosphatidylinositol-3-phosphatase**
Phosphatidylinositol-3-phosphatase:
The enzyme phosphatidylinositol-3-phosphatase (EC 3.1.3.64) catalyzes the reaction
1-phosphatidyl-1D-myo-inositol 3-phosphate + H2O ⇌ 1-phosphatidyl-1D-myo-inositol + phosphate
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is 1-phosphatidyl-1D-myo-inositol-3-phosphate 3-phosphohydrolase. Other names in common use include inositol-1,3-bisphosphate 3-phosphatase, inositol 1,3-bisphosphate phosphatase, inositol-polyphosphate 3-phosphatase, D-myo-inositol-1,3-bisphosphate 3-phosphohydrolase, and phosphatidyl-3-phosphate 3-phosphohydrolase. This enzyme participates in inositol phosphate metabolism and the phosphatidylinositol signaling system.
Structural studies:
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1LW3 and 1M7R.
**Center vortex**
Center vortex:
Center vortices are line-like topological defects that exist in the vacuum of Yang–Mills theory and QCD. There is evidence in lattice simulations that they play an important role in the confinement of quarks.
Topological description:
Center vortices carry a gauge charge under the center elements of the universal cover of the gauge group G. Equivalently, their topological charge is an element of the fundamental group of this universal cover quotiented by its center.
Topological description:
On a 2-dimensional space M a center vortex at a point x may be constructed as follows. Begin with a trivial G bundle over M. Cut along a circle linking x. Glue the total space back together with a transition function which is a map from the cut circle to a representation of G. The new total space is the gauge bundle of a center vortex.
Topological description:
Now the vortex at x is constructed. Its topological charge can be computed as follows. Lifting this map up to the universal cover of G, each time one circumnavigates the circle, the transition function shifts by some element in the center of the universal cover. This element is the charge.
Center vortices also exist in higher-dimensional spaces. They are always of codimension two, and the above construction is generalized by cutting along a tube surrounding the vortex.
In SU(N) theories:
In the case of SU(N) gauge theories, the center consists of the constant matrices z_n = e^{2πin/N} I, where I is the unit matrix. These elements form the abelian subgroup Z_N. Under such center elements, quarks transform as ψ → e^{2πin/N} ψ, while gluons are invariant. This means that, if quarks are free (as in the deconfined phase), the center symmetry is broken. Restoration of the center symmetry implies confinement. 't Hooft first put this on a more rigorous footing. The two phases of the theory can be distinguished by the behavior of the vortices. Consider a given Wilson loop: if the vortices are generally long, most vortices will pierce the surface spanned by the loop only once, and the number of piercings will grow in proportion to the area of that surface. Since the vortices suppress the vacuum expectation value of the Wilson loop, this leads to an area law, i.e. the Wilson loop W(C) behaves like ⟨W(C)⟩ ∝ e^{−σA}, where A is the area spanned by the loop. The constant σ is called the string tension. This behavior is typical of confinement. However, in a regime where vortices are generally short (i.e. they form small loops), they will usually pierce the surface of the Wilson loop twice, in opposite directions, so that the two contributions cancel. Only vortex loops near the Wilson loop itself will pierce it once, leading to a contribution scaling with the perimeter: ⟨W(C)⟩ ∝ e^{−αL}, with L the length of the Wilson loop and α some constant. This behavior signals that there is no confinement.
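A standard back-of-the-envelope estimate (a sketch, not taken from the source) makes the area law quantitative for the gauge group SU(2), whose nontrivial center element is −1. Assume vortex piercings of the minimal surface spanned by the loop are independent and occur with density ρ per unit area, so the number of piercings n is Poisson-distributed with mean ρA; each piercing flips the sign of the Wilson loop:

$$\langle W(C)\rangle = \sum_{n=0}^{\infty} e^{-\rho A}\,\frac{(\rho A)^n}{n!}\,(-1)^n = e^{-2\rho A},$$

which reproduces the area law with string tension $\sigma = 2\rho$.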
In SU(N) theories:
In lattice simulations this behavior is indeed seen. At low temperatures (where there is confinement), vortices form large, complex clusters and percolate through space. At higher temperatures (above the deconfinement phase transition), vortices form small loops. Furthermore, it has been seen that the string tension drops almost to zero when center vortices are removed from the simulation. On the other hand, the string tension remains approximately unchanged when everything except the center vortices is removed. This clearly shows the close relation between center vortices and confinement. Aside from this, it has also been shown in simulations that the vortices have a finite density in the continuum limit (meaning they are not a lattice artifact but exist in reality), and that they are also linked with chiral symmetry breaking and topological charge. One subtlety concerns the string tension at intermediate range and in the large-N limit. According to the center vortex picture, the string tension should depend on the way the matter fields transform under the center, i.e. their so-called N-ality. This seems to be correct for the large-distance string tension, but at smaller distances the string tension is instead proportional to the quadratic Casimir of the representation (so-called Casimir scaling). This has been explained by domain formation around center vortices. In the large-N limit, this Casimir scaling persists out to large distances.
In gauge theories with trivial center:
Consider the gauge group SO(3). It has a trivial center but its fundamental group π1(SO(3)) is Z2. Similarly its universal cover is SU(2) whose center is again Z2. Thus center vortices in this theory are charged under Z2 and so one expects that pairs of vortices can annihilate.
G2 gauge theory also does not have a long-range string tension, which is consistent with the center vortex picture. In this theory, gluons can screen quarks, leading to color singlet states with the quantum numbers of quarks. Casimir scaling is, however, still present at intermediate ranges, i.e. before string breaking occurs. This can be explained by domain formation.
**Chassis dynamometer**
Chassis dynamometer:
A chassis dynamometer, informally referred to as a rolling road, is a mechanical device that uses one or more fixed roller assemblies to simulate different road conditions within a controlled environment, and is used for a wide variety of vehicle testing and development purposes.
Chassis dynamometer types:
There are many types of chassis dynamometer according to the target application: for example, emissions measurement, mileage accumulation chassis dynamometers (MACD), Noise-Vibration-Harshness (NVH or "acoustic") applications, electromagnetic compatibility (EMC) testing, end-of-line (EOL) tests, and performance measurement and tuning. Another basic division is by type of vehicle (motorcycles, cars, trucks, tractors) or by the size of the roller, most commonly 25", 48" or 72", but also other sizes. Modern dynamometers used for development mostly have a one-roller-per-wheel construction, with the vehicle wheel placed on top of the roller. Older designs use two rollers per wheel, with the vehicle placed between the rollers; this solution is cheaper and simpler, but due to the accuracy requirements and strict limits it is no longer used for the development of new vehicles, serving only as an end-of-line test dynamometer, for measuring engine performance without dismantling, or for performance tuning in "garage" companies.
Basic modes:
Tractive force control/force constant: in this mode the dynamometer holds a set force regardless of speed or other parameters. The specified force can be distributed evenly between the axles, or in different amounts between different axles in the case of multi-axle chassis dynamometers.
Speed control/velocity constant: the dynamometer holds the set speed regardless of force or other parameters. For example, if a vehicle tries to accelerate in this mode, the dynamometer applies an opposing force to maintain the set constant speed. This mode is used, for example, in static power measurement.
Road load simulation: the dynamometer simulates the road according to the set parameters (the desired simulation coefficients F0, F1 and F2, also called ABC parameters, together with the simulated inertia and gradient); a sketch of the resulting force law follows below.
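As an illustration, a minimal sketch in Python of the force the dynamometer applies in road load simulation mode; the function name, signature and coefficient values are ours for illustration, not from any dynamometer vendor:

```python
G = 9.81  # gravitational acceleration, m/s^2

def road_load_force(v, f0, f1, f2, mass, gradient=0.0):
    """Total resistive force (N) to apply at vehicle speed v (m/s).

    f0 + f1*v + f2*v**2 is the classic road-load polynomial
    (rolling resistance, speed-proportional losses, aerodynamic drag);
    mass * G * gradient adds the slope component, with the gradient
    given as a fraction (e.g. 0.05 for a 5 % uphill grade).
    """
    return f0 + f1 * v + f2 * v ** 2 + mass * G * gradient

# Illustrative coefficients for a mid-size car at about 100 km/h:
print(road_load_force(v=27.8, f0=120.0, f1=1.0, f2=0.4, mass=1500))
```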
Measured variables on a roller dynamometer:
The only directly measured variables are the force at the torque transducer (i.e. load cell) and the revolutions measured by the roller encoder. All other variables are calculated from the known design (i.e. the roller radius and the load cell mounting).
Power measurement on a chassis dynamometer:
Due to friction and mechanical losses in various parts of the power train, the measured power at the wheels is about 15 to 20 percent lower than the power measured directly at the output of the engine crankshaft (a measuring device for that purpose is called an engine testbed).
Road load simulation principle on chassis dynamometer:
Because the vehicle is secured to the chassis dynamometer, variables such as wind resistance are prevented from altering the data set. The chassis dynamometer is instead designed so that the sum of all the forces acting on a vehicle driven on an actual road course is simulated through the tires and reflected in the test results.
On the road, air drag increases with speed; on the dynamometer this manifests as an increasing braking force at the vehicle's wheels. The aim is to make the vehicle accelerate and decelerate on the dynamometer in the same way as on a real road.
To do this, one first needs to know the parameters describing the vehicle's behavior on a real road.
Road load simulation principle on chassis dynamometer:
To obtain these "road parameters", the vehicle is driven on an ideally flat road with no wind from any direction, the gearbox is set to neutral, and the time needed to slow down without braking is measured over set speed intervals, e.g. 100–90 km/h, 90–80 km/h, 80–70 km/h, 70–60 km/h, etc. Slowing down through an interval at higher speed takes less time, mainly due to air resistance.
Road load simulation principle on chassis dynamometer:
Those parameters are later entered into the dynamometer workstation, together with the vehicle inertia. The vehicle is then restrained and a so-called vehicle adaptation is performed.
Road load simulation principle on chassis dynamometer:
During vehicle adaptation the dynamometer automatically coasts down from a set speed, adjusting its own "dyno parameters" until it reproduces the same deceleration over the given intervals as measured on the real road. Those parameters are then valid for that vehicle type. By changing the simulated inertia it is possible to simulate the vehicle's ability to accelerate when fully loaded; by setting a gradient it is possible to simulate the forces acting on a vehicle going downhill, and so on. Chassis dynamometers for climatic chambers also exist, in which the temperature can be varied over a given range (e.g. −40 to +50 °C), as do altitude chambers, making it possible to check fuel consumption at different temperatures or pressures and to simulate driving on mountain roads.
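The coast-down measurement described above can be reduced to road-load coefficients with a simple quadratic fit, since on a flat, windless road the effective mass times the measured deceleration equals F0 + F1·v + F2·v². The sketch below assumes illustrative numbers throughout (speeds, decelerations, and effective mass are invented for demonstration):

```python
import numpy as np

# Mid-interval speeds [m/s] and decelerations [m/s^2] from a coast-down run,
# e.g. the deceleration over 100-90 km/h evaluated at its mean speed.
v_mean = np.array([26.4, 23.6, 20.8, 18.1])
decel = np.array([0.42, 0.36, 0.31, 0.27])
mass_eff = 1500.0   # vehicle mass plus rotating-parts equivalent [kg], assumed

# Flat, windless road: mass_eff * a(v) = F0 + F1*v + F2*v^2,
# so a quadratic least-squares fit recovers the road-load coefficients.
f2, f1, f0 = np.polyfit(v_mean, mass_eff * decel, deg=2)
print(f"F0 = {f0:.1f} N, F1 = {f1:.2f} N/(m/s), F2 = {f2:.3f} N/(m/s)^2")
```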
**Semicubical parabola**
Semicubical parabola:
In mathematics, a cuspidal cubic or semicubical parabola is an algebraic plane curve that has an implicit equation of the form $y^2 - a^2x^3 = 0$ (with $a \neq 0$) in some Cartesian coordinate system.
Solving for $y$ leads to the explicit form $y = \pm ax^{\frac{3}{2}}$, which implies that every real point satisfies $x \geq 0$. The exponent $\tfrac{3}{2}$ explains the term semicubical parabola. (A parabola can be described by the equation $y = ax^2$.) Solving the implicit equation for $x$ yields a second explicit form $x = \left(\frac{y}{a}\right)^{\frac{2}{3}}$.
The parametric equation $x = t^2,\; y = at^3$ can also be deduced from the implicit equation by putting $t = \frac{y}{ax}$. The semicubical parabolas have a cuspidal singularity; hence the name cuspidal cubic.
The arc length of the curve was calculated by the English mathematician William Neile and published in 1657 (see section History).
Properties of semicubical parabolas:
Similarity: Any semicubical parabola $(t^2, at^3)$ is similar to the semicubical unit parabola $(u^2, u^3)$. Proof: the similarity $(x, y) \to (a^2x, a^2y)$ (uniform scaling) maps the semicubical parabola $(t^2, at^3)$ onto the curve $((at)^2, (at)^3) = (u^2, u^3)$ with $u = at$.
Singularity: The parametric representation $(t^2, at^3)$ is regular except at the point $(0, 0)$, where the curve has a singularity (cusp). The proof follows from the tangent vector $(2t, 3at^2)$: only for $t = 0$ does this vector have zero length.
Properties of semicubical parabolas:
Tangents: Differentiating the semicubical unit parabola $y = \pm x^{\frac{3}{2}}$, one gets at a point $(x_0, y_0)$ of the upper branch the equation of the tangent:
$$y = \frac{\sqrt{x_0}}{2}\left(3x - x_0\right).$$
This tangent intersects the lower branch at exactly one further point, with coordinates $\left(\frac{x_0}{4}, -\frac{y_0}{8}\right)$.
(To prove this statement one should use the fact that the tangent meets the curve at $(x_0, y_0)$ twice.)
Arc length: To determine the arc length of a curve $(x(t), y(t))$ one has to solve the integral $\int \sqrt{x'(t)^2 + y'(t)^2}\; dt$. For the semicubical parabola $(t^2, at^3),\; 0 \leq t \leq b$, one gets
$$\int_0^b t\sqrt{4 + 9a^2t^2}\; dt = \frac{1}{27a^2}\left[\left(4 + 9a^2t^2\right)^{\frac{3}{2}}\right]_0^b.$$
(The integral can be solved by the substitution $u = 4 + 9a^2t^2$; see the derivation below.) Example: For $a = 1$ (semicubical unit parabola) and $b = 2$, which means the length of the arc between the origin and the point $(4, 8)$, one gets the arc length $\approx 9.073$.
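Filling in the substitution step named in the parenthetical remark above (standard calculus with $u = 4 + 9a^2t^2$, so $du = 18a^2t\, dt$):
$$\int_0^b t\sqrt{4 + 9a^2t^2}\; dt = \frac{1}{18a^2}\int_{4}^{4+9a^2b^2} \sqrt{u}\; du = \frac{1}{27a^2}\Bigl[u^{\frac{3}{2}}\Bigr]_{4}^{4+9a^2b^2} = \frac{1}{27a^2}\Bigl[\bigl(4 + 9a^2t^2\bigr)^{\frac{3}{2}}\Bigr]_0^b.$$
For $a = 1$ and $b = 2$ this gives $\bigl(40^{\frac{3}{2}} - 8\bigr)/27 \approx 9.073$, matching the example above.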
Properties of semicubical parabolas:
Evolute of the unit parabola: The evolute of the parabola $(t^2, t)$ is a semicubical parabola shifted by $\tfrac{1}{2}$ along the x-axis: $\left(\frac{1}{2} + t^2,\; \frac{4}{\sqrt{3}^3}t^3\right)$.
Polar coordinates: In order to get the representation of the semicubical parabola $(t^2, at^3)$ in polar coordinates, one determines the intersection point of the line $y = mx$ with the curve. For $m \neq 0$ there is one point different from the origin: $\left(\frac{m^2}{a^2}, \frac{m^3}{a^2}\right)$. This point has distance $\frac{m^2}{a^2}\sqrt{1 + m^2}$ from the origin. With $m = \tan\varphi$ and $\sqrt{1 + \tan^2\varphi} = \sec\varphi$ (see List of trigonometric identities) one gets
$$r = \frac{\tan^2\varphi}{a^2}\sec\varphi\,, \quad -\frac{\pi}{2} < \varphi < \frac{\pi}{2}.$$
Properties of semicubical parabolas:
Relation between a semicubical parabola and a cubic function: Mapping the semicubical parabola $(t^2, t^3)$ by the projective map $(x, y) \to \left(\frac{x}{y}, \frac{1}{y}\right)$ (an involutory perspectivity with axis $y = 1$ and center $(0, -1)$) yields $\left(\frac{1}{t}, \frac{1}{t^3}\right)$, hence the cubic function $y = x^3$.
The cusp (origin) of the semicubical parabola is exchanged with the point at infinity of the y-axis.
This property can also be derived by representing the semicubical parabola in homogeneous coordinates: in the implicit equation $y^2 - a^2x^3 = 0$, the replacement $x = \frac{x_1}{x_3},\; y = \frac{x_2}{x_3}$ is made (the line at infinity has equation $x_3 = 0$) and the result is multiplied by $x_3^3$. One gets the equation of the curve in homogeneous coordinates:
$$x_2^2\, x_3 - a^2 x_1^3 = 0.$$
Choosing the line $x_2 = 0$ as the line at infinity and introducing $x = \frac{x_1}{x_2},\; y = \frac{x_3}{x_2}$ yields the (affine) curve $y = x^3$.
Properties of semicubical parabolas:
Isochrone curve An additional defining property of the semicubical parabola is that it is an isochrone curve, meaning that a particle following its path while being pulled down by gravity travels equal vertical intervals in equal time periods. In this way it is related to the tautochrone curve, for which particles at different starting points always take equal time to reach the bottom, and the brachistochrone curve, the curve that minimizes the time it takes for a falling particle to travel from its start to its end.
History:
The semicubical parabola was discovered in 1657 by William Neile who computed its arc length. Although the lengths of some other non-algebraic curves including the logarithmic spiral and cycloid had already been computed (that is, those curves had been rectified), the semicubical parabola was the first algebraic curve (excluding the line and circle) to be rectified. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Collaborative writing**
Collaborative writing:
Collaborative writing, or collabwriting, is a method of group work that takes place in the workplace and in the classroom. Researchers expand the idea of collaborative writing beyond groups working together to complete a writing task. Collaboration can be defined as individuals communicating, whether orally or in written form, to plan, draft, and revise a document. The success of collaboration in group work is often contingent on a group's agreed-upon plan of action. At times, success in collaborative writing is hindered by a group's failure to adequately communicate its desired strategies.
Definition:
Collaborative writing refers to a distributed process of labor involving writing, resulting in the co-authorship of a text by more than one writer.
Interaction between participants throughout the entire writing process, whether brainstorming, drafting the project, or reviewing.
Shared power among participants. Everyone included in the project has the power to make decisions and no group member is in charge of all the text produced.
Definition:
The collaborative production of one single and specific text. Collaborative writing is often the norm, rather than the exception, in many academic and workplace settings. Some theories of collaborative writing suggest that in the writing process all participants are to have equal responsibilities. In this view, all sections of the text should be split up to ensure the workload is evenly distributed, all participants work together and interact throughout the writing process, and everyone contributes to planning, generating ideas, structuring the text, editing, and the revision process. Other theories of collaborative writing propose a more flexible understanding of the workflow that accounts for varying contribution levels depending on the expertise, interest, and role of participants.
History:
In Rhetoric, Composition, and Writing Studies, scholars have demonstrated how collaborative learning in U.S. contexts has been informed by John Dewey’s progressivism in the early twentieth century. Collaboration and collaborative writing gained traction in these fields in the 1980s especially, as researchers reacted to poststructuralist theories related to social constructionism and began theorizing more social views of writing.
Types:
Collaborative writing processes are extremely context-dependent. In scholarship on both academic and business writing, multiple terminologies have been identified for collaborative writing processes, including: Single-author or collegial writing: one person leads, compiling the group's ideas and doing the writing.
Sequential writing: each person adds their task work then passes it on for the next person to edit freely.
Horizontal-division or parallel writing: each person does one part of the whole project and then one member compiles it.
Stratified-division writing: each person plays a role in the composition process of a project due to talents.
Reactive or reciprocal writing: the whole group works on and writes the project at the same time, adjusting and commenting on everyone's work.
Uses of collaborative writing:
Collaborative writing may be used in instances where a workload would be overwhelming for one person to produce. Therefore, ownership of the text is from the group that produced it and not just one person.
Uses of collaborative writing:
In 2012, Bill Tomlinson and colleagues provided the first extensive discussion of the experiential aspects of large-scale collaborative research by documenting the collaborative development process of an academic paper written by a collective of thirty authors; their work identifies key tools and techniques that are necessary or useful to the writing process, and to discovering, negotiating, and documenting issues in massively authored scholarship. In 2016, researchers Joy Robinson, Lisa Dusenberry, and Lawrence M. Halcyon conducted a case study investigating the productivity of a team of writers who utilized the practice of interlaced collaborative writing, and found that the team was able to produce a published article, a two-year grant proposal, a digital and physical poster, a midterm research report, and a conference presentation over the course of three years. The writers used virtual tools such as Google Hangouts' voice feature for group check-ins, to hold group discussions, and to write as a group. Google Docs was used to allow each team member to edit and add writing to a shared document throughout the writing process. Another motive for using collaborative writing is to increase the quality of the completed project by combining the expertise of multiple individuals and allowing feedback from diverse perspectives. Collaborative writing has been shown to be an effective method of improving an individual's writing skills, regardless of their proficiency level, by allowing them to collaborate with and learn from one or more partners and participate in the co-ownership of a written piece. Instructors may utilize this technique to create more student-centered and collaborative learning environments, or they may use it themselves to collaborate with other academics to produce publishable works.
Views on collaborative writing:
Linguist Neomy Storch, in a 2005 Australian study, discovered that reflections on collaborative writing by second-language learners in the classroom were overwhelmingly positive. The study compared the nature of collaborative writing in individual work versus group work, and Storch found that although paired groups wrote shorter texts, their work was more complex and accurate compared to individual works. The study consisted of 23 total participants: 5 doing individual work and 18 working in pairs. The pairs consisted of two male pairs, four female pairs and three male/female pairs. Post-assignment interviews revealed that the majority of students (16) held positive opinions about group work, but two students felt that group work is best reserved for oral activities and discussions rather than writing assignments. The majority of interviewees gave positive reviews, but one argued that group work was difficult when it came to criticizing another's work, and another argued that there is a power imbalance when writing is based on ability. The two students who were stark opponents of collaborative writing revealed that it was hard to concentrate on their work and that they were embarrassed by their supposedly poor English skills. Jason Palmeri found that when it came to inter-professional collaboration, most of the issues stemmed from miscommunication. In differing disciplines, one person may have a level of expertise and understanding that is foreign to another. Palmeri's study provided the example of a nurse and an attorney having different areas of expertise, and therefore differing understandings of concepts and even of the meaning of the same words. While many of the issues resulted from miscommunication, the study found that some nurse consultants resisted change in terms of altering their writing style to fit the understanding or standards of the attorneys. Obstacles to collaborative work include writers' inability to find time to meet with the rest of the group, personal preferences for organization and writing process, and a fear of being criticized.
Collaborative writing as an educational tool:
Collaborative writing is a technique used by educators to improve the writing skills of students. This method can assist writers of all ages and levels of proficiency to produce texts of a higher quality, with students having a generally positive view of the assignment. Typically, collaborative writing in a classroom setting differs from cooperation or peer review in that it is defined by the co-authorship of the participants, meaning the students contribute equally at all stages of the writing process to produce the final project. Collaborative writing requires cooperation, which produces more language-related episodes through assigning tasks, comparing ideas, and revising the text. It typically results in more accurate language usage, and it can even improve oral fluency and confidence in speaking the target language. Students also feel higher levels of motivation to complete the task due to the group interaction. Scholarly research featuring the practice of collaborative writing in educational settings began in the early 1900s with a focus specifically on language acquisition. Researchers found that the language exchanges used by participants to generate the texts were beneficial, and they called these language-related episodes. This is because learners could socialize in their first language or the language they were learning while deliberating ideas, justifying linguistic choices, and negotiating meaning, allowing students to learn from each other and prompting them to analyze choices. While worksheets tend to focus on linguistic structures, such as conjugation, collaborative writing focuses on the lexis. The co-creative nature of this knowledge allows it to be retained. Even if parts of the conversation are in the students' primary language during these group interactions, evidence shows that the exchanges are transactional and brief, with a focus on bettering the target language. The grouping of the students is significant. Larger groups seem to produce better texts and have more language-related episodes. Students in groups of low proficiency will have fewer language-related episodes and focus on meaning, while students with higher proficiency will have more language-related episodes and focus on grammar. It is still beneficial to pair students of high proficiency with students of low proficiency because they will have more language-related episodes. However, it is most effective to pair or group students with similar proficiency levels, because they will engage more with each other and with the collaborative writing task. Teacher-chosen groups and student-chosen groups seem to result in the same collaborative patterns, but teacher-chosen groups will have more language-related episodes and produce a higher quality of final text. In addition, students collaborate the most in face-to-face settings, as opposed to online formats with collaborative tools. Surprisingly, one study shows that even silent group members who do not contribute to the discussion or the task completion still benefit from collaborative writing by observing the exchanges of their peers. Some students may prefer individual writing, since the process is more linear and less time-consuming, with fewer opportunities for distractions. A student's view of collaborative writing may also be shaped by their experiences. If there are more transactional experiences, such as adding or deleting text with no group deliberation, students will likely prefer to work alone.
Likewise, the final text will typically lack synthesis because there was not a collaborative relationship within the group. Generally, higher amounts of group discussion result in a more positive view of the collaborative writing assignment. Students may still choose not to collaborate or contribute to the task due to a lack of confidence in their language skills, and may dislike collaborative writing as a result.
Collaborative writing in the workplace:
A study conducted by Stephen Bremner, an English professor at the City University of Hong Kong, investigated eight business communication textbooks to test the depth in which they provided students with a knowledge of collaborative writing in the workplace and how to execute those processes. The study found that, generally, textbooks highlighted the role of collaborative writing in the workplace. Textbooks listed pros of collaborative writing such as saving time, superior documents owing to each individual's strengths and specialized knowledge, a well-crafted message due to teamwork, balanced abilities, and an interest in accomplishing a common goal. The article claimed that the textbooks examined gave students a basic knowledge of collaboration in the workplace, but they also lacked information showing students the realities of collaborative writing in the workplace, with few activities presented in the textbooks that mirror collaborative activities in the workplace. Much of the activity featuring group work seemed idealistic rather than based in reality, with the writing process occurring only in controlled and orderly environments. Bremner also found that group work in the classroom did not properly simulate the power hierarchies present in the workplace.
Collaborative writing in the workplace:
Tools: Atlas is a wiki-like, git-managed authoring platform from O'Reilly Media that is based on the open-source web-based Git repository manager (version control system) GitLab.
For collaborative code-writing, revision control systems such as Team Foundation Version Control (used in Team Foundation Server) and Git (used in GitHub, Bitbucket, GitLab and CodePlex) are mostly used for parallel writing.
Collaborative real-time editors like Etherpad, Hackpad, Google Docs, Microsoft Office, and Authorea.
Online platforms mainly focused on collaborative fiction that allow other users to continue a story's narrative such as Protagonize and Ficly.
Wikis like Wikipedia and Wikia.
Authorship:
An author acquires copyright if their work meets certain criteria. In the case of works created by one person, typically, the first owner of a copyright in that work is the person who created the work, i.e. the author. But, when more than one person creates the work in collaboration with one another, then a case of joint authorship can be made provided some criteria are met. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JPL Small-Body Database**
JPL Small-Body Database:
The JPL Small-Body Database (SBDB) is an astronomy database about small Solar System bodies. It is maintained by the Jet Propulsion Laboratory (JPL) and NASA and provides data for all known asteroids and several comets, including orbital parameters and diagrams, physical parameters, close approach details, radar astrometry, discovery circumstances, alternate designations and lists of publications related to the small body. The database is updated daily as new observations become available. In April 2021 the JPL Small-Body Database started using the planetary ephemeris DE441 and the small-body perturber file SB441-N16. Most objects, such as asteroids, get a two-body solution (Sun + object) recomputed twice a year. Comets generally have their two-body orbits computed at a time near the perihelion passage (closest approach to the Sun), so that the two-body orbit is reasonably accurate both before and after perihelion. For most asteroids, the epoch used to define an orbit is updated twice a year. Orbital uncertainties in the JPL Small-Body Database are listed at the 1-sigma level. On 27 September 2021 the JPL Solar System Dynamics website underwent a significant upgrade.
JPL Small-Body Database:
233,000 orbits were computed in August 2021, and more than 3.3 million orbits were computed over the preceding 12 months.
Close-approach data:
As of August 2013 (planetary ephemeris DE431) close-approach data is available for the major planets and the 16 most massive asteroids. Close approach data is available by adding &view=OPC to the query string at the end of the body's URL. Close approach data used to be available by adding ;cad=1 or &cad=1 to the query string. The Wayback Machine prefers the &cad=1 option. The JPL Small-Body Database close approach table lists a linearized uncertainty. The time of close approach uncertainty and min/max distance correspond to the 3-sigma level.
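As a small illustration of the query-string mechanics described above, the snippet below appends the relevant suffixes to a lookup URL; the base sbdb.cgi endpoint and the example object (433 Eros) are assumptions for demonstration, while the &view=OPC and &cad=1 suffixes come from the text.

```python
# Hypothetical lookup URL for 433 Eros; only the suffixes below are from the text.
base = "https://ssd.jpl.nasa.gov/sbdb.cgi?sstr=433"

current_form = base + "&view=OPC"   # close-approach view, current scheme
legacy_form = base + "&cad=1"       # older scheme, preferred by the Wayback Machine

print(current_form)
print(legacy_form)
```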
Orbit viewer:
In the past, one could view a 3D visualization of the body's orbit using a Java applet. As of mid-2023, one could see something similar using JPL's Orbit Viewer tool, which was implemented using JavaScript, Three.js and WebGL.
The visualized orbits use unreliable 2-body methods, and hence should not be used for accurately determining the time of perihelion passage or planetary encounter circumstances. For accurate ephemerides use the JPL Horizons On-Line Ephemeris System that handles the n-body problem using numerical integration. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lesser palatine canals**
Lesser palatine canals:
The lesser palatine canals (also accessory palatine canals) are passages in the palatine bone that carry the lesser and middle palatine nerves and vessels.
Structure:
The lesser palatine canals branch from the greater palatine canal and run alongside it, also opening onto the roof of the oral cavity. Their openings are known as the lesser palatine foramina, and they transmit the lesser palatine artery, vein, and nerve, as well as the middle palatine vessels and nerve.
**MOI (file format)**
MOI (file format):
MOI is a computer file format used primarily to store metadata about an associated video file. MOI files accompany MOD or TOD files, whose content they describe. They are mainly used on JVC and Canon camcorders.
**CKMT2**
CKMT2:
Creatine kinase S-type, mitochondrial is an enzyme that in humans is encoded by the CKMT2 gene. Mitochondrial creatine kinase (MtCK) is responsible for the transfer of high-energy phosphate from mitochondria to the cytosolic carrier, creatine. The "energy-rich" gamma-phosphate group of ATP that is generated by oxidative phosphorylation inside mitochondria is trans-phosphorylated to creatine (Cr) to give phospho-creatine (PCr), which is then exported from the mitochondria into the cytosol, where it is made available to cytosolic creatine kinases (CK) for in situ regeneration of the ATP that has been used for cellular work. Cr then returns to the mitochondria, where it stimulates mitochondrial respiration and is again charged up by mitochondrial ATP via MtCK. This process is termed the PCr/Cr shuttle or circuit. MtCK belongs to the creatine kinase (CK) isoenzyme family. It exists as two isoenzymes, sarcomeric MtCK and ubiquitous MtCK, encoded by separate genes. Mitochondrial creatine kinase occurs in two different oligomeric forms, dimers and octamers, in contrast to the exclusively dimeric cytosolic creatine kinase isoenzymes. Sarcomeric mitochondrial creatine kinase has 80% homology with the coding exons of ubiquitous mitochondrial creatine kinase. This gene contains sequences homologous to several motifs that are shared among some nuclear genes encoding mitochondrial proteins and thus may be essential for the coordinated activation of these genes during mitochondrial biogenesis.
**Structured support vector machine**
Structured support vector machine:
The structured support-vector machine is a machine learning algorithm that generalizes the Support-Vector Machine (SVM) classifier. Whereas the SVM classifier supports binary classification, multiclass classification and regression, the structured SVM allows training of a classifier for general structured output labels.
Structured support vector machine:
As an example, a sample instance might be a natural language sentence, and the output label is an annotated parse tree. Training a classifier consists of showing pairs of correct sample and output label pairs. After training, the structured SVM model allows one to predict for new sample instances the corresponding output label; that is, given a natural language sentence, the classifier can produce the most likely parse tree.
Training:
For a set of $n$ training instances $(x_i, y_i) \in \mathcal{X} \times \mathcal{Y}$, $i = 1, \dots, n$, from a sample space $\mathcal{X}$ and label space $\mathcal{Y}$, the structured SVM minimizes the following regularized risk function:
Training:
$$\min_{\mathbf{w}}\; \|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \max_{y \in \mathcal{Y}} \left( 0,\; \Delta(y_i, y) + \langle \mathbf{w}, \Psi(x_i, y) \rangle - \langle \mathbf{w}, \Psi(x_i, y_i) \rangle \right)$$
The function is convex in $\mathbf{w}$ because the maximum of a set of affine functions is convex. The function $\Delta : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$ measures a distance in label space and is an arbitrary function (not necessarily a metric) satisfying $\Delta(y, z) \geq 0$ and $\Delta(y, y) = 0$ for all $y, z \in \mathcal{Y}$. The function $\Psi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^d$ is a feature function, extracting some feature vector from a given sample and label. The design of this function depends very much on the application.
Training:
Because the regularized risk function above is non-differentiable, it is often reformulated in terms of a quadratic program by introducing one slack variable $\xi_i$ for each sample, each representing the value of the maximum. The standard structured SVM primal formulation is given as follows:
$$\min_{\mathbf{w}, \boldsymbol{\xi}}\; \|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i$$
$$\text{s.t.}\quad \langle \mathbf{w}, \Psi(x_i, y_i) \rangle - \langle \mathbf{w}, \Psi(x_i, y) \rangle + \xi_i \geq \Delta(y_i, y), \quad i = 1, \dots, n, \;\; \forall y \in \mathcal{Y}$$
Inference:
At test time, only a sample $x \in \mathcal{X}$ is known, and a prediction function $f : \mathcal{X} \to \mathcal{Y}$ maps it to a predicted label from the label space $\mathcal{Y}$. For structured SVMs, given the vector $\mathbf{w}$ obtained from training, the prediction function is the following:
$$f(x) = \underset{y \in \mathcal{Y}}{\operatorname{argmax}}\; \langle \mathbf{w}, \Psi(x, y) \rangle$$
Therefore, the maximizer over the label space is the predicted label. Solving for this maximizer is the so-called inference problem, similar to making a maximum a-posteriori (MAP) prediction in probabilistic models. Depending on the structure of the function $\Psi$, solving for the maximizer can be a hard problem.
Separation:
The above quadratic program involves a very large, possibly infinite number of linear inequality constraints. In general, the number of inequalities is too large to be optimized over explicitly. Instead the problem is solved by using delayed constraint generation, where only a finite and small subset of the constraints is used. Optimizing over a subset of the constraints enlarges the feasible set and will yield a solution that provides a lower bound on the objective. To test whether the solution $\mathbf{w}$ violates constraints of the complete set of inequalities, a separation problem needs to be solved. As the inequalities decompose over the samples, for each sample $(x_i, y_i)$ the following problem needs to be solved:
Separation:
$$\underset{y \in \mathcal{Y}}{\operatorname{argmax}} \left( \Delta(y_i, y) + \langle \mathbf{w}, \Psi(x_i, y) \rangle - \langle \mathbf{w}, \Psi(x_i, y_i) \rangle - \xi_i \right)$$
The right-hand-side objective to be maximized is composed of the constant $-\langle \mathbf{w}, \Psi(x_i, y_i) \rangle - \xi_i$ and a term dependent on the variables optimized over, namely $\Delta(y_i, y) + \langle \mathbf{w}, \Psi(x_i, y) \rangle$. If the achieved right-hand-side objective is smaller than or equal to zero, no violated constraints for this sample exist. If it is strictly larger than zero, the most violated constraint with respect to this sample has been identified. The problem is enlarged by this constraint and re-solved. The process continues until no violated inequalities can be identified.
Separation:
If the constants are dropped from the above problem, we obtain the following problem to be solved.
Separation:
$$\underset{y \in \mathcal{Y}}{\operatorname{argmax}} \left( \Delta(y_i, y) + \langle \mathbf{w}, \Psi(x_i, y) \rangle \right)$$
This problem looks very similar to the inference problem. The only difference is the addition of the term $\Delta(y_i, y)$. Most often, it is chosen such that it has a natural decomposition in label space. In that case, the influence of $\Delta$ can be encoded into the inference problem, and solving for the most violating constraint is equivalent to solving the inference problem.
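To make the interplay between training, inference, and loss-augmented separation concrete, here is a minimal sketch for a multiclass toy case, where the label space is small enough to enumerate. The names and the feature map (class-blocked copies of $x$) are illustrative assumptions, and the sketch minimizes the regularized risk by stochastic subgradient descent rather than by solving the quadratic program with delayed constraint generation:

```python
import numpy as np

def joint_feature(x, y, n_classes):
    """Psi(x, y): place x in the feature block indexed by class y."""
    psi = np.zeros(n_classes * x.shape[0])
    psi[y * x.shape[0]:(y + 1) * x.shape[0]] = x
    return psi

def delta(y_true, y_other):
    """Delta(y, y'): 0/1 label loss; any function with Delta(y, y) = 0 works."""
    return float(y_true != y_other)

def loss_augmented_argmax(w, x, y_true, n_classes):
    """The separation problem: argmax_y Delta(y_i, y) + <w, Psi(x_i, y)>."""
    scores = [delta(y_true, y) + w @ joint_feature(x, y, n_classes)
              for y in range(n_classes)]
    return int(np.argmax(scores))

def predict(w, x, n_classes):
    """The inference problem: argmax_y <w, Psi(x, y)>."""
    scores = [w @ joint_feature(x, y, n_classes) for y in range(n_classes)]
    return int(np.argmax(scores))

def train(X, Y, n_classes, C=1.0, lr=0.01, epochs=100):
    """Stochastic subgradient descent on
    ||w||^2 + C * sum_i max_y(0, Delta + <w,Psi(x_i,y)> - <w,Psi(x_i,y_i)>)."""
    w = np.zeros(n_classes * X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            y_hat = loss_augmented_argmax(w, x, y, n_classes)
            violation = (delta(y, y_hat)
                         + w @ joint_feature(x, y_hat, n_classes)
                         - w @ joint_feature(x, y, n_classes))
            grad = 2.0 * w / len(X)            # regularizer, spread over samples
            if violation > 0:                   # the most violated label is active
                grad += C * (joint_feature(x, y_hat, n_classes)
                             - joint_feature(x, y, n_classes))
            w -= lr * grad
    return w

# Toy usage: three linearly separable classes in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in ([0, 0], [3, 0], [0, 3])])
Y = np.repeat([0, 1, 2], 20)
w = train(X, Y, n_classes=3)
print(sum(predict(w, x, 3) == y for x, y in zip(X, Y)), "of", len(Y), "correct")
```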
**Glucosamine kinase**
Glucosamine kinase:
In enzymology, a glucosamine kinase (EC 2.7.1.8) is an enzyme that catalyzes the chemical reaction
ATP + D-glucosamine ⇌ ADP + D-glucosamine phosphate
Thus, the two substrates of this enzyme are ATP and D-glucosamine, whereas its two products are ADP and D-glucosamine phosphate.
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:D-glucosamine phosphotransferase. Other names in common use include glucosamine kinase (phosphorylating), ATP:2-amino-2-deoxy-D-glucose-6-phosphotransferase, and aminodeoxyglucose kinase. This enzyme participates in aminosugars metabolism. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Coloratura soprano**
Coloratura soprano:
A coloratura soprano is a type of operatic soprano voice that specializes in music that is distinguished by agile runs, leaps and trills.
Coloratura soprano:
The term coloratura refers to the elaborate ornamentation of a melody, which is a typical component of the music written for this voice. Within the coloratura category, there are roles written specifically for lighter voices known as lyric coloraturas and others for larger voices known as dramatic coloraturas. Categories within a certain vocal range are determined by the size, weight and color of the voice. Coloratura is particularly found in vocal music and especially in operatic singing of the 18th and 19th centuries. The word coloratura (UK: COL-ə-rə-TURE-ə, US: CUL-, Italian: [koloraˈtuːra]) means "coloring" in Italian, and derives from the Latin word colorare ("to color").
Lyric coloratura soprano:
A very agile light voice with a high upper extension, capable of fast vocal coloratura. Lyric coloraturas have a range of approximately middle C (C4) to "high F" (F6). Such a soprano is sometimes referred to as a soprano leggero if her vocal timbre has a slightly warmer quality. The soprano leggero also typically does not go as high as other coloraturas, peaking at a "high E" (E6). Bel canto roles were typically written for this voice, and a wide variety of other composers have also written coloratura parts. Baroque music, early music and baroque opera also have many roles for this voice.
Dramatic coloratura soprano:
A coloratura soprano with great flexibility in high-lying velocity passages, yet with great sustaining power comparable to that of a full spinto or dramatic soprano. Dramatic coloraturas have a range of approximately "low A" (A3) to "high F" (F6). Various dramatic coloratura roles have different vocal demands for the singer – for instance, the voice that can sing Abigail (Nabucco, Verdi) is unlikely to also sing Lucia (Lucia di Lammermoor, Donizetti), but a factor in common is that the voice must be able to convey dramatic intensity as well as flexibility. Roles written specifically for this kind of voice include the more dramatic Mozart and bel canto female roles and early Verdi. This is a rare vocal fach, as thick vocal cords are needed to produce the large, dramatic notes, which usually lessens the flexibility and acrobatic abilities of the voice.
Soprano acuto sfogato:
In rare instances, some coloratura sopranos are able to sing in altissimo above high F (F6). This type of singer is sometimes referred to as a soprano acuto sfogato. Although both lyric and dramatic coloraturas can be acuto sfogato sopranos, the primary attribute of the acuto sfogato soprano is an upper extension above F6. Some pedagogues refer to these extreme high notes as the whistle register. Very few composers have ever written operatic roles for this voice type with actual notes scored above high F, so these singers typically display these extreme high notes through the use of interpolation in some of the operatic roles already cited above or in concert works. Examples of works that include G6 are the concert aria "Popoli di Tessaglia!" by Mozart, Esclarmonde by Massenet, and Postcard from Morocco by Dominick Argento. Thomas Adès composed a high A (A6) for the character of Leticia Meynar in The Exterminating Angel.
Soprano acuto sfogato:
The soprano acuto sfogato is sometimes confused with the soprano sfogato, a singer (often mezzo-soprano) capable, by sheer industry or natural talent, of extending her upper range to encompass some of the coloratura soprano tessitura, though not the highest range above high F. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**/dev/full**
/dev/full:
In Linux, FreeBSD, and NetBSD, /dev/full, or the always-full device, is a special file that always returns the error code ENOSPC (meaning "No space left on device") on writing, and provides an infinite number of zero bytes to any process that reads from it (similar to /dev/zero). This device is usually used when testing the behaviour of a program when it encounters a "disk full" error.
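A minimal demonstration of both behaviours (the snippet assumes it is run on a system that provides /dev/full):

```python
import errno

# Writing must fail with ENOSPC ("No space left on device").
try:
    with open("/dev/full", "wb", buffering=0) as f:
        f.write(b"some data")
except OSError as e:
    assert e.errno == errno.ENOSPC
    print("write failed as expected:", e)

# Reading yields zero bytes, just like /dev/zero.
with open("/dev/full", "rb") as f:
    print(f.read(8))   # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```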
History:
Support for the always-full device in Linux is documented as early as 2007. Native support was added to FreeBSD in the 11.0 release in 2016, which had previously supported it through an optional module called lindev. The full device appeared in NetBSD 8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**The Lesbian Body**
The Lesbian Body:
The Lesbian Body (French: Le Corps Lesbien) is a 1973 novel by Monique Wittig. It was translated into English in 1975.
Plot:
According to Wittig's The New York Times obituary, "lesbian lovers literally invade each other's bodies as an act of love."
Literary significance and criticism:
Wittig said "When I came upon the title, Corps Lesbien, the association of these two words made me laugh. It was absurdly sarcastic." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Motorola Charm**
Motorola Charm:
The Motorola Charm is a smartphone manufactured by Motorola. It was released exclusively through U.S. carrier T-Mobile and Canadian carrier Telus. The Motorola Charm is the second Motorola Android phone to feature the updated Motoblur UI for Android 2.1.
The Charm's key features are its front-facing QWERTY keyboard, 2.8-inch 320 x 240 touchscreen, 3-megapixel camera with digital zoom, touchpad on rear of phone, and Android HTML WebKit/Flash Lite web browser. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Competitive programming**
Competitive programming:
Competitive programming (also known as sports programming) is a mind sport usually held over the Internet or a local network, involving participants trying to program according to provided specifications. Contestants are referred to as sport programmers. Competitive programming is recognized and supported by several multinational software and Internet companies, such as Google and Facebook. A programming competition generally involves the host presenting a set of logical or mathematical problems, also known as puzzles or challenges, to the contestants (who can vary in number from tens or even hundreds to several thousands). Contestants are required to write computer programs capable of solving these problems. Judging is based mostly upon number of problems solved and time spent for writing successful solutions, but may also include other factors (quality of output produced, execution time, memory usage, program size, etc.).
History:
One of the oldest contests known is the International Collegiate Programming Contest (ICPC) which originated in the 1970s, and has grown to include 88 countries in its 2011 edition.
History:
From 1990 to 1994, Owen Astrachan, Vivek Khera and David Kotz ran one of the first distributed, internet-based programming contests, inspired by the ICPC. Interest in competitive programming has grown extensively since 2000 to tens of thousands of participants (see Notable competitions), and is strongly connected to the growth of the Internet, which facilitates holding international contests online, eliminating geographical problems.
Overview:
The aim of competitive programming is to write source code of computer programs which are able to solve given problems. A vast majority of problems appearing in programming contests are mathematical or logical in nature. Typical such tasks belong to one of the following categories: combinatorics, number theory, graph theory, algorithmic game theory, computational geometry, string analysis and data structures. Problems related to constraint programming and artificial intelligence are also popular in certain competitions.
Overview:
Irrespective of the problem category, the process of solving a problem can be divided into two broad steps: constructing an efficient algorithm, and implementing the algorithm in a suitable programming language (the set of programming languages allowed varies from contest to contest). These are the two most commonly tested skills in programming competitions.
Overview:
In most contests, the judging is done automatically by host machines, commonly known as judges. Every solution submitted by a contestant is run on the judge against a set of (usually secret) test cases. Normally, contest problems have an all-or-none marking system, meaning that a solution is "Accepted" only if it produces satisfactory results on all test cases run by the judge, and rejected otherwise. However, some contest problems may allow for partial scoring, depending on the number of test cases passed, the quality of the results, or some other specified criteria. Some other contests only require that the contestant submit the output corresponding to given input data, in which case the judge only has to analyze the submitted output data.
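A minimal sketch of such an automatic judge, in the all-or-none style described above; the command, file name, and test data are hypothetical, and real judges add sandboxing, memory limits, and finer-grained verdicts:

```python
import subprocess

def judge(solution_cmd, test_cases, time_limit=2.0):
    """Run a submission on each (input, expected output) pair;
    accept only if every test case matches (all-or-none marking)."""
    for stdin_data, expected in test_cases:
        try:
            run = subprocess.run(solution_cmd, input=stdin_data, text=True,
                                 capture_output=True, timeout=time_limit)
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if run.returncode != 0:
            return "Runtime Error"
        if run.stdout.strip() != expected.strip():
            return "Wrong Answer"
    return "Accepted"

# Hypothetical submission that should double each input number.
print(judge(["python3", "solution.py"], [("2\n", "4\n"), ("21\n", "42\n")]))
```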
Overview:
Online judges are online environments in which testing takes place. Online judges have rank lists showing users with the biggest number of accepted solutions and/or shortest execution time for a particular problem.
Notable competitions:
Algorithm competitions: In most of the above competitions, contests are usually organized in several rounds. They usually start with online rounds, which conclude in an onsite final round. The top performers at IOI and ICPC receive gold, silver and bronze medals. In the other contests, cash prizes are awarded to the top finishers. The competitions also attract the interest of recruiters from multiple software and Internet companies, who often reach out to competitors with potential job offers.
Notable competitions:
Artificial intelligence and machine learning Kaggle – data science and machine learning competitions.
CodeCup – board game AI competition held annually since 2003. Game rules get published in September and the final tournament is held in January.
Google AI Challenge – bi-annual competitions for students that ran 2009 to 2011.
Halite – An AI programming challenge sponsored by Two Sigma, Cornell Tech, and Google.
Russian AI Cup – open artificial intelligence programming contest.
CodinGame – hosts seasonal bot programming competitions.
Contests focusing on open source technologies
Online platforms:
The programming community around the world has created and maintained several internet resources dedicated to competitive programming. They offer standalone contests with or without minor prizes. The past archives of problems are also a popular resource for training in competitive programming. Several organizations host programming competitions on a regular basis.
Benefits and criticism:
Participation in programming contests may increase student enthusiasm for computer science studies. The skills acquired in ICPC-like programming contests also improve career prospects, as they help to pass the "technical interviews", which often require candidates to solve complex programming and algorithmic problems on the spot. There has also been criticism of competitive programming, particularly from professional software developers. One critical point is that many fast-paced programming contests teach competitors bad programming habits and code style (like unnecessary use of macros, lack of OOP abstraction and comments, use of short variable names, etc.). Also, by offering only small algorithmic puzzles with relatively short solutions, programming contests like ICPC and IOI don't necessarily teach good software engineering skills and practices, as real software projects typically have many thousands of lines of code and are developed by large teams over long periods of time. Peter Norvig stated that, based on the available data, being a winner of programming contests correlated negatively with a programmer's performance at their job at Google (even though contest winners had higher chances of getting hired). Norvig later stated that this correlation was observed on a small data set, and that it could not be confirmed after examining a larger data set. Yet another sentiment is that rather than "wasting" their time on excessive competing by solving problems with known solutions, high-profile programmers should rather invest their time in solving real-world problems.
Literature:
Halim, S., Halim, F. (2013). Competitive Programming 3: The New Lower Bound of Programming Contests. Lulu.
Laaksonen, A. (2017). Guide to Competitive Programming (Undergraduate Topics in Computer Science). Cham: Springer International Publishing.
Kostka, B. (2021). Sports programming in practice. University of Wrocław. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Flour extraction**
Flour extraction:
Flour extraction is the common process of refining whole grain flour, first milled from grain or grist, by running it through sifting devices, often called flour dressers.
Definition:
For centuries, much of the flour milled for human consumption has been run through some kind of "bolting", sifting or "extraction" process. Flour is extracted from whole grains for one of two reasons. The first is to decrease the tendency toward rancidity: milling systems with a lower extraction percentage discard most of the rancidity-prone minerals and oils associated with the bran and germ elements of the wheat kernel. Baking functionality is the other reason, with increased loaf volume accomplished by simply removing just the larger flour particles. Like lower-extraction white flour, higher-extraction flour still creates a smoother dough more inclined to hold the gas created during fermentation. However, higher-extraction flour also retains the flavors and nutrition associated with the smaller bran and germ particles that are extracted along with the endosperm.
History:
"White flour", extracted from whole grains by roller mills, which eliminate the rancidity-prone bran and germ elements of the wheat kernel, was introduced in the late 19th century. By first hydrating the outer bran and germ elements of the wheat kernel to keep them intact, this new system then employed steel rollers instead of circulating stones to repetitively fracture the remaining starchy endosperm into fine particles. The extracted endosperm flour came to be known as "white flour", as this element of the wheat kernel is white. This system ingeniously accomplished the extraction of most of the starchy endosperm while separating out virtually all of the bran and germ elements, extracting about 72% of the whole grain kernel. Roller milling eventually came (and continues) to dominate the world's flour production. Well over 90% of U.S. flour production in 2017 was roller-milled white enriched flour.
History:
Once roller milling made white flour affordable for almost everyone, public health issues arose. As scientists learned more about the crucial health contributions of the bran and the germ, artificial enrichment of white flour was introduced that restores a small part of the nutrition lost by eliminating all the bran and germ elements.
Benefits:
There are generally two benefits from extraction: The highest extraction typically retains over 85% of the original volume by focusing on just improving functionality (increased loaf volume) without a significant sensory loss. A fine Whole Grain Flour is used to just remove the larger flour particles and is commonly referred to by Artisan bakers as High Extraction flour. It is primarily used to produce a variety of non-whole grain bread products.
Benefits:
More aggressive sifting focuses on both functionality and reduction of the potential for rancidity by eliminating 100% of the bran and germ in the kernel. This lower extraction (typically 72% or less) is most commonly Roller Milled White Enriched flour yielding a significant selection of flour types used for a variety of baking applications, the most significant of which is white bread.
Higher extraction flour:
The general availability of refrigeration, and even of flour itself, has diminished the significance of the rancidity/shelf-life/keeping-quality issue of whole grain flour. Higher extraction rates focus solely on the elimination of the larger flour particles to increase loaf volume, while retaining the majority of the nutritional bran and germ elements along with the endosperm. This is accomplished by direct extraction from the fine whole grain flour output of impact or attrition mills. Millers have been able to match the finer particle-size distribution of low-extraction roller-milled white flour (72% extraction), which eliminated all the bran and germ elements, with a higher, +88% extraction that retains most of them.
**Rambutan (cryptography)**
Rambutan (cryptography):
Rambutan is a family of encryption technologies designed by the Communications-Electronics Security Group (CESG), the technical division of the United Kingdom government's secret communications agency, GCHQ.
Rambutan (cryptography):
It includes a range of encryption products designed by CESG for use in handling confidential (not secret) communications between parts of the British government, government agencies, and related bodies such as NHS Trusts. Unlike CESG's Red Pike system, Rambutan is not available as software: it is distributed only as a self-contained electronic device (an ASIC) which implements the entire cryptosystem and handles the related key distribution and storage tasks. Rambutan is not sold outside the government sector. Technical details of the Rambutan algorithm are secret. Security researcher Bruce Schneier describes it as a stream cipher (linear-feedback shift register) based cryptosystem with 5 shift registers each of around 80 bits, and a key size of 112 bits. RAMBUTAN-I communications chips (which implement a secure X.25 based communications system) are made by approved contractors Racal and Baltimore Technologies/Zergo Ltd. CESG later specified RAMBUTAN-II, an enhanced system with backward compatibility with existing RAMBUTAN-I infrastructure. The RAMBUTAN-II chip is a 64-pin quad ceramic pack chip, which implements the electronic codebook, cipher block chaining, and output feedback operating modes (each in 64 bits) and the cipher feedback mode in 1 or 8 bits. Schneier suggests that these modes may indicate Rambutan is a block cipher rather than a stream cipher. The three 64-bit modes operate at 88 megabits/second. Rambutan operates in three modes: ECB, CBC, and 8-bit CFB.
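Since the algorithm itself is secret, no faithful implementation can be given; purely to illustrate the linear-feedback shift register building block that Schneier's description refers to, here is a generic Fibonacci LFSR keystream generator. The register width, tap positions, and seed are arbitrary toy choices with no relation to Rambutan:

```python
def lfsr_keystream(state, taps, nbits):
    """Generate nbits of keystream bits from a Fibonacci LFSR.

    state: integer register contents; taps: bit positions XORed for feedback
    (0 = least significant bit). The register width is max(taps) + 1.
    """
    width = max(taps) + 1
    out = []
    for _ in range(nbits):
        out.append(state & 1)                 # emit the low bit
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1      # XOR of the tapped bits
        state = (state >> 1) | (feedback << (width - 1))
    return out

# Toy 8-bit register with a maximal-length tap set (x^8 + x^6 + x^5 + x^4 + 1).
print(lfsr_keystream(state=0b10101100, taps=[7, 5, 4, 3], nbits=16))
```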
**Emergency ascent**
Emergency ascent:
An emergency ascent is an ascent to the surface by a diver in an emergency. More specifically, it refers to any of several procedures for reaching the surface in the event of an out-of-air emergency, generally while scuba diving.
Emergency ascent:
Emergency ascents may be broadly categorised as independent ascents, where the diver is alone and manages the ascent by him/herself, and dependent ascents, where the diver is assisted by another diver, who generally provides breathing gas, but may also provide transportation or other assistance. The extreme case of a dependent ascent is underwater rescue or recovery of an unconscious or unresponsive diver, but this is more usually referred to as diver rescue, and emergency ascent is usually used for cases where the distressed diver is at least partially able to contribute to the management of the ascent.
Emergency ascent:
An emergency ascent usually implies that the diver initiated the ascent voluntarily, and made the choice of the procedure. Ascents that are involuntary or get out of control unintentionally are more accurately classed as accidents.
An emergency ascent may be made for any one of several reasons, including failure or imminent failure of the breathing gas supply.
Reasons for making an emergency ascent:
An emergency ascent implies that the dive plan has been abandoned due to circumstances beyond the control of the diver, though they may have been caused by the diver, as is often the case in out-of-gas emergencies in scuba diving. Out-of-gas emergencies are generally the most urgent contingencies in diving, as the available time to deal with the emergency can be measured in minutes or seconds, while most other non-traumatic emergencies allow more time.
Reasons for making an emergency ascent:
Other reasons for emergency ascent may include: Failure of a rebreather requiring bailout to open circuit – This is not always considered an emergency ascent, though it is usually urgent, and is considered a sufficient reason to abort the dive.
Compromise of diver buoyancy control due to loss of ballast weight.
Tethered-ascent – where the diver has unintentionally lost full control of buoyancy due to a loss of ballast weight, and controls ascent rate by use of a ratchet dive reel with the end of the reel line secured to the bottom.
Lost buoyancy ascent – where the diver loses the ability to establish neutral or positive buoyancy without resorting to ditching weights. This can be due to a major buoyancy compensator failure or a major dry-suit flood.
Injury or illness.
Scuba equipment failure leading to non-catastrophic but rapid loss of breathing gas.
Sudden loss of thermal protection due to dry suit leak or loss of heating water supply in a hot-water suit.
Reasons for making an emergency ascent:
Inability to read instruments due to damage or loss of mask or severe damage to helmet faceplate. It may not be possible to accurately monitor depth, rate of ascent or decompression stops. This can be mitigated if a dive buddy can monitor and control the ascent, or if the diver's computer has audible alarms for fast ascent and exceeding a ceiling. Ascent on a tangible reference such as a DSMB line, shotline or anchor line is also helpful.
Reasons for making an emergency ascent:
Flooding of helmet or full-face mask that cannot be rectified.
Entanglement requiring abandonment of breathing apparatus.
Entrapment of bell or failure of bell recovery system (SSDE).
Entrapment of umbilical, or damage to umbilical resulting in main gas supply failure (SSDE).
Terminology for emergency ascents:
Independent action (no assistance required from another diver): Bailout ascent is where the diver makes use of a bailout set carried by him/herself to provide an emergency breathing gas supply for this kind of emergency.
Blow and go is a free ascent where the diver exhales at the bottom before starting the ascent. The breath may be held during part of the ascent, as the lungs are emptied before starting. This procedure is considered unnecessarily hazardous by many recreational training agencies.
Buoyant ascent is an ascent where the diver is propelled towards the surface by positive buoyancy.
Controlled emergency swimming ascent (CESA) is an emergency swimming ascent which remains under control and which is performed at a safe ascent rate, with continuous exhalation at a rate unlikely to cause injury to the diver by lung overexpansion.
Emergency swimming ascent (ESA) is a free ascent where the diver propels him/herself to the surface by swimming at either negative or approximately neutral buoyancy.
Exhaling ascent is an ascent where the diver continuously exhales at a controlled rate during the ascent. This may apply to an emergency swimming ascent/free ascent or a controlled emergency swimming ascent, and distinguishes it from a blow-and-go procedure.
Free ascent is the procedure used in US Navy submarine escape training. However the term is also used for other emergency diver ascent procedures where breathing gas is not available to the diver during the ascent.
Terminology for emergency ascents:
Open circuit bailout is a change from breathing off the rebreather loop to open circuit, either by physically changing from the rebreather dive/surface valve to an open circuit bailout demand valve, or by switching the rebreather bailout valve from closed to open circuit. This action is taken both when there is a recoverable problem with the rebreather loop, in which case once the problem has been corrected, a reversion to closed circuit is usual, or when the loop has failed irrecoverably, in which case an ascent is made on open circuit, which is generally regarded as an emergency ascent.
Terminology for emergency ascents:
Reserve air ascent is an ascent using a bailout cylinder or the gas in the main cylinder after actuating a reserve valve to release the gas trapped by the reserve valve mechanism. A reserve air ascent is not traditionally considered an emergency ascent, as it was the standard procedure before the use of submersible pressure gauges became widespread.
Dependent action (assistance provided by another diver): Buddy breathing ascent is where the diver is provided with breathing gas during the ascent from the same demand valve (second stage regulator) as the donor, and they breathe alternately.
Terminology for emergency ascents:
Octopus assisted ascent, sometimes just assisted ascent is where the diver is provided with breathing gas during the ascent by another diver via a demand valve other than the one in use by the donor during the ascent. This may be supplied from the same or a different cylinder, and from the same or a separate 1st stage regulator. The divers' breathing is not constrained by each other, and they may breathe simultaneously.
Training policies of various certification agencies:
"Few issues of diver training have been more controversial than the teaching of emergency ascent procedures. The controversy centers on techniques, psychological and physiological considerations, concern about today's legal climate, and finally the moral issue: is it wise and ethical to train divers in emergency ascent techniques, even though this training may itself be hazardous?" (Ronald C. Samson & James W. Miller, 1977)
Emergency ascent training policy differs considerably among the certification agencies, and has been the subject of some controversy regarding risk-benefit.
Training policies of various certification agencies:
NSTC agreement In 1977 a formal policy regarding training of emergency ascent procedures was adopted by five major American recreational diver certification agencies: NASDS, NAUI, PADI, SSI and YMCA. This policy is a general agreement that emergency ascent training is worth the risk on ethical grounds, and recommends those procedures which the agencies consider most appropriate for teaching recreational divers. It does not prescribe training procedures or standards.
Training policies of various certification agencies:
This National Scuba Training Committee Ascent Training Agreement recognises that there are a number of options available to the scuba diver in the event of a sudden apparent termination of breathing gas supply at depth, and that the selection of an acceptable response is dependent on several variables, including: depth, visibility, distance from other divers, the nature of the underwater activity, available breath-hold time, training and current competence of the involved divers, stress levels of the divers, obstructions to a direct access to the surface, water movement, equipment, buoyancy, familiarity between divers of procedures and equipment, apparent reasons for air loss and decompression obligations.
Training policies of various certification agencies:
Recommendations for training: The agreement requires scuba instructors to make students aware of the variables and how they affect the choice of an appropriate response.
Training should allow divers trained by different instructors to make similar appropriate decisions under the same circumstances, and should provide divers with safe and effective emergency procedures for out-of-air situations when not under the supervision of an instructor.
Divers should be taught to agree on emergency procedures before the dive when intending to dive together. Recommendations for choice of procedure: The most desirable option in the dependent category is given as the octopus assisted ascent, where the out-of-air diver is provided breathing gas by a donor via a secondary (octopus) second stage.
Buddy breathing by two divers on a single second stage is specified as the least desirable of the dependent options.
The recommended independent option is the emergency swimming ascent, where the diver swims to the surface at roughly neutral buoyancy, while exhaling continuously.
Training policies of various certification agencies:
The final option is a buoyant ascent, where buoyancy is gained by inflation of the buoyancy compensator (not always possible in an out-of-air emergency), and dropping of weights. This is recommended as a last resort where the diver is unsure of making it to the surface by swimming, as it will ensure that an unconscious diver will rise to the surface rather than sink. No other procedures are recommended in this agreement, though the use of a bailout cylinder may be considered effectively equivalent to either octopus assisted ascent, when gas is supplied by a donor, or not actually running out of gas if it is the diver's own bailout set.
Training policies of various certification agencies:
SSAC The Scottish Sub-Aqua Club holds that training is primarily to deal with potential emergencies and that it should be practical rather than purely theoretical. This implies that it is better to have some practical experience of ability to cope with a simulated emergency situation as this gives greater insight and confidence, as well as proven ability, provided that the risk in training is appreciably smaller than the risk in not being trained.
Training policies of various certification agencies:
The SSAC trains open water free ascent from a maximum depth of 6–7 m, initially using a shot line to control ascent rate, and considers the risk small and the benefit significant in view of their statistics which showed an incidence of roughly 16 free ascents per 10,000 dives.
In 1978 the SSAC's recommended responses to an air supply failure were, in order of preference: making use of a companion's octopus rig; breathing from an ABLJ; a shared ascent; and, as a last resort, a free ascent.
CMAS The only reference to emergency ascent training in the CMAS Diver Training Program (CMAS TC Version 9/2002) is in the 1-star course where Controlled buoyancy lift of victim to surface is specified under practical training of rescue skills.
Commercial and scientific diving Use of a bailout cylinder is the primary source of emergency breathing gas recommended by several codes of practice for scientific and commercial divers.
Choice of procedure:
The scuba diver perceives an out of air emergency.
An option is chosen: If a bailout cylinder is carried, the diver switches to personal bailout gas and makes a normal ascent.
If the diver is not carrying a bailout cylinder, and another diver is in the immediate vicinity, the diver may request gas from the other diver.
If the other diver has the gas available and is both willing and competent to provide it, the donor provides emergency gas and the two divers make an assisted emergency ascent while sharing gas using a single demand valve or octopus demand valve, or supplying the receiver from the donor's bailout set.
If the other diver does not help, the distressed diver must make an unassisted emergency ascent.
If there is no other diver in the immediate vicinity, the diver must make an unassisted emergency ascent.
Choice of procedure:
If the diver judges the risk of an unassisted emergency ascent to be sufficiently low, or relatively low compared to the other available options, he/she may choose to do an unassisted emergency ascent although other options may technically exist. When there is no physical or physiological constraint (such as excessive depth, a physical overhead or a decompression obligation) preventing a direct ascent to the surface, an unassisted emergency ascent may be the lowest risk option, as it eliminates the unknowns associated with finding and requesting aid from another diver. These unknowns may be minimised by training, practice, prior agreement, and adherence to suitable protocols regarding equipment, planning, dive procedures and communication.
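The option sequence above amounts to a short decision tree. The sketch below is a minimal illustration of that logic in Python; the predicate names (`has_bailout`, `buddy_nearby`, `buddy_can_donate`) are hypothetical stand-ins for the diver's on-the-spot judgement, and a real decision also weighs the variables listed earlier (depth, visibility, decompression obligations, and so on).

```python
def choose_emergency_ascent(has_bailout: bool,
                            buddy_nearby: bool,
                            buddy_can_donate: bool) -> str:
    """Illustrative decision tree for an out-of-air emergency.

    Mirrors the option sequence described above; a sketch,
    not a training standard.
    """
    if has_bailout:
        # Personal bailout gas: switch over and make a normal ascent.
        return "switch to bailout gas and make a normal ascent"
    if buddy_nearby and buddy_can_donate:
        # Donor supplies gas via a shared or octopus demand valve,
        # or from the donor's bailout set.
        return "assisted emergency ascent, sharing gas"
    # No gas available from any source: ascend unassisted.
    return "unassisted emergency ascent"


print(choose_emergency_ascent(has_bailout=False,
                              buddy_nearby=True,
                              buddy_can_donate=True))
# -> assisted emergency ascent, sharing gas
```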
Scuba procedures:
Ascent while breathing from the buoyancy compensator An alternative emergency breathing air source may be available via the buoyancy compensator.
Scuba procedures:
There are two possibilities for this: If the buoyancy compensator has an inflation gas supply from an independent, dedicated cylinder, this gas can be breathed by the diver by using the inflation valves and the oral inflation mouthpiece. BC inflation cylinders are neither common, nor usually very large, so the amount of air will be small and generally insufficient for staged decompression, but a few breaths on the way up can make a big difference to the stress level of the diver, and may prevent loss of consciousness.
Scuba procedures:
If the buoyancy compensator is supplied from the breathing gas cylinder, the volume available will be extremely limited, but it will expand during ascent, and instead of dumping it to reduce excess buoyancy, it may be breathed by the diver. Anyone who considers this as an option should ensure that the interior of the BC is decontaminated before use, as it is an environment in which pathogens may breed.
Scuba procedures:
Buoyant ascent Ascent where the diver is propelled towards the surface by positive buoyancy. Generally recommended as a last resort, though a sufficiently skilled diver could control ascent rate by precise dumping from the BC and use this as a low energy alternative to a swimming ascent. In this case weights should not be ditched during the ascent.
Scuba procedures:
Positive buoyancy may be established by inflation of the BC or dry suit, or by ditching weights. Buoyancy from added air can be controlled during ascent by dumping, but the effect of ditched weights is not reversible, and usually increases as the surface is approached, particularly if a thick wetsuit is worn. If weight can be ditched partially, this may be a better option, unless the diver feels that he is about to lose consciousness, in which case a substantial increase in buoyancy may be better.
Scuba procedures:
A method of buoyancy control which will automatically jettison weights if the diver loses consciousness during the ascent is to take them off and hold them in a hand while surfacing. If the diver loses consciousness, the weights will drop and positive buoyancy will take the diver the rest of the way to the surface.
Scuba procedures:
Controlled emergency swimming ascent (CESA) Controlled emergency swimming ascent is a technique used by scuba divers as an emergency procedure when a diver has run out of breathing gas in shallow water and must return to the surface. During the ascent, the diver propels him/herself towards the surface at a safe ascent rate by means of swimming, usually finning, with continuous exhalation at a rate unlikely to cause injury to the diver by lung overexpansion, and remains under control.
Scuba procedures:
The technique involves simply ascending at a controlled pace, typically about 18 metres (60 feet) per minute, while exhaling slowly. As the diver ascends, the air in the lungs expands as surrounding water pressure decreases. Exhaling allows excess volume to escape from the lungs, and by exhaling at a suitable rate the diver can continue exhaling throughout the ascent and still have air in his or her lungs at the surface. If the diver fails to exhale during the ascent, lung over-expansion injury is likely to occur. If exhalation is limited to relaxing and allowing the expanding gas to escape without effort, there should not be a feeling of running out of breath, as the air inhaled at depth expands during the ascent and the lung volume should remain nearly constant.
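The volume the diver must vent follows directly from Boyle's law. A worked example, assuming an isothermal ascent in seawater where absolute pressure is roughly 1 bar at the surface plus 1 bar per 10 m of depth:

```latex
% Boyle's law at constant temperature: P_1 V_1 = P_2 V_2.
% A lungful of V_1 = 4 L taken at 10 m depth (P_1 \approx 2 bar
% absolute) would occupy, at the surface (P_2 = 1 bar):
V_2 = V_1 \frac{P_1}{P_2}
    = 4\,\mathrm{L} \times \frac{2\,\mathrm{bar}}{1\,\mathrm{bar}}
    = 8\,\mathrm{L}
% so roughly half the gas must be exhaled during the ascent
% to keep lung volume constant.
```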
Scuba procedures:
This procedure is recommended for ascents where there is no decompression obligation, a free surface with little risk of entanglement, and the diver has sufficient breath hold capacity to easily reach the surface conscious.
Advantages of this method, when applicable, are that no outside assistance or special equipment is required.
Scuba procedures:
Disadvantages are that it requires the diver to reach the surface in a limited time, which does not allow for staged decompression, possible delays due to entanglement or snags, or long distances to reach the surface. It also requires the diver to produce propulsive effort, which reduces potential endurance on the single breath or limited gas available. Use of the continuous exhalation procedure from moderately (neutrally or relaxed) inflated lungs combines the advantages of lower risk of lung injury compared to either full or empty lungs with improved endurance due to more available oxygen. Keeping the DV in the mouth and attempting to breathe normally or slowly from it may provide additional breaths as the ambient pressure reduces, and helps ensure that the airways remain open.
Scuba procedures:
If the diver is neutrally buoyant at the time that the ascent is initiated, the amount of energy required to reach the surface will be minimised, and frequent controlled venting of the buoyancy compensator can keep the ascent rate under fine control.
Scuba procedures:
While in a practical sense there is little difference between a CESA and a "free ascent" (aka Emergency Swimming Ascent or ESA), the technical difference between the two is that in a CESA the regulator second stage is retained in the mouth and the diver exhales through it (in case gas becomes available due to the drop in ambient pressure) while in free ascent, the regulator is not retained or there is no regulator available, and the diver exhales directly into the water.
Scuba procedures:
Buddy breathing ascent Ascent during which the diver is provided with breathing gas from the same demand valve (second stage regulator) as the donor, and they breathe alternately. The out-of-air diver must attract the attention of a nearby diver and request to share air. If the chosen donor has sufficient gas, and is competent to share by this method, an emergency ascent may be accomplished safely. Accurate buoyancy control is still required, and the stress of controlling the ascent rate and maintaining the breathing procedure can be more than some divers can handle. There have been occurrences of uncontrolled ascent and panic, in some cases with fatal consequences to both divers. This procedure is best suited to divers who are well acquainted with each other, well practiced in the procedure, and highly competent in buoyancy control and ascent rate control. In most circumstances analysis of the risk would indicate that the divers should have an alternative breathing gas source in preference to relying on buddy breathing. Failure to provide alternative breathing gas without good reason would probably be considered negligent in professional diving.
Scuba procedures:
Assisted ascent Also known as octopus assisted ascent, assisted ascent is an emergency ascent during which the diver is provided with breathing gas by another diver via a demand valve other than the one in use by the donor during the ascent. This may be supplied from the same or a different cylinder, and from the same or a separate 1st stage regulator. The divers' breathing patterns are not constrained by each other, and they may breathe simultaneously. Task loading is reduced in comparison with buddy breathing, and the divers can concentrate on controlling the ascent.
Scuba procedures:
Lifeline assisted ascent An ascent where the diver is pulled to the surface by the line tender, either as a response to an emergency signal from the diver, or a failure to respond to signals from the surface. A diver may also be assisted in the ascent by the line tender in a normal ascent, particularly divers in standard dress, where it was often the normal operating procedure.
Scuba procedures:
Controlled buoyant lift The controlled buoyant lift is an underwater diver rescue technique used by scuba divers to safely raise an incapacitated diver to the surface from depth. It is the primary technique for rescuing an unconscious diver from the bottom. It can also be used where the distressed diver has lost or damaged his or her diving mask and cannot safely ascend without help, though in this case the assisted diver would normally be able to control their own buoyancy. The standard PADI-trained technique is for the rescuer to approach the face-down unconscious diver (victim) from above and kneel with one knee either side of his or her diving cylinder. Then, with the victim's diving regulator held in place, the tank is gripped firmly between the knees and the rescuer's buoyancy compensator is used to control a slow ascent to the surface. This method may not work with sidemount or twin cylinder sets, and puts both rescuer and victim at increased risk if the rescuer loses grip, as the victim will sink and the rescuer may make an excessively fast uncontrolled ascent.
Scuba procedures:
In the technique taught by BSAC and some other agencies, the rescuer faces the casualty and uses the casualty's buoyancy compensator to provide buoyancy for both divers as the rescuer makes a controlled ascent. If the casualty is not breathing, the ascent will be urgent. If the two divers separate during the ascent, the use of the casualty's buoyancy is intended as a failsafe causing the casualty to continue to the surface where there is air and other rescuers can help. The rescuer will be negative at this point, but this is generally easily compensated by finning and corrected by inflation of the rescuer's BC.
Scuba procedures:
Tethered ascent Ascent controlled by a line attached to the diver and to a fixed point at the bottom, with the line paid out by the diver to control depth and rate of ascent when the diver has inadvertently lost full control of buoyancy due to loss of ballast weight, so cannot attain neutral buoyancy at some point during the ascent, and needs to do decompression. CMAS require this skill for their Self-Rescue Diver certification, using a ratchet reel to control the line, though other methods may be feasible. The diver must ensure that gas can be released from the buoyancy compensator and dry suit, if applicable, throughout the ascent, to avoid aggravating the problem by trapped gas expansion. This basically requires the diver to ascend with the feet down and dump valves up, an orientation which can be achieved by hooking a leg around the line. Clipping the reel to the harness should prevent accidentally losing the reel during the ascent. Depending on how the line is attached at the bottom, it may be necessary to cut loose and abandon the line after surfacing.
Surface supplied procedures:
Ascent on bailout gas The diver opens the bailout valve on the helmet, bandmask or harness mounted bailout block. This opens the supply of breathing gas from the bailout cylinder carried by the diver to the demand valve of the breathing apparatus. The bailout gas volume carried by the diver is usually required to be sufficient to return to a place of safety where more gas is available, such as the surface, diving stage or wet or dry bell.
Surface supplied procedures:
Ascent on pneumo air Another option for the surface supplied diver is to breathe air supplied through the pneumofathometer hose of the umbilical. The diver inserts the hose into the air space of the helmet or full face mask, and the panel operator opens the supply valve sufficiently to provide enough air to breathe on free flow. Pneumo air can be supplied to another diver by a rescuer in the surface supply equivalent of octopus air sharing. This procedure would save the bailout gas, which would then be available if the situation deteriorates further. Pneumo breathing air supply is not applicable to environmentally sealed suits for contaminated environments.
Surface supplied procedures:
Bell or stage abandonment In the event that a wet bell or stage cannot be recovered from a dive on schedule, it may be necessary for the divers to abandon it and make an autonomous ascent. This may be complicated by decompression obligations or compromised breathing gas supply, and may involve the assistance of a surface standby diver. The procedure depends on whether the divers' breathing gas is supplied directly from the surface (type 1 wet bell) or is supplied from a gas panel in the bell, via the bell umbilical (type 2 wet bell).
Surface supplied procedures:
To abandon a type 1 wet bell or stage, the divers simply exit the bell on the side that the umbilicals enter, ensuring that they are not looped around anything. This is reliably done by having the surface tender take up slack while the diver returns to the bell and follows the umbilical out the other side, after which the tender can simply raise the diver as if there were no bell.
Surface supplied procedures:
On a type 2 bell, the divers' umbilicals are connected to the gas panel in the bell, and the procedure used should minimise the risk of the umbilical snagging during the ascent and forcing the diver to descend again to free it. If the diver excursion umbilical is not long enough to allow the diver to reach the surface, the standby diver will have to disconnect the bell diver's umbilical, and the rest of the ascent may be done on bailout, pneumo supply from the standby diver, or the standby diver can connect a replacement umbilical.
Saturation diving:
The only viable form of emergency ascent by a saturation diver is inside a closed and pressurised bell. This can be in the form of an emergency recovery of the original bell, or by through water transfer to another bell at depth. A form of unassisted emergency ascent for a bell with functioning lock and external ballast, is to release the ballast from inside the sealed bell, allowing inherent buoyancy to lift the bell to the surface.
Hazards:
Lung overpressure accidents The most direct and well publicised hazard is lung overpressure due to either a failure on the part of the diver to allow the expanding air in the lungs to escape harmlessly, or entrapment of air due to circumstances beyond the control of the diver. Lung overpressure can lead to fatal or disabling injury, and can occur during training exercises, even when reasonable precautions have been taken. There is some evidence that a full exhalation at the start of the ascent in the "blow and go" scenario can lead to partial collapse of some of the smaller air passages, and that these can then trap air during the ascent sufficiently to cause tissue rupture and air embolism. The procedure of slowly letting the air escape during ascent can also be taken too far, and not allow the air to escape fast enough, with similar consequences. Attempting to breathe off the empty cylinder is one way of potentially avoiding these problems, as this has the double advantage of keeping the airways open more reliably, and in most cases allowing the diver several more breaths during the ascent as the reduced ambient pressure allows more of the residual cylinder air to pass through the regulator and become available to the diver. A 10-litre cylinder ascending 10 metres will produce an extra 10 litres of free air (reduced to atmospheric pressure). At a tidal volume of about 1 litre this would give several breaths during ascent, with increased effectiveness nearer the surface. Of course this air is not available in some cases, such as a rolled-off cylinder valve, burst hose, blown o-ring, or lost second stage, where the failure is not simply breathing all the air down to the pressure where the regulator stops delivering, but if it is possible, the demand valve can be kept in the mouth and the diver can continue to attempt to breathe from it during a free ascent.
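The arithmetic behind those extra breaths can be made explicit. A minimal sketch, assuming the simple approximation of 1 bar of ambient pressure change per 10 m of seawater and that the regulator stops delivering once cylinder pressure falls to ambient:

```python
def extra_free_air_litres(cylinder_volume_l: float, ascent_m: float) -> float:
    """Air freed from an 'empty' cylinder as ambient pressure drops.

    A regulator stops delivering roughly when cylinder pressure falls
    to ambient pressure.  Ascending reduces ambient pressure by about
    1 bar per 10 m of seawater, so the cylinder can deliver an extra
    (ascent_m / 10) bar times its internal volume in litres of free
    air, measured at atmospheric pressure.
    """
    pressure_drop_bar = ascent_m / 10.0
    return cylinder_volume_l * pressure_drop_bar


extra = extra_free_air_litres(cylinder_volume_l=10.0, ascent_m=10.0)
tidal_volume_l = 1.0  # per-breath volume used in the text above
print(f"{extra:.0f} L of free air, about {extra / tidal_volume_l:.0f} breaths")
# -> 10 L of free air, about 10 breaths
```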
Hazards:
Loss of consciousness due to hypoxia One of the dangers of a free ascent is hypoxia due to using up the available oxygen during the ascent. This can be aggravated if the diver fully exhales at the start of the ascent in the "blow and go" technique, if the diver is so heavy that swimming upwards requires strong exertion, or if the diver is already stressed and short of breath when the air supply is lost. Loss of consciousness during ascent is likely to lead to drowning, particularly if the unconscious diver is negatively buoyant at that point and sinks. On the other hand, a fit diver leaving the bottom with a moderate lungful of air, relatively unstressed, and not overexerted, will usually have sufficient oxygen available to reach the surface conscious by direct swimming ascent with constant exhalation at a reasonable rate of between 9 and 18 metres per minute from recreational diving depths (30 m or less), provided his or her buoyancy is close to neutral at the bottom.
Hazards:
Decompression sickness The risk of decompression sickness during an emergency ascent is probably no greater than the risk during a normal ascent at the same ascent rate after the same dive profile. In effect, the same ascent rate and decompression profile should be applied in an emergency ascent as in a normal ascent, and if there is a decompression requirement in the planned dive, steps should be taken to mitigate the risk of having to make an ascent without stops. The most straightforward and obviously effective method is for the diver to carry a bailout set sufficient to allow the planned ascent profile if the primary gas supply fails. This makes each diver independent of the availability of air from a buddy, but may cause extra task loading and physical loading of the diver due to the extra equipment needed. This method is extensively used by commercial and scientific divers, solo recreational divers, and some technical and recreational divers who prefer self-reliance.
Hazards:
When all else fails, the consequences of missing some decompression time are usually less severe than death by drowning.
Drowning Drowning is the most likely consequence of a failure to reach the surface during an independent emergency ascent, and is a significant risk even if the diver reaches the surface if he or she loses consciousness on the way.
Mitigation of hazards:
The most generally effective method is for each diver to carry an independent bailout set sufficient to safely reach the surface, after completing all required decompression for the planned dive profile. This is relatively expensive, and many recreational divers have never been trained in this skill, so carrying and using the equipment may impose unacceptable additional task loading.
Mitigation of hazards:
An economical and effective method of reducing risk while sharing air is use of secondary (octopus) demand valves. This is effective only if the buddy is available for sharing at the time of the emergency.
If it is possible, the demand valve can be kept in the mouth and the diver can continue to attempt to breathe from it during a free ascent.
Mitigation of hazards:
If the diver is in reasonable doubt of remaining conscious all the way to the surface, positive buoyancy provided by either suit or BC inflation, or by shedding weights can ensure that if the diver does lose consciousness, he/she will at least float to the surface, where there is a better chance of rescue than sinking back to the bottom and almost certainly drowning.
Mitigation of hazards:
Diving in teams of two or three divers who are adequately trained and equipped with similar equipment so that emergency procedures are facilitated, and ensuring that the team are always close enough to respond in time to an emergency.
The diver should not waste time while making the choice of which emergency ascent procedure to use. A controlled swimming ascent is the most recommended default for recreational diving. Divers who venture beyond the safe zone for controlled swimming ascent should be prepared for their most appropriate option at all times.
Some lung pathologies increase the risk of lung overpressure injury significantly. Divers can inform themselves of these increased risks by undergoing appropriate medical examinations.
In the event that a free ascent is required, the lung volume should neither be too large nor too small, as both extremes increase the risk of injury. A volume within the normal relaxed range should be suitable. Forceful exhalation before ascent increases the risk of lung injury, and reduces the available oxygen.
Pre-dive discussions and checks to ensure that all members of the dive team are aware of and agree with the procedures to be used if there is an emergency during the dive, and that they are all familiar with the equipment and equipment configuration of all members of the team.
Adequate emergency ascent procedure training, and sufficient practice to remain adept in the requisite skills.
During octopus assisted or buddy breathing ascents, divers should remain in close contact, and keep control of their buoyancy.
A first stage regulator which is to be used with an octopus demand valve should be able to supply the required flow rate without freezing up if the water is cold.
Freediving:
In freediving the usual emergency ascent involves ditching the diver's weightbelt to increase buoyancy and reduce the effort required. This generally establishes positive buoyancy and gives the diver a chance of not drowning if they lose consciousness before reaching the surface and are assisted by another diver, or are lucky enough to float face upwards and draw a breath. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fluxion**
Fluxion:
A fluxion is the instantaneous rate of change, or gradient, of a fluent (a time-varying quantity, or function) at a given point. Fluxions were introduced by Isaac Newton to describe his form of a time derivative (a derivative with respect to time). Newton introduced the concept in 1665 and detailed them in his mathematical treatise, Method of Fluxions. Fluxions and fluents made up Newton's early calculus.
History:
Fluxions were central to the Leibniz–Newton calculus controversy, when Newton sent a letter to Gottfried Wilhelm Leibniz explaining them, but concealing his words in code due to his suspicion. He wrote: I cannot proceed with the explanations of the fluxions now, I have preferred to conceal it thus: 6accdæ13eff7i3l9n4o4qrr4s8t12vx.
The gibberish string was in fact a hash code (by denoting the frequency of each letter) of the Latin phrase Data æqvatione qvotcvnqve flventes qvantitates involvente, flvxiones invenire: et vice versa, meaning: "Given an equation that consists of any number of flowing quantities, to find the fluxions: and vice versa".
Example:
If the fluent $y$ is defined as $y = t^2$ (where $t$ is time), the fluxion (derivative) at $t = 2$ is:
$$\dot{y} = \frac{\Delta y}{\Delta t} = \frac{(2+o)^2 - 2^2}{(2+o) - 2} = \frac{4 + 4o + o^2 - 4}{2 + o - 2} = \frac{4o + o^2}{o}$$
Here $o$ is an infinitely small amount of time. The term $o^2$ is an infinitesimal of second order, and according to Newton it can be ignored because of its second-order smallness compared with the first-order smallness of $o$. The final equation therefore takes the form:
$$\dot{y} = \frac{\Delta y}{\Delta t} = \frac{4o}{o} = 4$$
He justified the use of $o$ as a non-zero quantity by stating that fluxions were a consequence of movement by an object.
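In modern notation the same computation is expressed as the limit of a difference quotient, which sidesteps the objection that $o$ is treated as non-zero and then discarded. A short restatement in the modern form:

```latex
\dot{y}\big|_{t=2}
  = \lim_{o \to 0} \frac{(2+o)^2 - 2^2}{(2+o) - 2}
  = \lim_{o \to 0} \left( 4 + o \right)
  = 4
```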
Criticism:
Bishop George Berkeley, a prominent philosopher of the time, denounced Newton's fluxions in his essay The Analyst, published in 1734. Berkeley refused to believe that they were accurate because of the use of the infinitesimal $o$. He did not believe it could be ignored and pointed out that if it was zero, the consequence would be division by zero. Berkeley referred to them as "ghosts of departed quantities", a statement which unnerved mathematicians of the time and led to the eventual disuse of infinitesimals in calculus.
Criticism:
Towards the end of his life Newton revised his interpretation of $o$ as infinitely small, preferring to define it as approaching zero, using a definition similar to the modern concept of a limit. He believed this put fluxions back on safe ground. By this time, Leibniz's derivative (and his notation) had largely replaced Newton's fluxions and fluents, and remains in use today. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CSTF2**
CSTF2:
Cleavage stimulation factor 64 kDa subunit is a protein that in humans is encoded by the CSTF2 gene. This gene encodes a nuclear protein with an RRM (RNA recognition motif) domain. The protein is a member of the cleavage stimulation factor (CSTF) complex that is involved in the 3' end cleavage and polyadenylation of pre-mRNAs. Specifically, this protein binds GU-rich elements within the 3'-untranslated region of mRNAs.
Interactions:
CSTF2 has been shown to interact with CSTF3, SUB1, SYMPK, BARD1 and BRCA1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gay bar**
Gay bar:
A gay bar is a drinking establishment that caters to an exclusively or predominantly lesbian, gay, bisexual, transgender or queer (LGBTQ+) clientele; the term gay is used as a broadly inclusive concept for LGBTQ+ communities.
Gay bar:
Gay bars once served as the centre of gay culture and were one of the few places people with same-sex orientations and gender-variant identities could openly socialize. Other names used to describe these establishments include boy bar, girl bar, gay club, gay pub, queer bar, lesbian bar, drag bar, and dyke bar, depending on the niche communities that they served.
Gay bar:
With the advent of the Internet and an increasing acceptance of LGBTQ+ people across the Western world, the relevance of gay bars in the LGBTQ+ community has somewhat diminished. In areas without a gay bar, certain establishments may hold a gay night instead.
History:
Gathering places favoured by homosexuals have operated for centuries. Reports from as early as the 17th century record the existence of bars and clubs that catered to, or at least tolerated, openly gay clientele in several major European cities. The White Swan (created by James Cook and Yardley, full name unknown), on Vere Street, in London, England, was raided in 1810 during the so-called Vere Street Coterie. The raid led to the executions of John Hepburn and Thomas White for sodomy. The site was the scene of alleged gay marriages carried out by the Reverend John Church. It is not clear which place was the first gay bar in the modern sense. In Cannes, France, such a bar had already opened in 1885, and there were many more in Berlin around 1900. In the United Kingdom and the Netherlands gay bars were established throughout the first quarter of the 20th century.
History:
France The very first gay bar in Europe and probably in the world was the Zanzibar in Cannes on the French Riviera. The Zanzibar was opened in 1885 and existed for 125 years, before it was closed in December 2010. Among its visitors were many artists, like actor Jean Marais and comedians Thierry Le Luron and Coluche.
History:
Paris became known as a centre for gay culture in the 19th century, making the city a queer capital during the early 20th century, when the Montmartre and Pigalle districts were meeting places of the LGBTQ+ community. Although Amsterdam, Berlin, and London had more meeting places and organizations than Paris, the latter was known for the "flamboyance" of LGBTQ+ quarters and "visibility" of LGBTQ+ celebrities. Paris retained the LGBTQ+ capital image after the end of World War II, but the center of the meeting places shifted to Saint-Germain-des-Prés. In the 1950s and 1960s the police and authorities tolerated homosexuals as long as the conduct was private and out of view, but gay bar raids occurred and there were occasions when the owners of the bars were involved in facilitating the raids. Lesbians rarely visited gay bars and instead socialized in circles of friends. Lesbians who did go to bars often originated from the working class. Chez Moune, opened in 1936, and New Moon were 20th-century lesbian cabarets located in Place Pigalle, which converted to mixed music clubs in the 21st century. Since the 1980s, the Le Marais district has been the center of the gay scene in Paris.
History:
Germany In Berlin, there was already gay and lesbian nightlife around 1900, which throughout the 1920s became very open and vibrant, especially when compared with other capitals. Especially in the Schöneberg district around Nollendorfplatz there were many cafes, bars and clubs, which also attracted gay people who had to flee their own countries in fear of prosecution, such as Christopher Isherwood. The gay club Eldorado in the Motzstraße was internationally known for its transvestite shows. There was also a relatively high number of places for lesbians. Within a few weeks after the Nazis took over government in 1933, fourteen of the best known gay establishments were closed. After homosexuality was decriminalized in 1969, many gay bars opened in West Berlin, resulting in a lively gay scene.
History:
In Munich, a number of gay and lesbian bars are documented as early as the Golden Twenties. Since the 1960s, the Rosa Viertel (pink quarter) developed in the Glockenbachviertel and around Gärtnerplatz, which in the 1980s made Munich "one of the four gayest metropolises in the world" along with San Francisco, New York City and Amsterdam. In particular, the area around Müllerstraße and Hans-Sachs-Straße was characterized by numerous gay bars and nightclubs. One of them was the travesty nightclub Old Mrs. Henderson, where Freddie Mercury, who lived in Munich from 1979 to 1985, filmed the music video for the song Living on My Own at his 39th birthday party. Other gay venues include Pompon Rouge, Mandy's Club, Pimpernel nightclub, the bar Mylord, the Ochsengarten, which was "Germany's first bar for leather men", as well as the gay hotel-pub Deutsche Eiche. Regulars in many of these bars and nightclubs included, for example, Freddie Mercury, Rainer Werner Fassbinder, Walter Sedlmayr (who met his later murderer in the Pimpernel), Inge Meysel and Hildegard Knef.
History:
United Kingdom In the 18th century, molly houses were clandestine clubs where gay men could meet, drink, dance and have sex with each other. One of the most famous was Mother Clap's Molly House. The first gay bar in Britain in the modern sense was The Cave of the Golden Calf, established as a night club in London. It opened in an underground location at 9 Heddon Street, just off Regent Street, in 1912 and became a haunt for the wealthy, aristocratic and bohemian. Its creator Frida Strindberg née Uhl set it up as an avant-garde and artistic venture. The club provided a solid model for future nightclubs.
History:
After homosexuality was decriminalized in the UK in 1967, gay bar culture became more visible and gradually Soho became the centre of the London LGBTQ+ community, which was "firmly established" by the early 1990s. Gay bars, cafes, restaurants and clubs are centred on Old Compton Street.
Other cities in the UK also have districts or streets with a concentration of gay bars, like for example Stanley Street Quarter in Liverpool, Canal Street in Manchester and the Birmingham Gay Village.
History:
Netherlands In Amsterdam, there were already a few gay bars in the first quarter of the 20th century. The best known was The Empire, in Nes, which was first mentioned in 1911 and existed until the late 1930s. The oldest that still exists is Café 't Mandje, which was opened in 1927 by lesbian Bet van Beeren. It closed in 1982, but was reopened in 2008.
History:
After World War II, the Amsterdam city government acted rather pragmatically and tolerated the existence of gay bars. In the 1960s their number grew rapidly and they clustered in and around a number of streets, although this was limited to bars, clubs and shops; unlike the gay villages in the US, these streets never became gay residential areas.
History:
Since the late 1950s the main Amsterdam gay street was Kerkstraat, which was succeeded by Reguliersdwarsstraat in the early 1980s, when the first openly gay places opened here, like the famous cafe April in 1981, followed by dancing Havana in 1989. Other streets where there are still concentrations of gay bars are Zeedijk, Amstel and Warmoesstraat, the latter being the center of the Amsterdam leather scene, where the first leather bar already opened around 1955. The Queen's Head is a gay bar located at Zeedijk 20 in the centre of Amsterdam.
Denmark The bar Centralhjørnet in Copenhagen opened in 1917 and became a gay bar in the 1950s. It now claims to be one of the oldest gay bars in Europe. The main Copenhagen gay district is the Latin Quarter.
History:
Russia Because of the high prevalence of homophobia in Russia, patrons of gay bars there often have had to be on the alert for bullying and attacks. In 2013, Moscow's largest gay bar, Central Station, had its walls sprayed with gunfire, had harmful gas released into a crowd of 500 patrons, and had its ceiling nearly brought down by a gang who wanted to crush the people inside. Nonetheless, gay nightlife is increasing in Moscow and St. Petersburg, offering drag shows and Russian music, with some bars also offering discreet gay-only taxi services.
History:
Spain Under the dictatorship of General Francisco Franco from 1939 to 1975, homosexuality was illegal. However, in 1962, Spain's first gay bar, Tony's, opened in Torremolinos and a clandestine gay bar scene also emerged in the 1960s and early 1970s in Barcelona.
History:
United States There are many institutions in the United States that claim to be the oldest gay bar in that country. Since Prohibition ended in 1933, a number of notable gay bars have opened: The Atlantic House in Provincetown, Massachusetts, was constructed in 1798 and was a tavern and stagecoach stop before becoming a de facto gay bar after artists and actors, including Tennessee Williams, began spending summers in Provincetown in the 1920s.
History:
The Black Cat Bar, founded in 1906 and operated again after Prohibition was ended in 1933, was located in San Francisco's North Beach neighborhood and was the focus of one of the earliest victories of the homophile movement. In 1951, the California Supreme Court affirmed the right of homosexuals to assemble in a case brought by the heterosexual owner of the bar.
History:
One of the first lesbian bars was the famous Eve's Hangout, also called Eve Adams Tearoom. It closed after a police raid in 1926. Eva Kotchever, the owner, was deported to Europe and murdered at Auschwitz.
The Black Cat Tavern opened in November 1966 and was one of many LGBTQ+ bars to be raided, which happened on New Year's Day in 1967. It is now considered a Los Angeles Historic-Cultural Monument.
The Double Header in Seattle's Pioneer Square is claimed to be the oldest gay bar on the North American West Coast, operating since 1933.
History:
Esta Noche was the first gay Latino bar in San Francisco; it opened in 1979. It was located on Mission Street and 16th Street. It closed down in 1997 as one of the last gay Latino bars in the Mission District. Maud's Study (961 Cole Street, San Francisco), featured in the film Last Call at Maud's, was a lesbian bar which was founded by Rikki Streicher in 1966 and closed in September 1989. At closing, it claimed to be the oldest continuously operating lesbian bar. It closed during the AIDS crisis, when a "clean and sober" mentality forced many bars out of business.
History:
In New York City, the modern gay bar dates to Julius Bar, founded by local socialite Matthew Nicol, where the Mattachine Society staged a "Sip-In" on 21 April 1966 challenging a New York State Liquor Authority rule that prohibited serving alcoholic beverages to gays on the basis that they were considered disorderly. The court ruling in the case that gays could peacefully assemble at bars would lead to the opening of the Stonewall Inn a block southwest in 1967, which in turn led to the 1969 Stonewall Riots. Julius is New York City's oldest continuously operating gay bar.
History:
Korner Lounge (1933) of Shreveport, Louisiana is believed to be the second oldest continuously operating gay bar in the country.
Cafe Lafitte in Exile in New Orleans, dating back to 1933 and the end of Prohibition, claims to be the oldest continuously operating gay bar in the United States.
The White Horse Inn in Oakland, California, operating legally since the end of Prohibition and likely also operating during the period when sales of alcohol were banned in the U.S., also claims to be the oldest gay bar in operation.
History:
Mexico Because of a raid on a Mexico City drag ball in 1901, when 41 men were arrested, the number 41 has come to symbolize male homosexuality in Mexican popular culture, figuring frequently in jokes and in casual teasing. The raid on the "Dance of the 41" was followed by a less-publicized raid of a lesbian bar on 4 December 1901 in Santa Maria. Despite the international depression of the 1930s and along with the social revolution overseen by Lázaro Cárdenas (1934–1940), the growth of Mexico City was accompanied by the opening of gay bars and gay bathhouses. During the Second World War, ten to fifteen gay bars operated in Mexico City, with dancing permitted in at least two, El África and El Triunfo. Relative freedom from official harassment continued until 1959 when Mayor Ernesto Uruchurtu closed every gay bar following a grisly triple-murder. But by the late 1960s several Mexican cities had gay bars and, later, U.S.-style dance clubs. These places, however, were sometimes clandestine but tolerated by local authorities, which often meant that they were allowed to exist so long as the owners paid bribes. A fairly visible presence was developed in large cities such as Guadalajara, Acapulco, Veracruz and Mexico City. Today, Mexico City is home to numerous gay bars, many of them located in the Zona Rosa, particularly on Amberes street, while a broad and varied gay nightlife also flourishes in Guadalajara, Acapulco, in Cancun attracting global tourists, Puerto Vallarta which attracts many Americans and Canadians, and Tijuana with its cross-border crowd. However, there are at least several gay bars in most major cities.
History:
Singapore The first recorded use of the term "gay bar" is in the diaries of homosexual British comedian Kenneth Williams: "16 January 1947. Went round to the gay bar which wasn't in the least gay." At the time Williams was serving in the British Army in Singapore. In the 1970s, straight nightclubs began to open their doors to gay clients on designated nights of the week. In the 1980s, a lesbian bar named Crocodile Rock opened in Far East Plaza, which remains to this day the oldest lesbian bar in Singapore. Today, many gay bars are located on the Neil Road stretch, from Taboo and Tantric, to Backstage Bar, May Wong's Café, DYMK and Play. Mega-clubs like Zouk and Avalon are also a big draw for the gay crowd.
History:
China The oldest gay bar in Beijing is the Half-and-Half, which in 2004 had been open over ten years. The first lesbian bar in China (also in Beijing) was Maple Bar, opened in 2000 by pop singer Qiao Qiao. The On/Off was a popular bar for both gay men and lesbians. The increase in China's gay and lesbian bars in recent years is linked to China's opening up to global capitalism and its consequent economic and social restructuring.
History:
Japan The oldest continuously operating Japanese gay bar, New Sazae, opened in Tokyo in 1966. Most gay bars in Tokyo are located in the Shinjuku Ni-chōme district, which is home to about 300 bars. Each bar may only have room to seat about a dozen people; as a result, many bars are specialized according to interest.
History:
South Korea In Seoul, most gay bars were originally congregated near the Itaewon area of Seoul, near the U.S. military base. But in recent years, more clubs have located in the Sinchon area, indicating that "safe spaces" for Korean LGBTQ+ people have extended beyond the foreign zones, which were traditionally more tolerant. One male bar patron said Korean bar culture was not as direct as in the United States, with customers indicating their interest in another customer by ordering him a drink through a waiter. The oldest lesbian bar in Seoul is Lesbos, which started in 1996.
History:
Jordan Jordan's most famous and oldest gay-friendly establishment is a combination bar/cafe/restaurant and bookshop in Amman called Books@cafe, opened in 1997. When the bar was first opened, it was infiltrated by government undercover agents who were concerned about its effect on public morality and outed the owner as homosexual to his family and friends. Now, however, the owner claims to have no problem with the government and has since opened a second establishment.
History:
South Africa The history of gay and lesbian bars in South Africa reflects the racial divisions that began in the Apartheid era and continue, to some extent, in the 21st century. The first white gay bar opened in the Carlton Hotel in downtown Johannesburg in the late 1940s, catering exclusively to men of wealth. In the 1960s, other urban bars began to open that drew more middle and working class white men; lesbians were excluded. The language of Gayle had its roots in the Cape Coloured and Afrikaans-speaking underground gay bar culture. In 1968, when the government threatened to pass repressive anti-gay legislation, queer culture went even further underground, which meant clubs and bars were often the only places to meet. These bars were often the targets of police raids. The decade of the 1970s was when urban gay clubs took root. The most popular gay club of Johannesburg was The Dungeon, which attracted females as well as males, and lasted until the 1990s. The 1979 police assault on the New Mandy's Club, in which patrons fought back, has been referred to as South Africa's Stonewall. In the 1980s, police raids on white gay clubs lessened as the apartheid government forces found themselves dealing with more and more resistance from the black population. In the black townships, some of the shebeens, unlicensed bars established in people's homes and garages, catered to LGBTQ clients. During the struggle against apartheid, some of these shebeens were important meeting places for black gay and lesbian resistance fighters. Lee's, a shebeen in Soweto, for example, was used as a meeting place for black gay men who were part of the Gay Association of South Africa (GASA) but did not feel welcome in the GASA offices. With the establishment of the post-apartheid 1996 constitution that outlawed discrimination based on sexual orientation as well as race, South Africa's gay night life exploded, though many bars continued to be segregated by race, and fewer blacks than whites go to the urban bars. The 2005 inaugural gay shebeen tour was advertised as a gay pub crawl that would provide an opportunity for South Africans and foreigners to "experience true African gay Shebeen culture".
HIV/AIDS impact:
Gay bars have been heavily impacted by the HIV/AIDS epidemic. For example, San Francisco had over 100 gay bars when the epidemic hit in the early 1980s; by 2011 there were only about 30 remaining. Millions of gay men around the world died during the worst years of the epidemic (before effective treatment) which resulted in fewer gay men owning and patronizing gay bars.
HIV/AIDS impact:
Gay bars have always been a place of refuge and support for gay men impacted by the virus. Many fundraising, testing, support group, and free condom events are present at gay bars.
Today:
A few commentators have suggested that gay bars are facing decline in the contemporary age due to the pervasiveness of technology. Andrew Sullivan argued in his 2005 essay "The End of Gay Culture" that gay bars are declining because "the Internet dealt them a body blow. If you are merely looking for sex or a date, the Web is now the first stop for most gay men". June Thomas explained the decline by noting that there is less need for gay-specific venues like bars because gay people are less likely to encounter discrimination or be made unwelcome in wider society. Entrepreneur magazine in 2007 included them on a list of ten types of business that would be extinct by 2017, along with record stores, used bookstores and newspapers. Many commentators have argued there has been some recent decline in gay-specific venues, mainly due to the modern effects of gentrification. But despite the decline, gay bars still exist in relatively strong numbers and thrive in most major cities where male homosexuality is not heavily condemned. These commentators also asserted that most gay men never stopped finding great value in gay-specific venues and being in the company of other gay men. Unlike gay bars, lesbian bars have become a rarity around the world. Many articles have been published discussing possible reasons as to why lesbian bars struggle to exist despite a growing lesbian population.
Background:
Like most bars and pubs, gay bars range in size from the small, five-seat bars of Tokyo to large, multi-story clubs with several distinct areas and more than one dance floor. A large venue may be referred to as a nightclub, club, or bar, while smaller venues are typically called bars and sometimes pubs. The only defining characteristic of a gay bar is the nature of its clientele. While many gay bars target the gay and/or lesbian communities, some (usually older and firmly established) gay bars have become gay, as it were, through custom, over a long period of time.
Background:
The serving of alcohol is the primary business of gay bars and pubs. Like non-gay establishments they serve as a meeting place and LGBTQ+ community focal point, in which conversation, relaxation, and meeting potential romantic and sexual partners are the primary focus of the clientele. Historically, and continuing in many communities, gay bars have been valued by patrons as the only place closeted gay men and lesbians can be open and demonstrative about their sexuality without fear of discovery. Gerard Koskovich of the Gay, Lesbian, Bisexual, Transgender Historical Society explains that "[Gay bars] were a public place where gay people could meet and start to have a conversation, where they didn't feel like sexual freaks or somehow not part of the larger social fabric; from that came culture, politics, demands for equal rights." Gay bars traditionally preferred to remain discreet and virtually unidentifiable outside the gay community, relying exclusively on word of mouth promotion. More recently, gay clubs and events are often advertised by handing out eye-catching flyers on the street, in gay or gay-friendly shops and venues, and at other clubs and events. Similar to flyers for predominantly heterosexual venues, these flyers frequently feature provocative images and theme party announcements.
Background:
While traditional gay pub-like bars are nearly identical to bars catering to the general public, gay dance venues often feature elaborate lighting design and video projection, fog machines and raised dancing platforms. Hired dancers (called go-go girls or go-go boys) may also feature in decorative cages or on podiums. Gay sports bars are relatively unusual, but it is not unusual for gay bars to sponsor teams in local sports/game leagues, and many otherwise traditional gay pubs are well known for hosting post-game parties—often filling with local gay athletes and their fans on specific nights or when major professional sporting events are broadcast on TV. Some of the longest established gay bars are unofficial hosts of elaborate local 'Royal Court' drag pageants and drag-related social groups.
Background:
Gay bars and nightclubs are sometimes segregated by sex. In some establishments, people who are perceived to be of the "wrong" sex (for example, a man attempting to enter a women's club) may be unwelcome or even barred from entry. This may be more common in specialty bars, such as gay male leather fetish or BDSM bars, or bars or clubs which have a strict dress code. It is also common in bars and clubs where sex on the premises is a primary focus of the establishment. On the other hand, gay bars are usually welcoming of transgender and cross-dressed people, and drag shows are a common feature in many gay bars, even men-only spaces. Some gay bars and clubs which have a predominantly male clientele, as well as some gay bathhouses and other sex clubs, may offer occasional women-only nights.
Background:
A few gay bars attempt to restrict entry to only gay or lesbian people, but in practice this is difficult to enforce. Most famously, Melbourne's Peel Hotel was granted an exemption from Australia's Equal Opportunities Act by a state tribunal, on the grounds that the exemption was needed to prevent "sexually-based insults and violence" aimed at the pub's patrons. As a result of the decision, the pub is legally able to advertise as a "gay only" establishment, and door staff can ask people whether they are gay before allowing them inside, and can turn away non-gay people.
Background:
Already categorized as gay or lesbian, many gay bars in larger cities/urban areas take this sub-categorization a step further by appealing to distinct subcultures within the gay community. Some of these sub-cultures are defined by costume and performance. These bars often forge a like-minded community in dozens of cities with leather gay bars, line-dancing gay bars, and drag revues. Other subcultures cater to men who fit a certain type, one that is often defined by age, body type, personality, and musical preference. There are some bars and clubs that cater more to a working class/blue collar crowd and some that cater to a more upscale clientele. There are gay bars that cater to "twinks" (young, smooth-bodied pretty boys) and others that cater to bears (older, larger, hairier alternatives to the well-manicured and fey gay stereotype). There are also gay bars that cater to certain races, such as ones for Asian men "and their admirers", Latin men, or black men.
Background:
Gay cruise bar A variation of the gay bar is the gay cruise bar. Gay bars normally prohibit sexual activity other than kissing or flirting on the premises; cruise bars, however, allow sex to happen on their property. Cruise bars have a secured entrance door so that only adults can enter, a cloakroom area to allow patrons to change, and seating that allows sexual activity to happen. There is usually an entrance charge, though on special occasions it is waived. Mobile phones are banned for privacy reasons. Notable cruise bars include Vault 139 and Bunker Bar in London.
Music:
Music, either live or, more commonly, mixed by a disc jockey (DJ), is often a prominent feature of gay bars. Typically, the music in gay bars include pop, dance, contemporary R&B, house, trance, and techno. In larger North American cities and in Australia, one or more gay bars with a country music theme and line dancing are also common, as are bars known for retro 1960s pop and "Motown Sound."
List of gay bars:
This is not a complete list of gay bars around the world.
- Argentina: Amerika, Buenos Aires
- Canada: Woody's, Toronto
- Colombia: Theatron, Bogotá
- Denmark: Pan Club Copenhagen (closed 2007)
- Finland: DTM, Helsinki; Hercules, Helsinki
- Ireland: The George, Dublin; Panti Bar, Dublin
- Netherlands: Café 't Mandje
- Puerto Rico: Loverbar
- Thailand: Sunee Plaza, a close collection of gay bars
- United Kingdom
- United States
List of lesbian bars:
While some gay bars open their doors to all LGBTQ people, other bars cater specifically to lesbians. In recent years many popular lesbian bars have closed down. In 2015, JD Samson made a documentary exploring the very few remaining lesbian bars in the United States.
United States:
- As You Are Bar, Washington, DC
- Babes of Carytown, Richmond, VA
- Blush & Blu, Denver, CO
- Chances Bar, Houston (closed)
- Henrietta Hudson, New York, NY
- Herz, Mobile, AL
- Maud's, San Francisco (closed)
- Peg's Place, San Francisco (closed)
- Phase 1, Washington, D.C. (closed)
- Slammers, Columbus, OH
- Sue Ellen's, Dallas, TX
- The Lexington Club, San Francisco, CA (closed)
- Toasted Walnut, Philadelphia, PA (closed)
- Wildrose, Seattle, WA
- Wild Side West, San Francisco, CA
United Kingdom:
- Candy Bar, Soho (closed)
- Gateways club (London) (closed) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gluten exorphin**
Gluten exorphin:
Gluten exorphins are a group of opioid peptides formed during the digestion of the gluten protein. These peptides work as external regulators of gastrointestinal movement and hormonal release. The breakdown of gliadin, a protein component of wheat gluten, creates amino acids that stop the gluten epitopes from entering the immune system and activating inflammatory reactions. Because gluten does not fully break down during this process, gluten exorphins accumulate, and researchers suspect that this accumulation may contribute to various diseases. Research shows the benefits of gluten- and casein-free diets for people with diseases and disorders connected to gluten exorphins, although the mechanism behind this is still unknown. It is possible that gluten has deleterious effects on the human digestive system. In people who are more susceptible to gluten and casein allergies, a weakened intestinal lining can allow gluten exorphins to pass into the bloodstream.
Categorization:
There are four gluten exorphins with known structures:
- Gluten exorphin A5: H-Gly-Tyr-Tyr-Pro-Thr-OH; chemical formula C29H37N5O9; molecular weight 599.64 g/mol
- Gluten exorphin B4: H-Tyr-Gly-Gly-Trp-OH; chemical formula C24H27N5O6; molecular weight 481.50 g/mol
- Gluten exorphin B5: H-Tyr-Gly-Gly-Trp-Leu-OH; chemical formula C30H38N6O7; molecular weight 594.66 g/mol
- Gluten exorphin C: H-Tyr-Pro-Ile-Ser-Leu-OH; chemical formula C29H45N5O8; molecular weight 591.70 g/mol
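As a quick consistency check on the masses listed above, a peptide's average molecular weight can be recomputed as the sum of the free amino acid masses minus one water per peptide bond. A minimal sketch (residue masses are standard average values; the helper names are assumptions for the example):

```python
# Recompute the molecular weights above: average masses of the free
# amino acids (g/mol), with one water (18.02 g/mol) lost per peptide bond.
AA_MASS = {
    "Gly": 75.07, "Tyr": 181.19, "Pro": 115.13, "Thr": 119.12,
    "Trp": 204.23, "Leu": 131.17, "Ile": 131.17, "Ser": 105.09,
}
H2O = 18.02

def peptide_mass(residues):
    """Average mass of a linear peptide given as a list of residue codes."""
    return sum(AA_MASS[r] for r in residues) - (len(residues) - 1) * H2O

exorphins = {
    "A5": ["Gly", "Tyr", "Tyr", "Pro", "Thr"],
    "B4": ["Tyr", "Gly", "Gly", "Trp"],
    "B5": ["Tyr", "Gly", "Gly", "Trp", "Leu"],
    "C":  ["Tyr", "Pro", "Ile", "Ser", "Leu"],
}
for name, seq in exorphins.items():
    print(f"Gluten exorphin {name}: {peptide_mass(seq):.2f} g/mol")
# Output matches the listed weights to within rounding of the residue masses.
```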
Clinical significance:
Recent research surrounding gluten exorphins has revolved around how the peptides might play a role in various diseases and disorders.
Celiac disease In response to gluten, people with celiac disease will release gluten exorphins as part of the immune response. Because celiac disease weakens the intestinal walls, some of these gluten exorphins can make their way through the lining of the intestines and are then absorbed into the bloodstream. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Equative case**
Equative case:
Equative is a case prototypically expressing the standard of comparison of equal values ("as… as a …"). The equative case has been used in very few languages in history. It was used in the Sumerian language, where it also took on the semantic functions of the essive case ("in the capacity of…") and similative case ("like a…"). In Sumerian, the equative was formed by adding the suffix -gin7 to the end of a noun phrase. In Ossetic it is formed by the ending -ау [aw]. It is found subdialectally among some speakers of the Khalkha dialect of Mongolian, where it is formed by the endings -цаа [tsaa], -цоо [tsoo], -цээ [tsee] or -цөө [tsöö], depending on the vowel harmony of the noun; there it is quite rare and very specific, referring to the height or level of an object. The case is also found in the Turkic Khalaj language and in South American languages such as Quechua, Aymara, Uro and Cholón. Welsh, though it has no equative case of nouns, has an equative degree of adjectives, normally shown by the suffix -ed: for example, "hyned" (â ...), meaning "as old" (as ...). Sireniki Eskimo had an equative (or comparative) case for describing similarities between nouns. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Perian**
Perian:
Perian is a discontinued open-source QuickTime component that enabled Apple Inc.’s QuickTime to play several popular video formats not supported natively by QuickTime on macOS. It was a joint development of several earlier open source components based on the multiplatform FFmpeg project's libavcodec and libavformat, as well as liba52 and libmatroska.
It has been featured as the "Download of the Day" on Lifehacker, as well as on several popular blogs including Ars Technica and The Unofficial Apple Weblog.
Project shutdown:
On 15 May 2012, the Perian project managers announced on their website that they were shutting down support for the project. In the announcement, they recommended that users look to other products, such as Niceplayer, VLC or MPlayer OS X. They indicated that Perian's source code would be posted online for any developer who wanted to continue the project. One continuation based on the source code is actively maintained but does not support QuickTime for OS X Mavericks or later.
Supported formats:
Perian lent QuickTime support for many combinations of video, audio, text, and container formats. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Female**
Female:
An organism's sex is female (symbol: ♀) if it produces the ovum (egg cell), the type of gamete (sex cell) that fuses with the male gamete (sperm cell) during sexual reproduction. A female has larger gametes than a male. Females and males are the result of an anisogamous reproduction system, in which gametes are of different sizes (unlike isogamy, where they are the same size). The exact mechanism of female gamete evolution remains unknown.
Female:
In species that have males and females, sex-determination may be based on either sex chromosomes, or environmental conditions. Most female mammals, including female humans, have two X chromosomes. Female characteristics vary between different species, with some species having pronounced secondary female sex characteristics, such as the presence of pronounced mammary glands in mammals.
In humans, the word female can also be used to refer to gender in the social sense of gender role or gender identity.
Etymology and usage:
The word female comes from the Latin femella, the diminutive form of femina, meaning "woman". It is not etymologically related to the word male, but in the late 14th century the English spelling was altered to parallel that of male. Female is also used as a noun meaning "a female organism", though describing women as females is often considered disparaging, as it makes no distinction between other animals and humans.Biological sex is conceptually distinct from gender, although they are often used interchangeably. The adjective female can describe a person's sex or gender identity.The word can also refer to the shape of connectors and fasteners, such as screws, electrical pins, and technical equipment. Under this convention, sockets and receptacles are called female, and the corresponding plugs male.
Defining characteristics:
Females produce ova, the larger gametes in a heterogamous reproduction system, while the smaller and usually motile gametes, the spermatozoa, are produced by males. Generally, a female cannot reproduce sexually without access to the gametes of a male, and vice versa, but in some species females can reproduce by themselves asexually, for example via parthenogenesis. Patterns of sexual reproduction include:
- isogamous species, with two or more mating types whose gametes are identical in form and behavior (but different at the molecular level);
- anisogamous species, with gametes of male and female types;
- oogamous species, which include humans, in which the female gamete is much larger than the male's and has no ability to move (oogamy is a form of anisogamy).
There is an argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Other than the defining difference in the type of gamete produced, differences between males and females in one lineage cannot always be predicted by differences in another. The concept is not limited to animals; egg cells are produced by chytrids, diatoms, water moulds and land plants, among others. In land plants, female and male designate not only the egg- and sperm-producing organisms and structures, but also the structures of the sporophytes that give rise to male and female plants.
Females across species:
Species that are divided into females and males are classified as gonochoric in animals, as dioecious in seed plants and as dioicous in cryptogams. In some species, female and hermaphrodite individuals may coexist, a sexual system termed gynodioecy. In a few species, female individuals coexist with males and simultaneous hermaphrodites; this sexual system is called trioecy. In Thor manningi (a species of shrimp), females coexist with males and protandrous hermaphrodites.
Females across species:
Mammalian female A distinguishing characteristic of the class Mammalia is the presence of mammary glands. Mammary glands are modified sweat glands that produce milk, which is used to feed the young for some time after birth. Only mammals produce milk. Mammary glands are obvious in humans, because the female human body stores large amounts of fatty tissue near the nipples, resulting in prominent breasts. Mammary glands are present in all mammals, although they are normally vestigial in males of the species. Most mammalian females have two copies of the X chromosome, while males have only one X and one smaller Y chromosome; some mammals, such as the platypus, have different combinations. One of the female's X chromosomes is randomly inactivated in each cell of placental mammals, while the paternally derived X is inactivated in marsupials. In birds and some reptiles, by contrast, it is the female which is heterozygous and carries a Z and a W chromosome, while the male carries two Z chromosomes. In mammals, females can also have XXX or X0 chromosome complements. Mammalian females bear live young, with the exception of monotreme females, which lay eggs. Some non-mammalian species, such as guppies, have analogous reproductive structures; and some other non-mammals, such as some sharks, also bear live young. In sex determination for mammals, female is the default sex, while in the poplar genus Populus the default is male.
Sex determination:
The sex of a particular organism may be determined by genetic or environmental factors, or may naturally change during the course of an organism's life.
Sex determination:
Genetic determination The sex of most mammals, including humans, is genetically determined by the XY sex-determination system where males have X and Y (as opposed to X and X) sex chromosomes. During reproduction, the male contributes either an X sperm or a Y sperm, while the female always contributes an X egg. A Y sperm and an X egg produce a male, while an X sperm and an X egg produce a female. The ZW sex-determination system, where males have ZZ (as opposed to ZW) sex chromosomes, is found in birds, reptiles and some insects and other organisms.
Sex determination:
Environmental determination The young of some species develop into one sex or the other depending on local environmental conditions, e.g. the sex of crocodilians is influenced by the temperature of their eggs. Other species (such as the goby) can transform, as adults, from one sex to the other in response to local reproductive conditions (such as a brief shortage of males).
Evolution:
The question of how females evolved is mainly a question of why males evolved. The first organisms reproduced asexually, usually via binary fission, wherein a cell splits itself in half. From a strict numbers perspective, a species that is half males and half females can produce only half the offspring an asexual population can, because only the females bear offspring. Being male can also carry significant costs, such as the flashy sexual displays of animals (for example, big antlers or colorful feathers), or the need of a plant to produce an outsized amount of pollen in order to get a chance to fertilize a female. Yet despite the costs of being male, there must be some advantage to the process. The advantages are explained by the evolution of anisogamy, which led to the evolution of male and female function. Before the evolution of anisogamy, mating types in a species were isogamous: the same size and both able to move, catalogued only as "+" or "-" types. In anisogamy, the mating cells are called gametes. The female gamete is larger than the male gamete, and usually immotile. Anisogamy remains poorly understood, as there is no fossil record of its emergence. Numerous theories exist as to why anisogamy emerged. Many share a common thread, in that larger female gametes are more likely to survive, and that smaller male gametes are more likely to find other gametes because they can travel faster. Current models often fail to account for why isogamy remains in a few species. Anisogamy appears to have evolved multiple times from isogamy; for example, female Volvocales (a type of green algae) evolved from the plus mating type. Although sexual reproduction emerged at least 1.2 billion years ago, the lack of anisogamous fossil records makes it hard to pinpoint when females evolved. Female sex organs (genitalia, in animals) have an extreme range of variation among species and even within species. The evolution of female genitalia remains poorly understood compared to that of male genitalia, reflecting a now-outdated belief that female genitalia are less varied than male genitalia, and thus less useful to study. The difficulty of reaching female genitalia has also complicated their study, though new 3D technology has made female genital study simpler. Genitalia evolve very quickly. There are three main hypotheses as to what drives female genital evolution: lock-and-key (genitals must fit together), cryptic female choice (females affect whether males can fertilize them), and sexual conflict (a sort of sexual arms race). There is also a hypothesis that female genital evolution is the result of pleiotropy, i.e. that genes affected by environmental conditions such as low food supply also affect the genitals. This hypothesis is unlikely to apply to a significant number of species, but natural selection in general has some role in female genital evolution.
Symbol:
The symbol ♀ (Unicode: U+2640, Alt code: Alt+12), a circle with a small cross underneath, is commonly used to represent females. Joseph Justus Scaliger once speculated that the symbol was associated with Venus, goddess of beauty, because it resembles a bronze mirror with a handle, but modern scholars consider that fanciful, and the most established view is that the female and male symbols derive from contractions in Greek script of the Greek names of the planets Thouros (Mars) and Phosphoros (Venus). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Human sperm competition**
Human sperm competition:
Sperm competition is a form of post-copulatory sexual selection whereby the ejaculates of two or more males simultaneously compete, physically, to fertilize a single ovum. Sperm competition occurs between sperm from two or more rival males when they attempt to fertilize a female within a sufficiently short period of time. It results primarily from polyandrous mating systems, or from extra-pair copulations of females, which increase the chance of cuckoldry, in which a male raises a child that is not genetically related to him. Sperm competition among males has resulted in numerous physiological and psychological adaptations, including the relative size of the testes, the size of the sperm midpiece, prudent sperm allocation, and behaviors relating to sexual coercion. This is not without consequences: the production of large amounts of sperm is costly, and researchers have therefore predicted that males will produce larger amounts of semen when there is a perceived or known increase in sperm competition risk. Sperm competition is not exclusive to humans, and has been studied extensively in other primates, as well as throughout much of the animal kingdom. The differing rates of sperm competition among other primates indicate that sperm competition is highest in primates with multi-male breeding systems, and lowest in primates with single-male breeding systems. Compared to other animals, and primates in particular, humans show low-to-intermediate levels of sperm competition, suggesting that humans have a history of little selection pressure for sperm competition.
Physiological adaptations to sperm competition:
Physiological evidence, including testis size relative to body weight and the volume of sperm in ejaculations, suggests that humans have experienced a low-to-intermediate level of selection pressure for sperm competition in their evolutionary history. Nevertheless, there is a large body of research that explores the physiological adaptations males do have for sperm competition.
Testis size and body weight Evidence suggests that, among the great apes, relative testis size is associated with the breeding system of each primate species. In humans, testis size relative to body weight is intermediate between that of primates with single-male breeding systems (such as gorillas) and that of promiscuous primates (such as chimpanzees), indicating an evolutionary history of moderate selection pressures for sperm competition.
Physiological adaptations to sperm competition:
Ejaculate volume The volume of sperm in ejaculates scales proportionately with testis size and, consistent with the intermediate weight of human males' testes, ejaculate volume is also intermediate between primates with high and low levels of sperm competition. Human males, like other animals, exhibit prudent sperm allocation, a physiological response to the high cost of sperm production as it relates to the actual or perceived risk of sperm competition at each insemination. In situations where the risk of sperm competition is higher, males will allocate more energy to producing higher ejaculate volumes. Studies have found that the volume of sperm does vary between ejaculates, and that sperm produced during copulatory ejaculations are of a higher quality (younger, more motile, etc.) than those sperm produced during masturbatory ejaculates or nocturnal emissions. This suggests that, at least within males, there is evidence of allocation of higher quality sperm production for copulatory purposes. Researchers have suggested that males produce more and higher quality sperm after spending time apart from their partners, implying that males are responding to an increased risk of sperm competition, although this view has been challenged in recent years. It is also possible that males may be producing larger volumes of sperm in response to actions from their partners, or it may be that males who produce larger volumes of sperm may be more likely to spend more time away from their partners.
Physiological adaptations to sperm competition:
Size of sperm midpiece The size of the sperm midpiece is determined in part by the volume of mitochondria in the sperm. Sperm midpiece size is tied to sperm competition in that individuals with a larger midpiece will have more mitochondria, and will thus have more highly motile sperm than those with a lower volume of mitochondria. Among humans, as with relative testis size and ejaculate volume, the size of the sperm midpiece is small compared to other primates, and is most similar in size to that of primates with low levels of sperm competition, supporting the theory that humans have had an evolutionary history of intermediate levels of sperm competition.
Physiological adaptations to sperm competition:
Penis anatomy Several features of the anatomy of the human penis are proposed to serve as adaptations to sperm competition, including the length of the penis and the shape of the penile head. By weight, the relative penis size of human males is similar to that of chimpanzees, although the overall length of the human penis is the largest among primates. It has been suggested by some authors that penis size is constrained by the size of the female reproductive tract (which, in turn, is likely constrained by the availability of space in the female body), and that longer penises may have an advantage in depositing semen closer to the female cervix. Other studies have suggested that over our evolutionary history the penis would have been conspicuous without clothing, and may have evolved its increased size due to female preference for longer penises. The shape of the glans and coronal ridge of the penis may function to displace semen from rival males, although displacement of semen is only observed when the penis is inserted at least 75% of its length into the vagina. After allegations of female infidelity or separation from their partner, both men and women report that men thrust the penis more deeply and more quickly into the vagina at the couple's next copulation.
Psychological adaptations to sperm competition:
In addition to physiological adaptations to sperm competition, men also have been shown to have psychological adaptations, including certain copulatory behaviors, behaviors relating to sexual coercion, investment in relationships, sexual arousal, performance of oral sex, and mate choice.
Psychological adaptations to sperm competition:
Copulatory behaviors Human males have several physiological adaptations that have evolved in response to pressures from sperm competition, such as the size and shape of the penis. In addition to the anatomy of male sex organs, men have several evolved copulatory behaviors that are proposed to displace rival male semen. For example, males who are at a higher risk of sperm competition (defined as having female partners with high reproductive value, such as being younger and physically attractive) engaged more frequently in semen-displacing behaviors during sexual intercourse than men who were at a lower risk of sperm competition. These semen-displacing behaviors include deeper and quicker thrusts, increased duration of intercourse, and an increased number of thrusts.
Psychological adaptations to sperm competition:
Sexual coercion and relationship investment Men who are more invested in a relationship have more to lose if their female partner engages in extra-pair copulations. This has led to the development of the cuckoldry risk hypothesis, which states that men who are at a higher risk of sperm competition due to female partner infidelity are more likely to sexually coerce their partners: by threatening to end the relationship, by making their partners feel obligated to have sex, and through other emotional manipulations, in addition to physically forcing partners to have sex. In forensic cases, it has been found that men who rape their partners experienced cuckoldry risk beforehand. Additionally, men who spend more time away from their partners are not only more likely to sexually coerce their partners, but also more likely to report that their partner is more attractive (and that other men find her more attractive), in addition to reporting a greater interest in engaging in intercourse with her. A man who perceives that his female partner spends time with other men is also more likely to report that she is more interested in copulating with him.
Psychological adaptations to sperm competition:
Sexual arousal and sexual fantasies Sperm competition has also been proposed to influence men’s sexual fantasies and arousal. Some researchers have found that much pornography contains scenarios with high sperm competition, and it is more common to find pornography depicting one woman with multiple men than it is to find pornography depicting one man with multiple women, although this may be confounded by the fact that it is less expensive to hire male pornographic actors than female actors. Kilgallon and Simmons documented that men produce a higher percentage of motile sperm in their ejaculates after viewing sexually explicit images of two men and one woman (a sperm competition risk) than after viewing sexually explicit images of three women, likely indicating a response to an active risk of sperm competition.
Psychological adaptations to sperm competition:
Oral sex It is unknown whether or not men’s willingness and desire to perform oral sex on their female partners is an adaptation. Oral sex is not unique to humans, and it is proposed to serve a number of purposes relating to sperm competition risk. Some researchers have proposed that oral sex may serve to assess the reproductive health of a female partner and her fertility status, to increase her arousal, thereby reducing the likelihood of her having extra-pair copulations, to increase the arousal of the male to increase his semen quality, and thereby increase the likelihood of insemination, or to detect the presence of semen of other males in the vagina.
Psychological adaptations to sperm competition:
Mate choice Sperm competition risk also influences males' choice of female partners. Men prefer to have as low of a sperm competition risk as possible, and they therefore tend to choose short-term sexual partners who are not in a sexual relationship with other men. Women who are perceived as the most desirable short-term sexual partners are those who are not in a committed relationship and who also do not have casual sexual partners, while women who are in a committed long-term relationship are the least desirable partners. Following the above, women who are at an intermediate risk of sperm competition, that is women who are not in a long-term relationship but who do engage in short-term mating or have casual sexual partners, are considered intermediate in desirability for short-term sexual partners.
Effects of sperm competition on human mating strategies:
High levels of sperm competition among the great apes are generally seen among species with polyandrous (multimale) mating systems, while lower rates of competition are seen in species with monogamous or polygynous (multifemale) mating systems. Humans have low to intermediate levels of sperm competition, as seen by humans’ intermediate relative testis size, ejaculate volume, and sperm midpiece size, compared with other primates. This suggests that there has been a relatively high degree of monogamous or polygynous behavior throughout our evolutionary history. Additionally, the lack of a baculum in humans suggests a history of monogamous mating systems.
Effects of sperm competition on human mating strategies:
Males have the goal of reducing sperm competition by selecting women who are at low risk for sperm competition as the most ideal mating partners.
Intra-ejaculate sperm competition:
Noticing that sperm in a mixed sample tends to clump together, making it less mobile, and to have a high mortality rate, reproductive biologist Robin Baker, formerly of the University of Manchester, proposed about a decade ago that some mammals, including humans, manufacture "killer" sperm whose only function is to attack foreign spermatozoa, destroying themselves in the process.
Intra-ejaculate sperm competition:
To test this idea, reproductive biologist Harry Moore and evolutionary ecologist Tim Birkhead of the University of Sheffield in the U.K. mixed sperm samples from 15 men in various combinations and checked for how the cells moved, clumped together, or developed abnormal shapes. "These are very simple experiments, but we tried to mimic what goes on in the reproductive tract," Moore says. The team found no excess casualties from any particular donor or other evidence of warring sperm, they report in the 7 December Proceedings of the Royal Society. "The kamikaze sperm hypothesis is probably not a mechanism in human sperm competition," says Birkhead.
Intra-ejaculate sperm competition:
The findings are "the nail in the coffin for the kamikaze hypothesis," says Michael Bedford, a reproductive biologist at Cornell University's Weill Medical Center in New York City. He says he had never given the idea much credence.
Female responses to sperm competition:
A survey of 67 studies reporting nonpaternity suggests that, for men with high paternity confidence, rates of nonpaternity are (excluding studies of unknown methodology) typically 1.9%, substantially less than the rates of 10% or higher cited by many researchers. Cuckolded fathers are rare in human populations. "Media and popular scientific literature often claim that many alleged fathers are being cuckolded into raising children that biologically are not their own," said Maarten Larmuseau of KU Leuven in Belgium. "Surprisingly, the estimated rates within human populations are quite low, around 1 or 2 percent. Reliable data on contemporary populations that have become available over the last decade, mainly as supplementary results of medical studies, don't support the notion that one in 10 people don't know who their 'real' fathers are." The findings suggest that any potential advantage of cheating in order to have children that are perhaps better endowed is offset, for the majority of women, by the potential costs, the researchers say. Those costs likely include spousal aggression, divorce, or reduced paternal investment by the social partner or his relatives. "The observed low cuckoldry rates in contemporary and past human populations clearly challenge the well-known idea that women routinely 'shop around' for good genes by engaging in extra-pair copulations to obtain genetic benefits for their children," Larmuseau said. Women are loyal to men who are good providers. "With DNA tests now widely available, so-called paternity fraud has become a staple of talk shows and TV crime series. Aggrieved men accuse tearful wives who profess their fidelity, only to have their extramarital affairs brought to light... The rule of thumb seems to be that males of higher socioeconomic status, and from more conventionally bourgeois societies, have greater warranted paternity confidence. Lower paternity confidence among those who are the principals for sensational media shouldn't be surprising then."
Sperm competition in other primates:
The relative size of human male testes is comparable to that of primates with single-male (monogamous or polygynous) mating systems, such as gorillas and orangutans, and smaller than that of primates with polyandrous mating systems, such as bonobos and chimpanzees. While it is possible that the large testis size of some primates is due to seasonal breeding (and consequently a need to fertilize a large number of females in a short period of time), evidence suggests that primate groups with multi-male mating systems have significantly larger testes than primate groups with single-male mating systems, regardless of whether the species breeds seasonally. Similarly, primate species with high levels of sperm competition also have larger ejaculate volumes and larger sperm midpieces. Unlike all other Old World great apes and monkeys, humans do not have a baculum (penile bone). Dixson demonstrated that increased baculum length is associated with primates that live in dispersed groups, while small bacula are found in primates that live in pairs. Primates with multi-male mating systems tend to have larger bacula, prolonged post-ejaculatory intromission, and larger relative testis size. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Numerical continuation**
Numerical continuation:
Numerical continuation is a method of computing approximate solutions of a system of parameterized nonlinear equations, F(u, λ) = 0.
The parameter λ is usually a real scalar, and the solution u an n-vector. For a fixed parameter value λ, F(·, λ) maps Euclidean n-space into itself.
Often the original mapping F is from a Banach space into itself, and the Euclidean n-space is a finite-dimensional Banach space.
A steady state, or fixed point, of a parameterized family of flows or maps is of this form, and by discretizing trajectories of a flow or iterating a map, periodic orbits and heteroclinic orbits can also be posed as solutions of F(u, λ) = 0.
Other forms:
In some nonlinear systems, parameters are explicit. In others they are implicit, and the system of nonlinear equations is written F(u) = 0, where u is an n-vector and its image F(u) is an (n−1)-vector.
This formulation, without an explicit parameter space, is not usually suitable for the formulations in the following sections, because they refer to parameterized autonomous nonlinear dynamical systems of the form u′ = F(u, λ).
However, in an algebraic system there is no distinction between unknowns u and the parameters.
Periodic motions:
A periodic motion is a closed curve in phase space. That is, for some period T, u′ = F(u, λ), u(0) = u(T).
The textbook example of a periodic motion is the undamped pendulum.
Periodic motions:
If the phase space is periodic in one or more coordinates, say u(t) = u(t + Ω), with Ω a vector of periods, then there is a second kind of periodic motion, defined by u′ = F(u, λ), u(0) = u(T) + N·Ω for every integer N. The first step in writing an implicit system for a periodic motion is to move the period T from the boundary conditions to the ODE: u′ = T F(u, λ), u(0) = u(1) + N·Ω.
Periodic motions:
The second step is to add an additional equation, a phase constraint, that can be thought of as determining the period. This is necessary because any solution of the above boundary value problem can be shifted in time by an arbitrary amount (time does not appear in the defining equations—the dynamical system is called autonomous).
There are several choices for the phase constraint. If u0(t) is a known periodic orbit at a parameter value λ0 near λ, then Poincaré used the condition (u(0) − u0(0)) · u0′(0) = 0,
which states that u lies in a plane which is orthogonal to the tangent vector of the closed curve. This plane is called a Poincaré section.
For a general problem a better phase constraint is an integral constraint introduced by Eusebius Doedel, which chooses the phase so that the distance between the known and unknown orbits is minimized: ∫₀¹ (u(t) − u0(t)) · u0′(t) dt = 0.
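To make this boundary-value formulation concrete, the sketch below sets up a periodic orbit of the Hopf normal form with the rescaled period T carried as a free parameter and a simple Poincaré-style phase condition y(0) = 0. The choice of system, the use of scipy.integrate.solve_bvp, and all names and tolerances are illustrative assumptions, not something prescribed by the text above.

```python
# Minimal sketch (illustrative assumptions): a periodic orbit of the Hopf
# normal form, with time rescaled so the period is 1 and the true period T
# treated as an unknown parameter, as described above.
import numpy as np
from scipy.integrate import solve_bvp

lam = 1.0  # parameter value; the limit cycle has radius sqrt(lam)

def rhs(s, u, p):
    # Rescaled ODE u' = T * F(u, lam); p[0] is the unknown period T.
    T = p[0]
    x, y = u
    r2 = x**2 + y**2
    return np.vstack((T * (lam * x - y - x * r2),
                      T * (x + lam * y - y * r2)))

def bc(ua, ub, p):
    # Periodicity u(0) = u(1), plus a phase condition y(0) = 0 pinning
    # down the arbitrary time shift of the autonomous system.
    return np.array([ua[0] - ub[0], ua[1] - ub[1], ua[1]])

s = np.linspace(0.0, 1.0, 50)
guess = np.vstack((np.cos(2 * np.pi * s), np.sin(2 * np.pi * s)))
sol = solve_bvp(rhs, bc, s, guess, p=[6.0])   # rough initial guess for T
print("converged:", sol.success, "period T =", float(sol.p[0]))  # T -> 2*pi
```

Wrapped in a continuation loop, the orbit computed at one value of λ would serve as the initial guess at the next, exactly as in the algorithms below.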
Definitions:
Solution component: A solution component Γ(u0, λ0) of the nonlinear system F is a set of points (u, λ) which satisfy F(u, λ) = 0 and are connected to the initial solution (u0, λ0) by a path of solutions (u(s), λ(s)) for which (u(0), λ(0)) = (u0, λ0), (u(1), λ(1)) = (u, λ) and F(u(s), λ(s)) = 0.
Numerical continuation: A numerical continuation is an algorithm which takes as input a system of parametrized nonlinear equations and an initial solution (u0, λ0) with F(u0, λ0) = 0, and produces a set of points on the solution component Γ(u0, λ0).
Regular point: A regular point of F is a point (u, λ) at which the Jacobian of F is full rank (n). Near a regular point the solution component is an isolated curve passing through the regular point (the implicit function theorem). In the figure above the point (u0, λ0) is a regular point.
Definitions:
Singular point A singular point of F is a point (u,λ) at which the Jacobian of F is not full rank.
Near a singular point the solution component may not be an isolated curve passing through that point. The local structure is determined by higher derivatives of F. In the figure above the point where the two blue curves cross is a singular point.
In general solution components Γ are branched curves. The branch points are singular points. Finding the solution curves leaving a singular point is called branch switching, and uses techniques from bifurcation theory (singularity theory, catastrophe theory).
For finite-dimensional systems (as defined above) the Lyapunov-Schmidt decomposition may be used to produce two systems to which the Implicit Function Theorem applies. The Lyapunov-Schmidt decomposition uses the restriction of the system to the complement of the null space of the Jacobian and the range of the Jacobian.
Definitions:
If the columns of the matrix Φ are an orthonormal basis for the null space of J = [Fx Fλ] and the columns of the matrix Ψ are an orthonormal basis for the left null space of J, then the system F(x, λ) = 0 can be rewritten as

(I − ΨΨᵀ) F(x + Φξ + η) = 0,
Ψᵀ F(x + Φξ + η) = 0,

where η lies in the complement of the null space of J (Φᵀη = 0). In the first equation, which is parametrized by the null space of the Jacobian (ξ), the Jacobian with respect to η is non-singular. So the implicit function theorem states that there is a mapping η(ξ) such that η(0) = 0 and (I − ΨΨᵀ) F(x + Φξ + η(ξ)) = 0. The second equation (with η(ξ) substituted) is called the bifurcation equation (though it may be a system of equations).
Definitions:
The bifurcation equation has a Taylor expansion which lacks the constant and linear terms. By scaling the equations and the null space of the Jacobian of the original system, a system can be found with non-singular Jacobian. The constant term in the Taylor series of the scaled bifurcation equation is called the algebraic bifurcation equation, and the implicit function theorem applied to the bifurcation equation states that for each isolated solution of the algebraic bifurcation equation there is a branch of solutions of the original problem which passes through the singular point.
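In computations, orthonormal bases Φ and Ψ for the null space and left null space of J can be read off from a singular value decomposition. A minimal sketch, with an assumed rank-deficient example matrix standing in for J:

```python
# Sketch: orthonormal bases for the null space (Phi) and left null space
# (Psi) of a rank-deficient Jacobian, via the SVD.
import numpy as np

J = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # assumed example: a 2 x 3 matrix of rank 1

U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10 * s[0]))

Phi = Vt[rank:].T    # columns: orthonormal basis of the null space of J
Psi = U[:, rank:]    # columns: orthonormal basis of the left null space
print(np.allclose(J @ Phi, 0.0), np.allclose(Psi.T @ J, 0.0))  # True True
```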
Definitions:
Another type of singular point is a turning point bifurcation, or saddle-node bifurcation, where the direction of the parameter λ reverses as the curve is followed. The red curve in the figure above illustrates a turning point.
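The distinction between a point where Fu is singular (which defeats iteration in u alone) and a genuinely singular point of the curve (where the full Jacobian loses rank) is easy to check numerically. A small sketch on the illustrative one-equation problem F(u, λ) = u² + λ² − 1, whose solution set is the unit circle (the example and all names are assumptions):

```python
# Sketch: rank of the full 1 x 2 Jacobian J = [F_u, F_lam] along the
# solution circle of F(u, lam) = u^2 + lam^2 - 1 = 0 (here n = 1).
import numpy as np

def jac(u, lam):
    return np.array([[2.0 * u, 2.0 * lam]])

for u, lam in [(1.0, 0.0), (0.6, 0.8), (0.0, 1.0)]:
    J = jac(u, lam)
    print((u, lam),
          "rank J =", np.linalg.matrix_rank(J),
          "F_u singular:", bool(np.isclose(J[0, 0], 0.0)))
# At (0, 1), F_u = 0, a turning point in lam, yet rank J = 1 (full),
# so the solution curve itself is regular there.
```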
Particular algorithms:
Natural parameter continuation Most methods of solution of nonlinear systems of equations are iterative methods. For a particular parameter value λ0, a mapping is repeatedly applied to an initial guess u0. If the method converges and is consistent, then in the limit the iteration approaches a solution of F(u, λ0) = 0. Natural parameter continuation is a very simple adaptation of the iterative solver to a parametrized problem. The solution at one value of λ is used as the initial guess for the solution at λ + Δλ. With Δλ sufficiently small, the iteration applied to the initial guess should converge.
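A minimal sketch of natural parameter continuation on the same illustrative circle problem F(u, λ) = u² + λ² − 1, with plain Newton iteration standing in for the black-box solver (step sizes and names are assumptions):

```python
# Sketch: natural parameter continuation of F(u, lam) = u^2 + lam^2 - 1,
# starting from (u, lam) = (1, 0) and stepping lam toward the turning
# point at lam = 1.
import numpy as np

def F(u, lam):
    return u**2 + lam**2 - 1.0

def F_u(u, lam):
    return 2.0 * u

def newton(u, lam, tol=1e-12, maxit=25):
    # Black-box iterative solver: Newton's method in u at fixed lam.
    for _ in range(maxit):
        du = F(u, lam) / F_u(u, lam)
        u -= du
        if abs(du) < tol:
            return u
    raise RuntimeError("Newton did not converge")

u, lam, dlam = 1.0, 0.0, 0.05
branch = [(lam, u)]
while lam + dlam < 1.0:        # stop short of the turning point at lam = 1
    lam += dlam
    u = newton(u, lam)         # previous solution is the initial guess
    branch.append((lam, u))
print(branch[-1])              # near (0.95, 0.312); F_u -> 0 as lam -> 1
```

Past λ = 1 the branch turns around and Fu becomes singular, so this scheme fails there, which motivates the pseudo-arclength method below.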
Particular algorithms:
One advantage of natural parameter continuation is that it uses the solution method for the problem as a black box. All that is required is that an initial solution be given (some solvers used to always start at a fixed initial guess). There has been a lot of work in the area of large scale continuation on applying more sophisticated algorithms to black box solvers (see e.g. LOCA).
Particular algorithms:
However, natural parameter continuation fails at turning points, where the branch of solutions turns round. So for problems with turning points, a more sophisticated method such as pseudo-arclength continuation must be used (see below).
Simplicial or piecewise linear continuation Simplicial Continuation, or Piecewise Linear Continuation (Allgower and Georg) is based on three basic results.
Please see the article on piecewise linear continuation for the statements of these results and further details.
Particular algorithms:
With these two operations this continuation algorithm is easy to state (although of course an efficient implementation requires a more sophisticated approach; see [B1]). An initial simplex is assumed to be given, from a reference simplicial decomposition of Rn. The initial simplex must have at least one face which contains a zero of the unique linear interpolant on that face. The other faces of the simplex are then tested, and typically there will be one additional face with an interior zero. The initial simplex is then replaced by the simplex which lies across either face containing a zero, and the process is repeated.
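In the planar case (n = 2, where the simplices are triangles and the faces are edges), the face test reduces to looking for a sign change of the linear interpolant along each edge. A minimal sketch, with an illustrative F whose zero set is the unit circle (the example and all names are assumptions):

```python
# Sketch: which faces (edges) of a triangle carry a zero of the unique
# linear interpolant of a scalar F given at the vertices.
import numpy as np

def crossing_points(verts, vals):
    # verts: 3 x 2 array of triangle vertices; vals: F at those vertices.
    points = []
    for i, j in [(0, 1), (1, 2), (2, 0)]:
        fi, fj = vals[i], vals[j]
        if fi * fj < 0:                  # sign change => zero on this edge
            t = fi / (fi - fj)           # linear interpolation parameter
            points.append(verts[i] + t * (verts[j] - verts[i]))
    return points

F = lambda p: p[0]**2 + p[1]**2 - 1.0    # illustrative F: unit circle
tri = np.array([[0.8, 0.0], [1.2, 0.0], [0.8, 0.7]])
print(crossing_points(tri, [F(v) for v in tri]))
# A generic simplex is crossed on exactly two faces; the algorithm steps
# across the "exit" face into the neighbouring simplex and repeats.
```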
Particular algorithms:
References: Allgower and Georg [B1] provide a crisp, clear description of the algorithm.
Particular algorithms:
Pseudo-arclength continuation This method is based on the observation that the "ideal" parameterization of a curve is arclength. Pseudo-arclength is an approximation of the arclength in the tangent space of the curve. The resulting modified natural continuation method makes a step in pseudo-arclength (rather than λ ). The iterative solver is required to find a point at the given pseudo-arclength, which requires appending an additional constraint (the pseudo-arclength constraint) to the n by n+1 Jacobian. It produces a square Jacobian, and if the stepsize is sufficiently small the modified Jacobian is full rank.
Particular algorithms:
Pseudo-arclength continuation was independently developed by Edward Riks and Gerald Wempner for finite element applications in the late 1960s, and published in journals in the early 1970s by H.B. Keller. A detailed account of these early developments is provided in the textbook by M. A. Crisfield: Nonlinear Finite Element Analysis of Solids and Structures, Vol 1: Basic Concepts, Wiley, 1991. Crisfield was one of the most active developers of this class of methods, which are by now standard procedures of commercial nonlinear finite element programs.
Particular algorithms:
The algorithm is a predictor-corrector method. The prediction step finds the point (in R^(n+1)) which is a step Δs along the tangent vector at the current point. The corrector is usually Newton's method, or some variant, to solve the nonlinear system

F(u, λ) = 0,
u̇0ᵀ(u − u0) + λ̇0(λ − λ0) = Δs,

where (u̇0, λ̇0) is the tangent vector at (u0, λ0). The Jacobian of this system is the bordered matrix

[ Fu  Fλ ]
[ u̇ᵀ  λ̇ ].

At regular points, where the unmodified Jacobian is full rank, the tangent vector spans the null space of the top row of this new Jacobian. Appending the tangent vector as the last row can be seen as determining the coefficient of the null vector in the general solution of the Newton system (particular solution plus an arbitrary multiple of the null vector).
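A compact sketch of this predictor-corrector loop on the same illustrative circle problem; unlike the natural continuation sketch above, it passes smoothly through the turning points at (u, λ) = (0, ±1) (step size, tolerances, and names are assumptions):

```python
# Sketch: pseudo-arclength continuation of F(u, lam) = u^2 + lam^2 - 1.
# Predictor: step ds along the tangent; corrector: Newton on the bordered
# system [F = 0; tangent . (x - x0) - ds = 0], with x = (u, lam).
import numpy as np

def F(x):
    u, lam = x
    return u**2 + lam**2 - 1.0

def J(x):
    u, lam = x
    return np.array([2.0 * u, 2.0 * lam])   # row [F_u, F_lam]

def tangent(x, prev=None):
    # Unit vector spanning the null space of the 1 x 2 Jacobian, oriented
    # to keep marching in the same direction as the previous tangent.
    fu, fl = J(x)
    t = np.array([-fl, fu]) / np.hypot(fu, fl)
    if prev is not None and np.dot(t, prev) < 0:
        t = -t
    return t

ds = 0.1
x = np.array([1.0, 0.0])                     # a known point on the branch
t = tangent(x)
for _ in range(40):
    x0 = x.copy()
    t = tangent(x0, t)
    x = x0 + ds * t                          # predictor
    for _ in range(20):                      # Newton corrector
        rhs = np.array([F(x), np.dot(t, x - x0) - ds])
        A = np.vstack((J(x), t))             # bordered 2 x 2 Jacobian
        x = x - np.linalg.solve(A, rhs)
        if np.max(np.abs(rhs)) < 1e-12:
            break
print(x)  # the branch has rounded the turning points at (0, +/-1)
```

Note that the bordered 2 x 2 matrix stays non-singular at the turning points, which is precisely why the method does not fail there.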
Particular algorithms:
Gauss–Newton continuation This method is a variant of pseudo-arclength continuation. Instead of using the tangent at the initial point in the arclength constraint, the tangent at the current solution is used. This is equivalent to using the pseudo-inverse of the Jacobian in Newton's method, and allows longer steps to be made. [B17]
Continuation in more than one parameter:
The parameter λ in the algorithms described above is a real scalar. Most physical and design problems generally have many more than one parameter. Higher-dimensional continuation refers to the case when λ is a k-vector.
The same terminology applies. A regular solution is a solution at which the Jacobian is full rank (n). A singular solution is a solution at which the Jacobian is less than full rank.
A regular solution lies on a k-dimensional surface, which can be parameterized by a point in the tangent space (the null space of the Jacobian). This is again a straightforward application of the Implicit Function Theorem.
Applications of numerical continuation techniques:
Numerical continuation techniques have found a great degree of acceptance in the study of chaotic dynamical systems and of various other systems which belong to the realm of catastrophe theory. The reason for such usage stems from the fact that many nonlinear dynamical systems behave in a deterministic and predictable manner within a range of the parameters which appear in the equations of the system. However, beyond a certain parameter value the system starts behaving chaotically, and hence it becomes necessary to follow the parameter in order to determine when the system starts being unpredictable and what exactly (theoretically) makes the system become unstable.
Applications of numerical continuation techniques:
Analysis of parameter continuation can lead to more insights about stable/critical point bifurcations. Study of saddle-node, transcritical, pitchfork, period doubling, Hopf, and secondary Hopf (Neimark) bifurcations of stable solutions allows for a theoretical discussion of the circumstances and occurrences which arise at the critical points. Parameter continuation also gives a more dependable system for analyzing a dynamical system, as it is more stable than more interactive, time-stepped numerical solutions, especially in cases where the dynamical system is prone to blow-up at certain parameter values (or combinations of values for multiple parameters). It is extremely insightful as to the presence of stable solutions (attracting or repelling) in the study of nonlinear partial differential equations, where time stepping in the form of the Crank-Nicolson algorithm is extremely time consuming as well as unstable in cases of nonlinear growth of the dependent variables in the system. The study of turbulence is another field where numerical continuation techniques have been used to study the advent of turbulence in a system starting at low Reynolds numbers. Also, research using these techniques has provided the possibility of finding stable manifolds and bifurcations to invariant tori in the case of the restricted three-body problem in Newtonian gravity, and has given interesting and deep insights into the behaviour of systems such as the Lorenz equations.
Software:
(Under construction.) See also the SIAM Activity Group on Dynamical Systems' software list: http://www.dynamicalsystems.org/sw/sw/
- AUTO: computation of the solutions of two-point boundary value problems (TPBVPs) with integral constraints. Available on SourceForge: https://sourceforge.net/projects/auto-07p/
- HOMCONT: computation of homoclinic and heteroclinic orbits. Included in AUTO.
- MATCONT: MATLAB toolbox for numerical continuation and bifurcation [1]. Available on SourceForge.
- DDEBIFTOOL: computation of solutions of delay differential equations. A MATLAB package, available from K. U. Leuven.
- PyCont: a Python toolbox for numerical continuation and bifurcation, with native Python algorithms for fixed point continuation and a sophisticated interface to AUTO for other types of problem. Included as part of PyDSTool.
- CANDYS/QA: available from the Universität Potsdam [A16]
- MANPAK: available from Netlib [A15]
- PDDE-CONT: http://seis.bris.ac.uk/~rs1909/pdde/
- multifario: http://multifario.sourceforge.net/
- LOCA: https://trilinos.org/packages/nox-and-loca/
- DSTool
- GAIO
- OSCILL8: a dynamical systems tool that allows a user to explore the high-dimensional parameter space of nonlinear ODEs using bifurcation analytic techniques. Available from SourceForge.
- MANLAB: computation of equilibrium, periodic and quasi-periodic solutions of differential equations, using Fourier series (harmonic balance method) developments of the solution and Taylor series (asymptotic numerical method) developments of the solution branch. Available from LMA Marseille.
- BifurcationKit.jl: a Julia package that aims at performing automatic bifurcation analysis of large-dimensional equations F(u, λ) = 0, λ ∈ ℝ, by taking advantage of iterative methods, sparse formulations and specific hardware (e.g. GPU). [2]
Examples:
This problem, of finding the points which F maps into the origin, appears in computer graphics as the problem of drawing contour maps (n = 2) or isosurfaces (n = 3). The contour with value h is the set of all solution components of F − h = 0 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Structural level**
Structural level:
In Schenkerian analysis, a structural level is a representation of a piece of music at a different level of abstraction, with levels typically including foreground, middleground, and background. According to Schenker, musical form is "an energy transformation, as a transformation of the forces that flow from background to foreground through the levels." For example, while details such as melodic notes exist at the lowest structural level, the foreground, the background contains the fundamental structure, the most basic structural level of all tonal music, representing the digression from and necessary return to the tonic that motivates musical form. In a specific piece it may be conceived of as the opening in the tonic and the return to the tonic with a perfect authentic cadence (V-I) after the development of sonata allegro form.
Structural level:
Strata is the translation given by John Rothgeb for Schichten ("Levels") as described by Oswald Jonas in his Introduction to the Theory of Heinrich Schenker. This translation did not gain wide acceptance in modern Schenkerian literature and the translation of Schichten as "levels" usually has been preferred. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Klosneuvirus**
Klosneuvirus:
Klosneuvirus (KNV, also KloV) is a new type of giant virus found by the analysis of low-complexity metagenomes from a wastewater treatment plant in Klosterneuburg, Austria. It has a 1.57-Mb genome coding an unusually high number of genes typically found in cellular organisms, including aminoacyl transfer RNA synthetases with specificities for 19 different amino acids, over 10 translation factors and several tRNA-modifying enzymes. Klosneuvirus, Indivirus, Catovirus and Hokovirus are part of a group of giant viruses denoted as Klosneuviruses or Klosneuvirinae, a proposed subfamily of the Mimiviridae.
Klosneuvirus:
Species in this clade include Bodo saltans virus, which infects the kinetoplastid Bodo saltans. The phylogenetic tree topology of Mimiviridae is still under discussion. As Klosneuviruses are related to Mimivirus, it was proposed to put them all together into a subfamily Megavirinae. Other authors (CNS 2018) prefer to place Klosneuviruses together with Cafeteria roenbergensis virus (CroV) and Bodo saltans virus (BsV) into a tentative subfamily called Aquavirinae. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Autosomal dominant Charcot-Marie-Tooth disease type 2 with giant axons**
Autosomal dominant Charcot-Marie-Tooth disease type 2 with giant axons:
Autosomal dominant Charcot–Marie–Tooth disease type 2 with giant axons is a rare subtype of hereditary motor and sensory neuropathy of the axons which is characterized by symptoms similar to those from Charcot–Marie–Tooth disease and autosomal dominant inheritance.
Signs and symptoms:
This condition is characterized by the wasting and consequent weakness of the muscles in the distal extremities of the limbs with accompanying loss of sensory sensation in those extremities, early-onset high foot arch, and swelling of the nerve axons with neurofilament accumulation. Additional findings include gait anomalies, muscle cramps, toe anomalies, mild cardiomyopathy, and hyporeflexia or areflexia.
Complications:
Although this condition usually does not progress, the cardiomyopathy that accompanies some cases can be fatal.
Causes:
This condition is caused by autosomal dominant missense mutations in the DCAF8 gene, located on the long arm of chromosome 1. The mutation was found in a family previously reported in the medical literature (by Vogel et al.) that was later examined by Klein et al. Through in-vitro functional expression assays done in HEK293 cells, it was found that the mutant R317C protein significantly decreases the binding of DCAF8 to DDB1, which negatively impacts the recruitment of the E3 ubiquitin ligase complex.
Diagnosis:
This condition can be diagnosed mainly by methods such as genetic testing, physical examination and nerve biopsy.
Epidemiology:
According to OMIM, more than 10 cases (probably closer to 20) have been described in the medical literature: 9 members of a 5-generation German family, and an Italian family whose number of affected members is not specified. Its estimated prevalence is less than 1 case per million people. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Weathering steel**
Weathering steel:
Weathering steel, often referred to by the genericised trademark COR-TEN steel and sometimes written without the hyphen as corten steel, is a group of steel alloys which were developed to eliminate the need for painting, and form a stable rust-like appearance after several years' exposure to weather.
U.S. Steel (USS) holds the registered trademark on the name COR-TEN. The name COR-TEN refers to the two distinguishing properties of this type of steel: corrosion resistance and tensile strength. Although USS sold its discrete plate business to International Steel Group (now ArcelorMittal) in 2003, it still sells COR-TEN branded material in strip-mill plate and sheet forms.
The original COR-TEN received the standard designation A242 (COR-TEN A) from the ASTM International standards group. Newer ASTM grades are A588 (COR-TEN B) and A606 for thin sheet. All alloys are in common production and use.
The surface oxidation of weathering steel takes six months, but surface treatments can accelerate it to as little as one hour.
History:
The history of weathering steels began in the US in the 1910s, when steels alloyed with different amounts of copper were exposed to the elements; the research continued into the 1920s, and around 1926 it was discovered that phosphorus content also helps with corrosion resistance. In 1933 the United States Steel Corporation decided to commercialize the results of their studies and patented a steel with exceptional mechanical resistance, primarily for use in railroad hopper cars for the handling of heavy bulk loads including coal, metal ores, other mineral products and grain. The controlled corrosion for which this material is now best known was a welcome benefit discovered soon after, prompting USS to apply the trademarked name Cor-Ten. Because of its inherent toughness, this steel is still used extensively for bulk transport, intermodal shipping containers and bulk storage. Railroad passenger cars were also being built with Cor-Ten, albeit painted, by Pullman-Standard for the Southern Pacific from 1936, continuing through commuter coaches for the Rock Island Line in 1949.
History:
In 1964, the Moorestown Interchange was built over the New Jersey Turnpike at milepost 37.02. This overpass is believed to be the first highway structure application of weathering steel. Other states, including Iowa, Ohio, and Michigan, followed soon after, as did the University of York Footbridge in the United Kingdom in 1967. Since then, the practice of using weathering steel in bridges has expanded to many countries.
Properties:
Weathering refers to the chemical composition of these steels, allowing them to exhibit increased resistance to atmospheric corrosion compared to other steels. This is because the steel forms a protective layer on its surface under the influence of the weather.
The corrosion-retarding effect of the protective layer is produced by the particular distribution and concentration of alloying elements in it. It is not yet clear how exactly the patina formation differs from ordinary rusting, but it is established that drying of the wetted surface is necessary and that copper is the most important alloying element.
The layer protecting the surface develops and regenerates continuously when subjected to the influence of the weather. In other words, the steel is allowed to rust in order to form the protective coating.
The mechanical properties of weathering steels depend on the alloy and on the thickness of the material.
Properties:
ASTM A242 The original A242 alloy has a yield strength of 50 kilopounds per square inch (340 MPa) and ultimate tensile strength of 70 ksi (480 MPa) for light-medium rolled shapes and plates up to 0.75 inches (19 mm) thick. It has yield strength of 46 ksi (320 MPa) and ultimate strength of 67 ksi (460 MPa) for medium weight rolled shapes and plates from 0.75–1 inch (19–25 mm) thick. The thickest rolled sections and plates – from 1.5–4 in (38–102 mm) thick have yield strength of 42 ksi (290 MPa) and ultimate strength of 63 ksi (430 MPa). ASTM A242 is available in Type 1 and Type 2. Both have different applications based on the thickness. Type 1 is often used in housing structures, construction industry and freight cars. The Type 2 steel, which is also called Corten B, is used primarily in urban furnishing, passenger ships or cranes.
Properties:
ASTM A588 A588 has a yield strength of at least 50 ksi (340 MPa), and ultimate tensile strength of 70 ksi (480 MPa) for all rolled shapes and plate thicknesses up to 4 in (100 mm) thick. Plates from 4–5 in (102–127 mm) have yield strength at least 46 ksi (320 MPa) and ultimate tensile strength at least 67 ksi (460 MPa), and plates from 5–8 in (127–203 mm) thick have yield strength at least 42 ksi (290 MPa) and ultimate tensile strength at least 63 ksi (430 MPa).
Uses:
Weathering steel is popularly used in outdoor sculptures for its distressed antique appearance. One example is the large Chicago Picasso sculpture, which stands in the plaza of the Daley Center Courthouse in Chicago, a building that is itself constructed of weathering steel. Other examples include Barnett Newman's Broken Obelisk; several of Robert Indiana's Numbers sculptures and his original Love sculpture; numerous works by Richard Serra; the Alamo sculpture in Manhattan, NY; the Barclays Center, Brooklyn, New York; the Angel of the North, Gateshead; and Broadcasting Tower at Leeds Beckett University. It is also used in bridge and other large structural applications such as the New River Gorge Bridge, the second span of the Newburgh–Beacon Bridge (1980), and the creation of the Australian Centre for Contemporary Art (ACCA) and MONA.
Uses:
It is widely used in marine transportation for the construction of intermodal containers, and as visible sheet piling along recently widened sections of London's M25 motorway.
Uses:
The first use of weathering steel for architectural applications was the John Deere World Headquarters in Moline, Illinois. The building was designed by architect Eero Saarinen, and completed in 1964. The main buildings of Odense University (built 1971–1976), designed by Knud Holscher and Jørgen Vesterholt, are clad in weathering steel, earning them the nickname Rustenborg (Danish for "rusty fortress"). In 1977, Robert Indiana created a Hebrew version of the Love sculpture made from weathering steel using the four-letter word ahava (אהבה, "love" in Hebrew) for the Israel Museum Art Garden in Jerusalem, Israel. In Denmark, all masts for supporting the catenary on electrified railways are made of weathering steel for aesthetic reasons.
Uses:
Weathering steel was used in 1971 for the Highliner electric cars built by the St. Louis Car Company for Illinois Central Railroad. The use of weathering steel was seen as a cost-cutting move in comparison with the contemporary railcar standard of stainless steel. A subsequent order in 1979 was built to similar specifications, including weathering steel bodies, by Bombardier. The cars were painted, a standard practice for weathering steel railcars. The durability of weathering steel did not live up to expectations, with rust holes appearing in the railcars. Painting may have contributed to the problem, as painted weathering steel is no more corrosion-resistant than conventional steel, because the protective patina will not form in time to prevent corrosion over a localized area of attack such as a small paint failure. These cars were retired by 2016. Weathering steel was used to build the exterior of Barclays Center, made up of 12,000 pre-weathered steel panels engineered by ASI Limited & SHoP Construction. The New York Times says of the material: "While it can look suspiciously unfinished to the casual observer, it has many fans in the world of art and architecture."
Disadvantages:
Using weathering steel in construction presents several challenges. Ensuring that weld-points weather at the same rate as the other materials may require special welding techniques or material. Weathering steel is not rust-proof in itself: if water is allowed to accumulate on the surface of the steel, it will experience a higher corrosion rate, so provision for drainage must be made. Weathering steel is sensitive to humid subtropical climates, and in such environments it is possible that the protective patina may not stabilize but instead continue to corrode. For example, the former Omni Coliseum, built in 1972 in Atlanta, never stopped rusting, and eventually large holes appeared in the structure. This was a major factor in the decision to demolish it just 25 years after construction. The same thing can happen in environments laden with sea salt. Hawaii's Aloha Stadium, built in 1975, is one example of this. Weathering steel's normal surface weathering can also lead to rust stains on nearby surfaces.
Disadvantages:
The rate at which some weathering steels form the desired patina varies strongly with the presence of atmospheric pollutants which catalyze corrosion. While the process is generally successful in large urban centers, the weathering rate is much slower in more rural environments. Uris Hall, a social sciences building on Cornell University's main campus in Ithaca, a small town in Upstate New York, did not achieve the predicted surface finish on its Bethlehem Steel Mayari-R weathering steel framing within the predicted time. Rainwater runoff from the slowly rusting steel stained the numerous large windows and increased maintenance costs. Corrosion without the formation of a protective layer apparently led to the need for emergency structural reinforcement and galvanizing in 1974, less than two years after opening. The U.S. Steel Tower in Pittsburgh, Pennsylvania, was constructed by U.S. Steel in part to showcase COR-TEN steel. The initial weathering of the material resulted in a discoloration, known as "bleeding" or "runoff", of the surrounding city sidewalks and nearby buildings. Once weathering was complete, the corporation orchestrated a cleanup effort to remove the markings. A few of the nearby sidewalks were left uncleaned, and remain a rust color. This problem has been reduced in newer formulations of weathering steel. Staining can be prevented if the structure can be designed so that water does not drain from the steel onto concrete where stains would be visible. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**John L. Synge Award**
John L. Synge Award:
The John L. Synge Award is an award given by the Royal Society of Canada for outstanding research in any branch of the mathematical sciences. It was created in 1986 and is given at irregular intervals. The award is named in honor of John Lighton Synge.
Winners:
Source: Royal Society of Canada
2022 – Kevin Costello, FRS
2021 – Paul McNicholas
2020 – Christian Genest, FRSC
2018 – Bojan Mohar, FRSC
2014 – Bálint Virág
2008 – Henri Darmon, FRSC
2006 – Stephen Cook, FRSC
1999 – George A. Elliott, FRSC
1996 – Joel Feldman, FRSC
1993 – Israel Michael Sigal, FRSC
1987 – James G. Arthur, FRSC | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Eight disciplines problem solving**
Eight disciplines problem solving:
Eight Disciplines Methodology (8D) is a method or model developed at Ford Motor Company used to approach and resolve problems, typically employed by quality engineers or other professionals. Focused on product and process improvement, its purpose is to identify, correct, and eliminate recurring problems. It establishes a permanent corrective action based on statistical analysis of the problem and on the origin of the problem by determining the root causes. Although it originally comprised eight stages, or 'disciplines', it was later augmented by an initial planning stage. 8D follows the logic of the PDCA cycle. The disciplines are: D0: Preparation and Emergency Response Actions: Plan for solving the problem and determine the prerequisites. Provide emergency response actions.
Eight disciplines problem solving:
D1: Use a Team: Establish a team of people with product/process knowledge. Teammates provide new perspectives and different ideas when it comes to problem solving.
D2: Describe the Problem: Specify the problem by identifying in quantifiable terms the who, what, where, when, why, how, and how many (5W2H) for the problem.
D3: Develop Interim Containment Plan: Define and implement containment actions to isolate the problem from any customer.
D4: Determine and Verify Root Causes and Escape Points: Identify all applicable causes that could explain why the problem has occurred. Also identify why the problem was not noticed at the time it occurred. All causes shall be verified or proved. One can use five whys or Ishikawa diagrams to map causes against the effect or problem identified.
D5: Verify Permanent Corrections (PCs) for Problem that will resolve the problem for the customer: Using pre-production programs, quantitatively confirm that the selected correction will resolve the problem. (Verify that the correction will actually solve the problem).
D6: Define and Implement Corrective Actions: Define and implement the best corrective actions. Also, validate corrective actions with empirical evidence of improvement.
D7: Prevent Recurrence / System Problems: Modify the management systems, operation systems, practices, and procedures to prevent recurrence of this and similar problems.
D8: Congratulate the Main Contributors to your Team: Recognize the collective efforts of the team. The team needs to be formally thanked by the organization. 8Ds has become a standard in the automotive, assembly, and other industries that require a thorough, structured problem-solving process using a team approach.
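To make the flow concrete, here is a minimal sketch of the disciplines as an ordered checklist; the class, enum, and method names are illustrative inventions, not part of Ford's G8D or any standard 8D tooling.

```java
import java.util.EnumMap;
import java.util.Map;

// Illustrative only: a tiny tracker for an 8D report with one entry per
// discipline D0-D8, enforcing that disciplines are completed in order.
public class EightDReport {
    enum Discipline { D0, D1, D2, D3, D4, D5, D6, D7, D8 }

    private final Map<Discipline, String> findings = new EnumMap<>(Discipline.class);

    public void complete(Discipline d, String summary) {
        if (d.ordinal() > 0) {
            Discipline previous = Discipline.values()[d.ordinal() - 1];
            if (!findings.containsKey(previous)) {
                throw new IllegalStateException("Complete " + previous + " first");
            }
        }
        findings.put(d, summary);
    }

    public boolean isClosed() {
        return findings.size() == Discipline.values().length;
    }
}
```

Enforcing the ordering mirrors the methodology's insistence that, for example, permanent corrections (D5) are not chosen before root causes and escape points (D4) have been verified.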
Ford Motor Company's team-oriented problem solving:
The executives of the Powertrain Organization (transmissions, chassis, engines) wanted a methodology where teams (design engineering, manufacturing engineering, and production) could work on recurring chronic problems. In 1986, the assignment was given to develop a manual and a subsequent course that would achieve a new approach to solving identified engineering design and manufacturing problems. The manual for this methodology was documented and defined in Team Oriented Problem Solving (TOPS), first published in 1987. The manual and subsequent course material were piloted at Ford World Headquarters in Dearborn, Michigan. Ford refers to their current variant as G8D (Global 8D). The Ford 8Ds manual is extensive and covers chapter by chapter how to go about addressing, quantifying, and resolving engineering issues. It begins with a cross-functional team and concludes with a successful demonstrated resolution of the problem. Containment actions may or may not be needed based on where the problem occurred in the life cycle of the product.
Usage:
Many disciplines are typically involved in the "8Ds" methodology. The tools used can be found in textbooks and reference materials used by quality assurance professionals. For example, an "Is/Is Not" worksheet is a common tool employed at D2, and Ishikawa, or "fishbone," diagrams and "5-why analysis" are common tools employed at step D4.
Usage:
In the late 1990s, Ford developed a revised version of the 8D process that they call "Global 8D" (G8D), which is the current global standard for Ford and many other companies in the automotive supply chain. The major revisions to the process are as follows: Addition of a D0 (D-Zero) step as a gateway to the process. At D0, the team documents the symptoms that initiated the effort along with any emergency response actions (ERAs) that were taken before formal initiation of the G8D. D0 also incorporates standard assessing questions meant to determine whether a full G8D is required. The assessing questions are meant to ensure that in a world of limited problem-solving resources, the efforts required for a full team-based problem-solving effort are limited to those problems that warrant these resources.
Usage:
Addition of the notion of escape points to D4 through D6. An 'escape point' is the earliest control point in the control system following the root cause of a problem that should have detected that problem but failed to do so. The idea here is to consider not only the root cause, but also what went wrong with the control system in allowing this problem to escape. Global 8D requires the team to identify and verify an escape point at D4. Then, through D5 and D6, the process requires the team to choose, verify, implement, and validate permanent corrective actions to address the escape point. Recently, the 8D process has been employed significantly outside the auto industry. As part of lean initiatives and continuous-improvement processes it is employed extensively in the food manufacturing, health care, and high-tech manufacturing industries.
Benefits:
The benefits of the 8D methodology include effective approaches to finding a root cause, developing proper actions to eliminate root causes, and implementing the permanent corrective action. The 8D methodology also helps to explore the control systems that allowed the problem to escape. The Escape Point is studied for the purpose of improving the ability of the Control System to detect the failure or cause when and if it should occur again.
Benefits:
Finally, the Prevention Loop explores the systems that permitted the condition that allowed the Failure and Cause Mechanism to exist in the first place.
Prerequisites:
The 8D process requires training in 8D problem solving as well as in appropriate data collection and analysis tools such as Pareto charts, fishbone diagrams, and process maps.
Problem solving tools:
The following tools can be used within 8D:
Ishikawa diagrams, also known as cause-and-effect or fishbone diagrams
Pareto charts or Pareto diagrams
5 Whys
5W and 2H (who, what, where, when, why, how, how many or how much)
Statistical process control
Scatter plots
Design of experiments
Check sheets
Histograms
FMEA
Flowcharts or process maps
Background of common corrective actions to dispose of nonconforming items:
The 8D methodology was first described in a Ford manual in 1987. The manual describes the eight-step methodology to address chronic product and process problems. The 8Ds included several concepts of effective problem solving, including taking corrective actions and containing nonconforming items. These two steps have been very common in most manufacturing facilities, including government and military installations. In 1974, the U.S. Department of Defense (DOD) released “MIL-STD 1520 Corrective Action and Disposition System for Nonconforming Material”. This 13-page standard defines the establishment of corrective actions and the containment of nonconforming material or items. It is focused on inspection for defects and disposing of them. The standard was officially cancelled in 1995, but these concepts of corrective action and containment of defectives were also common to Ford Motor Company, a major supplier to the government in World War II. Corrective actions and containment of poor-quality parts were part of the manual and course for the automotive industry and are well known to many companies. Ford's 60-page manual covers details associated with each step in their 8D problem-solving manual and the actions to take to deal with identified problems.
Background of common corrective actions to dispose of nonconforming items:
Military usage The exact history of the 8D method remains disputed, as many publications and websites state that it originates from the US military. Indeed, MIL-STD-1520C outlined a set of requirements for contractors on how they should organize themselves with respect to non-conforming materials. Developed in 1974 and cancelled in February 1995 as part of the Perry memo, it is best compared to the ISO 9001 standard that currently exists, as it expresses the same philosophy. The military standard does outline some aspects that are in the 8D method; however, it does not provide the same structure that the 8D methodology offers. Given that the Ford Motor Company played an instrumental role in producing army vehicles during the Second World War and in the decades after, it could very well be the case that MIL-STD-1520C stood as a model for today's 8D method.
Relationship between 8D and FMEA:
FMEA (failure mode and effects analysis) is a tool generally used in the planning of product or process design. The relationships between 8D and FMEA are outlined below: The problem statements and descriptions are sometimes linked between both documents. An 8D can utilize pre-brainstormed information from an FMEA to assist in looking for potential problems.
Possible causes in a FMEA can immediately be used to jump start 8D Fishbone or Ishikawa diagrams. Brainstorming information that is already known is not a good use of time or resources.
Data and brainstorming collected during an 8D can be placed into an FMEA for future planning of new product or process quality. This allows an FMEA to consider actual failures, occurring as failure modes and causes, becoming more effective and complete.
Relationship between 8D and FMEA:
The design or process controls in an FMEA can be used in verifying the root cause and Permanent Corrective Action in an 8D. The FMEA and 8D should reconcile each failure and cause by cross-documenting failure modes, problem statements and possible causes. Each FMEA can be used as a database of possible causes of failure as an 8D is developed. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Channel 51**
Channel 51:
Channel 51 may refer to several television stations: Channel 51 (New Zealand), a regional television station based in Napier, Hawke's Bay, New Zealand
Canada:
The following television stations broadcast on digital channel 51 (UHF frequencies covering 693.25–697.75 MHz) in Canada:
CBWFT-DT in Winnipeg, Manitoba
CHCH-DT-2 in London, Ontario
The following television stations operate on virtual channel 51 in Canada:
CHCH-DT-2 in London, Ontario
CKEM-DT in Edmonton, Alberta | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Comparison of Java and Android API**
Comparison of Java and Android API:
This article compares the application programming interfaces (APIs) and virtual machines (VMs) of the programming language Java and operating system Android.
Comparison of Java and Android API:
While most Android applications are written in a Java-like language, there are some differences between the Java API and the Android API, and Android does not run Java bytecode on a traditional Java virtual machine (JVM). Instead, it uses the Dalvik virtual machine in older versions of Android, and the Android Runtime (ART) in newer versions; ART compiles the same code that Dalvik runs into Executable and Linkable Format (ELF) executables containing machine code.
Comparison of Java and Android API:
Java bytecode in Java Archive (JAR) files is not executed by Android devices. Instead, Java classes are compiled into a proprietary bytecode format and run on Dalvik (or a compiled version thereof with the newer ART), a specialized virtual machine (VM) designed for Android. Unlike Java VMs, which are stack machines (stack-based architecture), the Dalvik VM is a register machine (register-based architecture).
Comparison of Java and Android API:
Dalvik has some traits that differentiate it from other standard VMs: The VM was designed to use less space.
The constant pool has been modified to use only 32-bit indexes to simplify the interpreter.
Comparison of Java and Android API:
Standard Java bytecode executes 8-bit stack instructions. Local variables must be copied to or from the operand stack by separate instructions. Dalvik instead uses its own 16-bit instruction set that works directly on local variables. The local variable is commonly picked by a 4-bit virtual register field. Because the bytecode loaded by the Dalvik virtual machine is not Java bytecode, and due to the way Dalvik loads classes, it is impossible to load library packages as jar files. A different procedure must be used to load Android libraries, in which the content of the underlying dex file must be copied into the application's private internal storage area before it is loaded.
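The difference is easiest to see on a single statement. In the sketch below the instruction sequences in the comments are simplified, plausible renderings for illustration; the exact output of a real javac/dx toolchain may differ.

```java
public class AddDemo {
    static int add(int a, int b) {
        int c = a + b; // the statement compared below
        return c;
    }
    // Simplified instruction sequences for "int c = a + b", assuming
    // a, b, c occupy local slots 1, 2, 3 (JVM) or virtual registers
    // v1, v2, v0 (Dalvik):
    //
    // JVM (stack machine):           Dalvik (register machine):
    //   iload_1   // push a            add-int v0, v1, v2  // v0 = v1 + v2
    //   iload_2   // push b
    //   iadd      // pop a, b; push sum
    //   istore_3  // pop the sum into c
}
```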
System properties:
As is the case for the Java SE class System, the Android System class allows retrieving system properties. However, some mandatory properties defined with the Java virtual machine have no meaning or a different meaning on Android. For example: the java.version property returns 0 because it is not used on Android.
java.specification.version invariably returns 0.9 independently of the version of Android used.
java.class.version invariably returns 50 independently of the version of Android used.
user.dir has a different meaning on Android.
user.home and user.name properties do not exist on Android.
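A short sketch (our own, with an invented class name) queries the properties listed above; run on a desktop JVM it prints the usual values, while on Android it would report the Android-specific values described:

```java
// Query the properties discussed above. On Android, java.version is
// reported as "0", java.specification.version as "0.9", and
// java.class.version as "50", regardless of the Android version.
public class PropertyCheck {
    public static void main(String[] args) {
        String[] keys = {
            "java.version", "java.specification.version",
            "java.class.version", "user.dir"
        };
        for (String key : keys) {
            // user.home and user.name are omitted: on Android they do not exist.
            System.out.println(key + " = " + System.getProperty(key));
        }
    }
}
```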
Class library:
Current versions of Android use the latest Java language and its libraries (but not full graphical user interface (GUI) frameworks), rather than the Apache Harmony Java implementation that older versions used. Java 8 source code that works on the latest version of Android can be made to work on older versions of Android.
java.lang package By default, the output streams System.out and System.err do not output anything, and developers are encouraged to use the Log class, which logs strings to the LogCat tool. This has changed at least from Honeycomb, and they now output to the log console as well.
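For example, a minimal sketch using the standard android.util.Log API (the class and tag names are ours):

```java
import android.util.Log;

public class Greeter {
    private static final String TAG = "Greeter"; // tag shown in LogCat

    public void greet() {
        System.out.println("hello");     // may be silently discarded on older Android versions
        Log.d(TAG, "hello via LogCat");  // always visible in LogCat
    }
}
```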
Graphics and widget library Android does not use the Abstract Window Toolkit nor the Swing library. User interfaces are built using View objects. Android uses a framework similar to Swing, based on Views rather than JComponents. However, Android widgets are not JavaBeans: the Android application Context must be provided to the widget at creation.
Look and feel The Android widget library does not support a pluggable look-and-feel architecture. The look and feel of Android widgets must be embedded in the widgets. However, a limited ability exists to set styles and themes for an application.
Layout manager Contrary to Swing, where layout managers can be applied to any container widget, Android layout behavior is encoded in the containers.
java.beans package Android includes only a small subset of the java.beans package (PropertyChangeEvent and related classes). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kepler-277b**
Kepler-277b:
Kepler-277b (also known by its Kepler Objects of Interest designation KOI-1215.01) is the second most massive and third-largest rocky planet ever discovered, with a mass close to that of Saturn. Discovered in 2014 by the Kepler Space Telescope, Kepler-277b is a sub-Neptune sized exoplanet with a very high mass and density for an object of its radius, suggesting a composition made mainly of rock and iron. Along with its sister planet, Kepler-277c, the planet's mass was determined using transit-timing variations (TTVs).
Characteristics:
Size and temperature Kepler-277b was detected using the transit method and TTVs, allowing for both its mass and radius to be determined to some level. It is approximately 2.92 REarth, between the size of Earth and Neptune. At that radius, most planets should be gaseous Mini-Neptunes with no solid surface. However, the mass of Kepler-277b is extremely high for its size. Transit-timing variations indicate a planetary mass of about 87.3 MEarth, comparable to Saturn's mass at 95.16 MEarth. The planet has a density of approximately 19.3 g/cm3 and about 10.4 times the surface gravity of Earth. Such a high density for an object of this size implies that, like its sister planet, Kepler-277b is an enormous rock-based planet. It is currently the second most massive and third largest terrestrial planet ever discovered, behind Kepler-277c in radius and PSR J1719−1438 b in both radius and mass. Due to its proximity to its host star, Kepler-277b is quite hot with an equilibrium temperature of about 924 K (651 °C; 1,204 °F), hot enough to melt certain metals.
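The quoted figures are mutually consistent, as a back-of-the-envelope check shows. The snippet below is our own illustrative calculation (class and variable names are invented), scaling density as M/R³ and surface gravity as M/R² relative to Earth:

```java
public class Kepler277bCheck {
    public static void main(String[] args) {
        double mass = 87.3;          // planet mass, Earth masses (from TTVs)
        double radius = 2.92;        // planet radius, Earth radii (from transits)
        double earthDensity = 5.51;  // mean density of Earth, g/cm^3

        double density = earthDensity * mass / Math.pow(radius, 3);
        double gravity = mass / (radius * radius); // in Earth gravities

        System.out.printf("density ~ %.1f g/cm^3%n", density); // ~ 19.3
        System.out.printf("gravity ~ %.1f g%n", gravity);      // ~ 10.2 (the quoted 10.4 reflects rounding)
    }
}
```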
Characteristics:
Internal structure and composition Models of Kepler-277b's internal structure suggest that it has a very large iron core with an estimated radius of 2.435 REarth. The core predominantly consists of an allotrope of iron with a face-centered cubic (FCC) crystalline structure. At the innermost region of Kepler-277b's core, where pressures reach as high as 37.52 terapascals, iron exists in body-centered-tetragonal (BCT) and body-centered cubic (BCC) crystalline structures. Kepler-277b has a relatively thin silicate mantle in comparison to its core. The mantle of Kepler-277b is thought to be predominantly composed of ultrahigh-pressure phases of magnesium silicates (MgSiO3). The uppermost mantle of Kepler-277b is thought to consist of olivine, wadsleyite, and ringwoodite, while the lower part of Kepler-277b's upper mantle consists of silicate perovskite and post-perovskite.
Characteristics:
Orbit Kepler-277b orbits close to its host star, with one orbit lasting 17.324 days. Its semi-major axis, or average distance from the parent object, is about 0.136 AU. For comparison, the planet Mercury in the Solar System takes 88 days to orbit at a distance of 0.38 AU. At this distance, Kepler-277b is very hot and most likely tidally locked to its host star. It is close to a 2:1 resonance with Kepler-277c, which orbits at an average distance of about 0.209 AU.
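The near 2:1 resonance follows from Kepler's third law (P² ∝ a³) applied to the semi-major axes quoted above; the snippet below is an illustrative check with invented names:

```java
public class ResonanceCheck {
    public static void main(String[] args) {
        double aB = 0.136, aC = 0.209; // semi-major axes, AU
        double pB = 17.324;            // orbital period of Kepler-277b, days

        // Kepler's third law around the same star: P scales as a^(3/2).
        double pC = pB * Math.pow(aC / aB, 1.5);
        System.out.printf("P_c ~ %.1f days, ratio ~ %.2f : 1%n", pC, pC / pB);
        // prints roughly 33 days, ratio ~ 1.9 : 1 -- close to 2:1
    }
}
```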
Characteristics:
Host star The parent star Kepler-277 is a large yellow star. It is 1.69 R☉ and 1.12 M☉, with a temperature of 5946 K, a metallicity of -0.315 [Fe/H], and an unknown age. For comparison, the Sun has a temperature of 5778 K, a metallicity of 0.00 [Fe/H], and an age of about 4.5 billion years. The large radius in comparison to its mass and temperature suggests that Kepler-277 could be a subgiant star. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**LAMMPS**
LAMMPS:
Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a molecular dynamics program from Sandia National Laboratories. LAMMPS makes use of the Message Passing Interface (MPI) for parallel communication and is free and open-source software, distributed under the terms of the GNU General Public License. LAMMPS was originally developed under a Cooperative Research and Development Agreement (CRADA) between two laboratories from the United States Department of Energy and three other laboratories from private sector firms. As of 2016, it is maintained and distributed by researchers at the Sandia National Laboratories and Temple University.
Features:
For computing efficiency, LAMMPS uses neighbor lists (Verlet lists) to keep track of nearby particles. The lists are optimized for systems with particles that repel at short distances, so that the local density of particles never grows too large. On parallel computers, LAMMPS uses spatial-decomposition techniques to partition the simulation domain into small 3d sub-domains, one of which is assigned to each processor. Processors communicate and store ghost atom information for atoms that border their subdomain. LAMMPS is most efficient (in a parallel computing sense) for systems whose particles fill a 3D rectangular box with approximately uniform density.
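A minimal sketch of the neighbor-list idea, with invented names (LAMMPS itself bins atoms into cells to avoid the O(N²) pair loop shown here): each particle records its neighbors within the force cutoff plus a "skin" distance, so the list only needs rebuilding after atoms have moved roughly half the skin.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only, not LAMMPS code: build a half neighbor list, storing
// each pair once, for particles whose positions are given as {x, y, z}.
public class VerletList {
    public static List<List<Integer>> build(double[][] pos, double cutoff, double skin) {
        double r2 = (cutoff + skin) * (cutoff + skin);
        int n = pos.length;
        List<List<Integer>> neighbors = new ArrayList<>();
        for (int i = 0; i < n; i++) neighbors.add(new ArrayList<>());
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double dx = pos[i][0] - pos[j][0];
                double dy = pos[i][1] - pos[j][1];
                double dz = pos[i][2] - pos[j][2];
                if (dx * dx + dy * dy + dz * dz < r2) {
                    neighbors.get(i).add(j); // pair (i, j) stored once on i
                }
            }
        }
        return neighbors;
    }
}
```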
Features:
LAMMPS also allows for coupled spin and molecular dynamics in an accelerated fashion. LAMMPS is coupled to many analysis tools and engines as well. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sea interferometry**
Sea interferometry:
Sea interferometry, also known as sea-cliff interferometry, is a form of radio astronomy that uses radio waves reflected off the sea to produce an interference pattern. It is the radio wave analogue to Lloyd's mirror. The technique was invented and exploited in Australia between 1945 and 1948.
Process:
A radio detecting antenna is placed on top of a cliff, where it detects radio waves coming directly from the source together with radio waves reflected off the water surface. The two sets of waves combine to form an interference pattern such as that produced by two separate aerials. The reflected wavefront travels an additional distance 2h sin(i) before reaching the detector, where h and i are the height of the cliff and the inclination (or altitude angle) of the incoming wavefront respectively. The reflection acts as a second aerial located twice the height of the cliff below the first. Sea interferometers are drift instruments; that is, they are fixed and their pointing direction changes with the rotation of the Earth.
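As a worked illustration (the cliff height and frequency below are example values of our own choosing, not those of a specific instrument): reflection off the sea adds a half-wavelength phase flip, so interference maxima occur where 2h sin(i) = (m + ½)λ.

```java
// Illustrative fringe positions for a sea interferometer; all inputs are
// assumed example values. Reflection off the sea flips the phase by half
// a wavelength, so maxima satisfy 2*h*sin(i) = (m + 0.5) * lambda.
public class SeaFringes {
    public static void main(String[] args) {
        double h = 80.0;      // cliff height, metres (assumed)
        double lambda = 1.5;  // wavelength, metres (200 MHz)
        for (int m = 0; m < 3; m++) {
            double sinI = (m + 0.5) * lambda / (2 * h);
            System.out.printf("maximum %d at altitude %.2f deg%n",
                              m, Math.toDegrees(Math.asin(sinI)));
        }
    }
}
```

The fringes are a fraction of a degree apart, which is why the technique resolves structure far finer than the single aerial could on its own.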
Process:
The interference patterns for a sea interferometer commence sharply as soon as the source rises above the horizon, instead of fading in gradually as for a normal interferometer. Since it consists of just one detector, there is no need for connecting cables or preamplifiers. A sea interferometer also has double the sensitivity of a pair of detectors set up at the same separation. Sea interferometry greatly increases the resolving power of the instrument compared to the single aerial used alone.
Data quality:
The quality of data obtained by a sea interferometer is affected by a number of factors. Wind waves on the water surface and variable atmospheric refraction adversely affect the signal, and the curvature of Earth must be taken into account. These difficulties can be overcome by observing for extended periods, and calibrating the instrument on sources of known position.
Discoveries:
Among the discoveries made using sea interferometry are that sunspots emit strong radio waves and that the source of radio wave emission from Cygnus A is small (less than 8 arcminutes in diameter). The technique also discovered six new sources including Centaurus A. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cotangent sheaf**
Cotangent sheaf:
In algebraic geometry, given a morphism f: X → S of schemes, the cotangent sheaf on X is the sheaf of OX-modules ΩX/S that represents (or classifies) S-derivations in the following sense: for any OX-module F, there is an isomorphism HomOX(ΩX/S, F) ≅ DerS(OX, F) that depends naturally on F. In other words, the cotangent sheaf is characterized by the universal property: there is a differential d: OX → ΩX/S such that any S-derivation D: OX → F factors as D = α ∘ d for some α: ΩX/S → F. In the case X and S are affine schemes, the above definition means that ΩX/S is the module of Kähler differentials. The standard way to construct a cotangent sheaf (e.g., Hartshorne, Ch II. § 8) is through a diagonal morphism (which amounts to gluing modules of Kähler differentials on affine charts to get the globally-defined cotangent sheaf.) The dual module of the cotangent sheaf on a scheme X is called the tangent sheaf on X and is sometimes denoted by ΘX. There are two important exact sequences: If S → T is a morphism of schemes, then there is an exact sequence f∗ΩS/T → ΩX/T → ΩX/S → 0.
Cotangent sheaf:
If Z is a closed subscheme of X with ideal sheaf I, then there is an exact sequence I/I² → ΩX/S ⊗OX OZ → ΩZ/S → 0.
The cotangent sheaf is closely related to smoothness of a variety or scheme. For example, an algebraic variety is smooth of dimension n if and only if ΩX is a locally free sheaf of rank n.
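For reference, the universal property and the two exact sequences above, written out in standard LaTeX notation (following Hartshorne, Ch. II § 8):

```latex
% Universal property of the cotangent sheaf:
\operatorname{Hom}_{\mathcal{O}_X}(\Omega_{X/S}, F) \cong \operatorname{Der}_S(\mathcal{O}_X, F)

% Relative cotangent sequence, for morphisms X \to S \to T:
f^{*}\Omega_{S/T} \longrightarrow \Omega_{X/T} \longrightarrow \Omega_{X/S} \longrightarrow 0

% Conormal sequence, for a closed subscheme Z \subset X with ideal sheaf I:
I/I^{2} \longrightarrow \Omega_{X/S} \otimes_{\mathcal{O}_X} \mathcal{O}_Z
  \longrightarrow \Omega_{Z/S} \longrightarrow 0
```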
Construction through a diagonal morphism:
Let f: X → S be a morphism of schemes as in the introduction and Δ: X → X ×S X the diagonal morphism. Then the image of Δ is locally closed; i.e., closed in some open subset W of X ×S X (the image is closed if and only if f is separated). Let I be the ideal sheaf of Δ(X) in W. One then puts: ΩX/S = Δ∗(I/I²) and checks this sheaf of modules satisfies the required universal property of a cotangent sheaf (Hartshorne, Ch II. Remark 8.9.2). The construction shows in particular that the cotangent sheaf is quasi-coherent. It is coherent if S is Noetherian and f is of finite type.
Construction through a diagonal morphism:
The above definition means that the cotangent sheaf on X is the restriction to X of the conormal sheaf to the diagonal embedding of X over S.
Relation to a tautological line bundle:
The cotangent sheaf on a projective space is related to the tautological line bundle O(-1) by the following exact sequence: writing PRn for the projective space over a ring R, 0 → ΩPRn/R → O(-1)⊕(n+1) → O → 0.
(See also Chern class#Complex projective space.)
Cotangent stack:
For this notion, see § 1 of A. Beilinson and V. Drinfeld, Quantization of Hitchin’s integrable system and Hecke eigensheaves [1] Archived 2015-01-05 at the Wayback Machine. There, the cotangent stack on an algebraic stack X is defined as the relative Spec of the symmetric algebra of the tangent sheaf on X. (Note: in general, if E is a locally free sheaf of finite rank, Spec Sym(Eˇ) is the algebraic vector bundle corresponding to E.) See also: Hitchin fibration (the cotangent stack of Bun G(X) is the total space of the Hitchin fibration.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mark J. Lewis**
Mark J. Lewis:
Dr. Mark J. Lewis is a senior American aerospace and defense executive with special expertise in hypersonics. He is currently the Executive Director of the National Defense Industrial Association's Emerging Technologies Institute, following his role in the second half of 2020 as the acting US Deputy Under Secretary of Defense for Research and Engineering, and before that the Director of Defense Research and Engineering for Modernization. He was the Chief Scientist of the U.S. Air Force, Washington, D.C. from 2004 to 2008 and was the longest-serving Chief Scientist in Air Force history. He served as chief scientific adviser to the Chief of Staff and Secretary of the Air Force, and provided assessments on a wide range of scientific and technical issues affecting the Air Force mission. In this role he identified and analyzed technical issues and brought them to attention of Air Force leaders, and interacted with other Air Staff principals, operational commanders, combatant commands, acquisition, and science & technology communities to address cross-organizational technical issues and solutions. His primary areas of focus included energy, sustainment, long-range strike technologies, advanced propulsion systems, and workforce development.
Mark J. Lewis:
He additionally interacted with other services and the Office of the Secretary of Defense on issues affecting the Air Force in-house technical enterprise. He also served on the Steering Committee and Senior Review Group of the U.S. Air Force Scientific Advisory Board (SAB), and was the principal science and technology representative of the Air Force to the civilian scientific and engineering community and to the public at large. He is currently a member of the U.S. Air Force Scientific Advisory Board and Director of the Science and Technology Policy Institute.
Biography:
Dr. Lewis joined the faculty of the Aerospace Engineering department of the Clark School at the University of Maryland in College Park in August 1988. He has conducted basic and applied research in, and taught many aspects of, hypersonic aerodynamics, advanced propulsion, and space vehicle design and optimization. His work has spanned the aerospace flight spectrum, from the analysis of conventional jet engines to entry into planetary atmospheres at hypervelocity speeds. His research activities have contributed directly to several NASA and Department of Defense programs in the areas of high-speed vehicle and spacecraft design. Lewis was the founder of the Center for Hypersonic Education and Research, and later the NASA-Air Force Constellation University Institutes Project.
Biography:
Dr. Lewis was formerly the Willis Young Jr. Professor and Chair of the Department of Aerospace Engineering at the University of Maryland at College Park (stepping down in April 2012). He was also formerly president of the American Institute of Aeronautics and Astronautics (AIAA). He is an author of over 280 technical publications and has served as the research advisor to more than 60 graduate students. He is active in national and international professional societies, with responsibilities for both research and educational policy and support. In addition, he has served on various advisory boards for the Air Force and DOD, including two terms on the Air Force Scientific Advisory Board, where he participated in several summer studies and chaired a number of science and technology reviews of the Air Force Research Laboratory. Dr. Lewis chaired a National Academies of Sciences, Engineering, and Medicine study on the threat of competitor nations developing hypersonic weapons, which is generally credited with triggering the Department of Defense's significant increase in spending in this area. He was previously on leave from the University of Maryland from 2012 to 2018, while serving as the Director of the Science and Technology Policy Institute at the Institute for Defense Analyses.
Biography:
In November 2019, Lewis rejoined the Department of Defense as Director of Defense Research and Engineering for Modernization. In July 2020, he also became acting Deputy Under Secretary of Defense for Research and Engineering after the resignation of Lisa Porter. At the Massachusetts Institute of Technology, Lewis received two Bachelor of Science degrees (in aeronautics and astronautics and in earth and planetary science), and Master of Science and Doctor of Science degrees in aeronautics and astronautics. He is an honorary fellow of the AIAA and a fellow of the American Society of Mechanical Engineers, a President's Fellow of the Royal Aeronautical Society, and was named an aerospace Laureate by the editors of Aviation Week and Space Technology magazine for his pioneering efforts in promoting research and development of high-speed flight.
Education:
1984 Bachelor of Science (BS), Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA
1984 Bachelor of Science (BS), Earth and Planetary Science, Massachusetts Institute of Technology, Cambridge, MA
1985 Master of Science (SM), Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA
1988 Doctor of Science (ScD), Massachusetts Institute of Technology, Cambridge, MA
Career chronology:
1988–1999, Assistant Professor, later Associate Professor of Aerospace Engineering, A. James Clark School of Engineering, University of Maryland, College Park
1999–2004, Professor and Associate Chair of Aerospace Engineering, A. James Clark School of Engineering, University of Maryland, College Park
2002–2004, Director, Space Vehicle Technology Institute, College Park, Md.
2004–2008, Chief Scientist of the U.S. Air Force, Washington, D.C.
2008–present, Professor of Aerospace Engineering, A. James Clark School of Engineering, University of Maryland, College Park
2009–2012, Chair of Aerospace Engineering, A. James Clark School of Engineering, University of Maryland, College Park
2009–2010, President-elect, American Institute of Aeronautics and Astronautics
2010–2011, President, American Institute of Aeronautics and Astronautics
Awards and honors:
1984 Henry Webb Salisbury Award, MIT
1984 Office of Naval Research Fellow
1989 E. Robert Kent Teaching Award
1992 A. James Clark Service Award
1994 National Capital Section Young Scientist/Engineer of the Year, American Institute of Aeronautics and Astronautics
1997 Aerospace Professor of the Year, University of Maryland
1998 Abe Zarem Award mentor, AIAA
2004 Meritorious Civilian Service Award
2004 Exceptional Civilian Service Award
2007 Aviation Week and Space Technology Laureate
IECEC/AIAA Lifetime Achievement Award
2014 AIAA Dryden Distinguished Lectureship in Research
2018 Air Force Association Theodore von Karman Award for the most outstanding contribution in the field of science and engineering
Professional memberships and associations:
National Institute of Aerospace (Fellow)
American Institute of Aeronautics and Astronautics (Fellow)
Royal Aeronautical Society (President's Fellow)
American Society of Mechanical Engineers (Fellow) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Polar wind**
Polar wind:
The polar wind or plasma fountain is a permanent outflow of plasma from the polar regions of Earth's magnetosphere, caused by the interaction between the solar wind and the Earth's atmosphere. The solar wind ionizes gas molecules in the upper atmosphere to such high energy that some of them reach escape velocity and pour into space. A considerable percentage of these ions remain bound inside Earth's magnetic field, where they form part of the radiation belts.
Polar wind:
The term was coined in 1968 in a pair of articles by Banks and Holzer and by Ian Axford. Since the process by which the ionospheric plasma flows away from the Earth along magnetic field lines is similar to the flow of solar plasma away from the sun's corona (the solar wind), Axford suggested the term "polar wind." The idea for the polar wind originated with the desire to solve the paradox of the terrestrial helium budget. This paradox consists of the fact that helium in the Earth's atmosphere seems to be produced (via radioactive decay of uranium and thorium) faster than it is lost by escaping from the upper atmosphere. The realization that some helium could be ionized, and therefore escape the earth along open magnetic field lines near the magnetic poles (the 'polar wind'), is one possible solution to the paradox.
Polar wind:
Further research came from the Retarding Ion Mass Spectrometer instrument on the Dynamics Explorer spacecraft, in the 1980s. Recently, the SCIFER sounding rocket was launched into the plasma heating region of the fountain. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Autophagy**
Autophagy:
Autophagy (or autophagocytosis; from the Ancient Greek αὐτόφαγος, autóphagos, meaning "self-devouring" and κύτος, kýtos, meaning "hollow") is the natural, conserved degradation of the cell that removes unnecessary or dysfunctional components through a lysosome-dependent regulated mechanism. It allows the orderly degradation and recycling of cellular components. Although initially characterized as a primordial degradation pathway induced to protect against starvation, it has become increasingly clear that autophagy also plays a major role in the homeostasis of non-starved cells. Defects in autophagy have been linked to various human diseases, including neurodegeneration and cancer, and interest in modulating autophagy as a potential treatment for these diseases has grown rapidly. Four forms of autophagy have been identified: macroautophagy, microautophagy, chaperone-mediated autophagy (CMA), and crinophagy. In macroautophagy (the most thoroughly researched form of autophagy), cytoplasmic components (like mitochondria) are targeted and isolated from the rest of the cell within a double-membrane vesicle known as an autophagosome, which, in time, fuses with an available lysosome, bringing its specialty process of waste management and disposal; and eventually the contents of the vesicle (now called an autolysosome) are degraded and recycled. In crinophagy (the least well-known and researched form of autophagy), unnecessary secretory granules are degraded and recycled. In disease, autophagy has been seen as an adaptive response to stress, promoting survival of the cell; but in other cases, it appears to promote cell death and morbidity. In the extreme case of starvation, the breakdown of cellular components promotes cellular survival by maintaining cellular energy levels.
Autophagy:
The word "autophagy" was in existence and frequently used from the middle of the 19th century. In its present usage, the term autophagy was coined by Belgian biochemist Christian de Duve in 1963 based on his discovery of the functions of lysosome. The identification of autophagy-related genes in yeast in the 1990s allowed researchers to deduce the mechanisms of autophagy, which eventually led to the award of the 2016 Nobel Prize in Physiology or Medicine to Japanese researcher Yoshinori Ohsumi.
History:
Autophagy was first observed by Keith R. Porter and his student Thomas Ashford at the Rockefeller Institute. In January 1962 they reported an increased number of lysosomes in rat liver cells after the addition of glucagon, and that some lysosomes displaced towards the centre of the cell contained other cell organelles such as mitochondria. They called this autolysis after Christian de Duve and Alex B. Novikoff. However, Porter and Ashford wrongly interpreted their data as lysosome formation (ignoring the pre-existing organelles). They believed lysosomes could not be cell organelles, but rather parts of the cytoplasm, such as mitochondria, and that hydrolytic enzymes were produced by microbodies. In 1963 Hruban, Spargo and colleagues published a detailed ultrastructural description of "focal cytoplasmic degradation", which referenced a 1955 German study of injury-induced sequestration. Hruban, Spargo and colleagues recognized three continuous stages of maturation of the sequestered cytoplasm to lysosomes, and that the process was not limited to injury states but also functioned under physiological conditions for "reutilization of cellular materials" and the "disposal of organelles" during differentiation. Inspired by this discovery, de Duve christened the phenomenon "autophagy". Unlike Porter and Ashford, de Duve conceived the term as a part of lysosomal function while describing the role of glucagon as a major inducer of cell degradation in the liver. With his student Russell Deter, he established that lysosomes are responsible for glucagon-induced autophagy. This was the first time lysosomes were established as the sites of intracellular autophagy. In the 1990s several groups of scientists independently discovered autophagy-related genes using the budding yeast. Notably, Yoshinori Ohsumi and Michael Thumm examined starvation-induced non-selective autophagy; in the meantime, Daniel J. Klionsky discovered the cytoplasm-to-vacuole targeting (CVT) pathway, which is a form of selective autophagy. They soon found that they were in fact looking at essentially the same pathway, just from different angles. Initially, the genes discovered by these and other yeast groups were given different names (APG, AUT, CVT, GSA, PAG, PAZ, and PDD). A unified nomenclature was advocated in 2003 by the yeast researchers to use ATG to denote autophagy genes. The 2016 Nobel Prize in Physiology or Medicine was awarded to Yoshinori Ohsumi, although some have pointed out that the award could have been more inclusive. The field of autophagy research experienced accelerated growth at the turn of the 21st century. Knowledge of ATG genes provided scientists more convenient tools to dissect functions of autophagy in human health and disease. In 1999, a landmark discovery connecting autophagy with cancer was published by Beth Levine's group. To this date, the relationship between cancer and autophagy continues to be a main theme of autophagy research. The roles of autophagy in neurodegeneration and immune defense also received considerable attention. In 2003, the first Gordon Research Conference on autophagy was held at Waterville. In 2005, Daniel J Klionsky launched Autophagy, a scientific journal dedicated to this field. The first Keystone Symposia Conference on autophagy was held in 2007 at Monterey. In 2008, Carol A Mercer created a BHMT fusion protein (GST-BHMT), which showed starvation-induced site-specific fragmentation in cell lines.
The degradation of betaine homocysteine methyltransferase (BHMT), a metabolic enzyme, could be used to assess autophagy flux in mammalian cells.
History:
Macroautophagy, microautophagy, and chaperone-mediated autophagy are mediated by autophagy-related genes and their associated enzymes. Macroautophagy is further divided into bulk and selective autophagy. Selective autophagy is the autophagy of specific organelles and includes mitophagy, lipophagy, pexophagy, chlorophagy, ribophagy, and others.
History:
Macroautophagy is the main pathway, used primarily to eradicate damaged cell organelles or unused proteins. First the phagophore engulfs the material that needs to be degraded, which forms a double membrane known as an autophagosome, around the organelle marked for destruction. The autophagosome then travels through the cytoplasm of the cell to a lysosome in mammals, or vacuoles in yeast and plants, and the two organelles fuse. Within the lysosome/vacuole, the contents of the autophagosome are degraded via acidic lysosomal hydrolase. Microautophagy, on the other hand, involves the direct engulfment of cytoplasmic material into the lysosome. This occurs by invagination, meaning the inward folding of the lysosomal membrane, or cellular protrusion. Chaperone-mediated autophagy, or CMA, is a very complex and specific pathway, which involves recognition by the hsc70-containing complex. This means that a protein must contain the recognition site for this hsc70 complex which will allow it to bind to this chaperone, forming the CMA-substrate/chaperone complex. This complex then moves to the lysosomal membrane-bound protein that will recognise and bind with the CMA receptor. Upon recognition, the substrate protein gets unfolded and it is translocated across the lysosome membrane with the assistance of the lysosomal hsc70 chaperone. CMA is significantly different from other types of autophagy because it translocates protein material in a one-by-one manner, and it is extremely selective about what material crosses the lysosomal barrier. Mitophagy is the selective degradation of mitochondria by autophagy. It often occurs to defective mitochondria following damage or stress. Mitophagy promotes the turnover of mitochondria and prevents the accumulation of dysfunctional mitochondria which can lead to cellular degeneration. It is mediated by Atg32 (in yeast) and NIX and its regulator BNIP3 in mammals. Mitophagy is regulated by PINK1 and parkin proteins. The occurrence of mitophagy is not limited to damaged mitochondria but also involves undamaged ones. Lipophagy is the degradation of lipids by autophagy, a function which has been shown to exist in both animal and fungal cells. The role of lipophagy in plant cells, however, remains elusive. In lipophagy the targets are lipid structures called lipid droplets (LDs), spherical "organelles" with a core of mainly triacylglycerols (TAGs) and a unilayer of phospholipids and membrane proteins. In animal cells the main lipophagic pathway is via the engulfment of LDs by the phagophore (macroautophagy). In fungal cells, on the other hand, microlipophagy constitutes the main pathway and is especially well studied in the budding yeast Saccharomyces cerevisiae. Lipophagy was first discovered in mice and published in 2009.
Targeted interplay between bacterial pathogens and host autophagy:
Autophagy targets genus-specific proteins, so orthologous proteins which share sequence homology with each other are recognized as substrates by a particular autophagy targeting protein. There exists a complementarity of autophagy targeting proteins which potentially increases infection risk upon mutation. The lack of overlap among the targets of the three autophagy proteins and the large overlap in terms of the genera show that autophagy could target different sets of bacterial proteins from the same pathogen. On one hand, the redundancy in targeting the same genera is beneficial for robust pathogen recognition. On the other hand, the complementarity in the specific bacterial proteins could make the host more susceptible to chronic disorders and infections if the gene encoding one of the autophagy targeting proteins becomes mutated and the autophagy system is overloaded or suffers other malfunctions. Moreover, autophagy targets virulence factors; virulence factors responsible for more general functions, such as nutrient acquisition and motility, are recognized by multiple autophagy targeting proteins, while specialized virulence factors, such as autolysins and iron-sequestering proteins, are potentially recognized uniquely by a single autophagy targeting protein. The autophagy proteins CALCOCO2/NDP52 and MAP1LC3/LC3 may have evolved specifically to target pathogens or pathogenic proteins for autophagic degradation, while SQSTM1/p62 targets more generic bacterial proteins that contain a target motif but are not related to virulence. On the other hand, bacterial proteins from various pathogenic genera are also able to modulate autophagy. There are genus-specific patterns in the phases of autophagy that are potentially regulated by a given pathogen group. Some autophagy phases can only be modulated by particular pathogens, while some phases are modulated by multiple pathogen genera. Some of the interplay-related bacterial proteins have proteolytic and post-translational activity, such as phosphorylation and ubiquitination, and can interfere with the activity of autophagy proteins.
Molecular biology:
Autophagy is executed by autophagy-related (Atg) genes. Prior to 2003, ten or more names were used, but after this point a unified nomenclature was devised by fungal autophagy researchers. Atg or ATG stands for autophagy related. It does not specify a gene or a protein. The first autophagy genes were identified by genetic screens conducted in Saccharomyces cerevisiae. Following their identification, those genes were functionally characterized and their orthologs in a variety of different organisms were identified and studied. Today, thirty-six Atg proteins have been classified as especially important for autophagy, of which 18 belong to the core machinery. In mammals, amino acid sensing and additional signals such as growth factors and reactive oxygen species regulate the activity of the protein kinases mTOR and AMPK. These two kinases regulate autophagy through inhibitory phosphorylation of the Unc-51-like kinases ULK1 and ULK2 (mammalian homologues of Atg1). Induction of autophagy results in the dephosphorylation and activation of the ULK kinases. ULK is part of a protein complex containing Atg13, Atg101 and FIP200. ULK phosphorylates and activates Beclin-1 (mammalian homologue of Atg6), which is also part of a protein complex. The autophagy-inducible Beclin-1 complex contains the proteins PIK3R4 (p150), Atg14L and the class III phosphatidylinositol 3-phosphate kinase (PI(3)K) Vps34. The active ULK and Beclin-1 complexes re-localize to the site of autophagosome initiation, the phagophore, where they both contribute to the activation of downstream autophagy components. Once active, VPS34 phosphorylates the lipid phosphatidylinositol to generate phosphatidylinositol 3-phosphate (PtdIns(3)P) on the surface of the phagophore. The generated PtdIns(3)P is used as a docking point for proteins harboring a PtdIns(3)P binding motif. WIPI2, a PtdIns(3)P binding protein of the WIPI (WD-repeat protein interacting with phosphoinositides) protein family, was recently shown to physically bind ATG16L1. Atg16L1 is a member of an E3-like protein complex involved in one of two ubiquitin-like conjugation systems essential for autophagosome formation. The FIP200 cis-Golgi-derived membranes fuse with ATG16L1-positive endosomal membranes to form the prophagophore termed HyPAS (hybrid pre-autophagosomal structure). ATG16L1 binding to WIPI2 mediates ATG16L1's activity. This leads to downstream conversion of the prophagophore into an ATG8-positive phagophore via a ubiquitin-like conjugation system.
Molecular biology:
The first of the two ubiquitin-like conjugation systems involved in autophagy covalently binds the ubiquitin-like protein Atg12 to Atg5. The resulting conjugate protein then binds ATG16L1 to form an E3-like complex which functions as part of the second ubiquitin-like conjugation system. This complex binds and activates Atg3, which covalently attaches mammalian homologues of the ubiquitin-like yeast protein ATG8 (LC3A-C, GATE16, and GABARAPL1-3), the most studied being the LC3 proteins, to the lipid phosphatidylethanolamine (PE) on the surface of autophagosomes. Lipidated LC3 contributes to the closure of autophagosomes, and enables the docking of specific cargos and adaptor proteins such as Sequestosome-1/p62. The completed autophagosome then fuses with a lysosome through the actions of multiple proteins, including SNAREs and UVRAG. Following the fusion, LC3 is retained on the vesicle's inner side and degraded along with the cargo, while the LC3 molecules attached to the outer side are cleaved off by Atg4 and recycled. The contents of the autolysosome are subsequently degraded and their building blocks are released from the vesicle through the action of permeases. Sirtuin 1 (SIRT1) stimulates autophagy by preventing acetylation of proteins (via deacetylation) required for autophagy, as demonstrated in cultured cells and embryonic and neonatal tissues. This function provides a link between sirtuin expression and the cellular response to limited nutrients due to caloric restriction.
Functions:
Nutrient starvation Autophagy has roles in various cellular functions. One particular example is in yeasts, where nutrient starvation induces a high level of autophagy. This allows unneeded proteins to be degraded and the amino acids recycled for the synthesis of proteins that are essential for survival. In higher eukaryotes, autophagy is induced in response to the nutrient depletion that occurs in animals at birth, after the severing of the trans-placental food supply, as well as in nutrient-starved cultured cells and tissues. Mutant yeast cells that have a reduced autophagic capability rapidly perish in nutrition-deficient conditions. Studies on the apg mutants suggest that autophagy via autophagic bodies is indispensable for protein degradation in the vacuoles under starvation conditions, and that at least 15 APG genes are involved in autophagy in yeast. A gene known as ATG7 has been implicated in nutrient-mediated autophagy, as mice studies have shown that starvation-induced autophagy was impaired in atg7-deficient mice.
Functions:
Infection
Vesicular stomatitis virus is believed to be taken up by the autophagosome from the cytosol and translocated to the endosomes where detection takes place by a pattern recognition receptor called toll-like receptor 7, detecting single stranded RNA. Following activation of the toll-like receptor, intracellular signaling cascades are initiated, leading to induction of interferon and other antiviral cytokines. A subset of viruses and bacteria subvert the autophagic pathway to promote their own replication. Galectin-8 has recently been identified as an intracellular "danger receptor", able to initiate autophagy against intracellular pathogens. When galectin-8 binds to a damaged vacuole, it recruits an autophagy adaptor such as NDP52 leading to the formation of an autophagosome and bacterial degradation.
Functions:
Repair mechanism
Autophagy degrades damaged organelles, cell membranes and proteins, and insufficient autophagy is thought to be one of the main reasons for the accumulation of damaged cells and aging. Autophagy and autophagy regulators are involved in response to lysosomal damage, often directed by galectins such as galectin-3 and galectin-8.
Functions:
Programmed cell death
One of the mechanisms of programmed cell death (PCD) is associated with the appearance of autophagosomes and depends on autophagy proteins. This form of cell death most likely corresponds to a process that has been morphologically defined as autophagic PCD. One question that constantly arises, however, is whether autophagic activity in dying cells is the cause of death or is actually an attempt to prevent it. Morphological and histochemical studies have not so far proved a causative relationship between the autophagic process and cell death. In fact, there have recently been strong arguments that autophagic activity in dying cells might actually be a survival mechanism. Studies of the metamorphosis of insects have shown cells undergoing a form of PCD that appears distinct from other forms; these have been proposed as examples of autophagic cell death. Recent pharmacological and biochemical studies have proposed that survival and lethal autophagy can be distinguished by the type and degree of regulatory signaling during stress, particularly after viral infection. Although promising, these findings have not been examined in non-viral systems.
Exercise:
Autophagy is essential for basal homeostasis; it is also extremely important in maintaining muscle homeostasis during physical exercise. Autophagy at the molecular level is only partially understood. A study of mice shows that autophagy is important for the ever-changing demands of their nutritional and energy needs, particularly through the metabolic pathways of protein catabolism. In a 2012 study conducted by the University of Texas Southwestern Medical Center in Dallas, mutant mice (with a knock-in mutation of BCL2 phosphorylation sites to produce progeny that showed normal levels of basal autophagy yet were deficient in stress-induced autophagy) were tested to challenge this theory. Results showed that, when compared to a control group, these mice displayed a decrease in endurance and an altered glucose metabolism during acute exercise.

Another study demonstrated that skeletal muscle fibers of collagen VI knockout mice showed signs of degeneration due to an insufficiency of autophagy, which led to an accumulation of damaged mitochondria and excessive cell death. Exercise-induced autophagy was unsuccessful in these mice; however, when autophagy was induced artificially post-exercise, the accumulation of damaged organelles in collagen VI deficient muscle fibres was prevented and cellular homeostasis was maintained. Both studies demonstrate that autophagy induction may contribute to the beneficial metabolic effects of exercise and that it is essential for maintaining muscle homeostasis during exercise, particularly in collagen VI fibers.

Work at the Institute for Cell Biology, University of Bonn, showed that a certain type of autophagy, chaperone-assisted selective autophagy (CASA), is induced in contracting muscles and is required for maintaining the muscle sarcomere under mechanical tension. The CASA chaperone complex recognizes mechanically damaged cytoskeleton components and directs these components through a ubiquitin-dependent autophagic sorting pathway to lysosomes for disposal. This is necessary for maintaining muscle activity.
Osteoarthritis:
Because autophagy decreases with age, and age is a major risk factor for osteoarthritis, autophagy has been suggested to play a role in the development of this disease. Proteins involved in autophagy are reduced with age in both human and mouse articular cartilage. Mechanical injury to cartilage explants in culture also reduced autophagy proteins. Autophagy is constantly activated in normal cartilage, but it is compromised with age, and this decline precedes cartilage cell death and structural damage. Thus autophagy is involved in a normal protective process (chondroprotection) in the joint.
Cancer:
Cancer often occurs when several different pathways that regulate cell differentiation are disturbed. Autophagy plays an important role in cancer – both in protecting against cancer and in potentially contributing to its growth. Autophagy can contribute to cancer by promoting survival of tumor cells that have been starved, or that degrade apoptotic mediators through autophagy: in such cases, use of inhibitors of the late stages of autophagy (such as chloroquine) on the cells that use autophagy to survive increases the number of cancer cells killed by antineoplastic drugs.

The role of autophagy in cancer has been highly researched and reviewed. There is evidence that emphasizes the role of autophagy both as a tumor suppressor and as a factor in tumor cell survival. Recent research has shown, however, that autophagy is more likely to act as a tumor suppressor, according to several models.
Cancer:
Tumor suppressor
Several experiments have been done with mice and varying Beclin1, a protein that regulates autophagy. When the Beclin1 gene was altered to be heterozygous (Beclin 1+/-), the mice were found to be tumor-prone. However, when Beclin1 was overexpressed, tumor development was inhibited. Care should be exercised when interpreting phenotypes of beclin mutants and attributing the observations to a defect in autophagy, however: Beclin1 is generally required for phosphatidylinositol 3-phosphate production, and as such it affects numerous lysosomal and endosomal functions, including endocytosis and endocytic degradation of activated growth factor receptors. Supporting the possibility that Beclin1 affects cancer development through an autophagy-independent pathway, core autophagy factors that are not known to affect other cellular processes, and certainly not cell proliferation or cell death, such as Atg7 or Atg5, show a much different phenotype when the respective gene is knocked out, one that does not include tumor formation. In addition, full knockout of Beclin1 is embryonic lethal whereas knockout of Atg7 or Atg5 is not.
Cancer:
Necrosis and chronic inflammation have also been shown to be limited through autophagy, which helps protect against the formation of tumor cells.
Mechanism of cell death
Cells that undergo an extreme amount of stress experience cell death either through apoptosis or necrosis. Prolonged autophagy activation leads to a high turnover rate of proteins and organelles. A high rate above the survival threshold may kill cancer cells with a high apoptotic threshold. This technique can be utilized as a therapeutic cancer treatment.
Cancer:
Tumor cell survival
Alternatively, autophagy has also been shown to play a large role in tumor cell survival. In cancerous cells, autophagy is used as a way to deal with stress on the cell. Induction of autophagy by miRNA-4673, for example, is a pro-survival mechanism that improves the resistance of cancer cells to radiation. Once these autophagy-related genes were inhibited, cell death was potentiated. The increase in metabolic energy demand is offset by autophagy functions. These metabolic stresses include hypoxia, nutrient deprivation, and an increase in proliferation. These stresses activate autophagy in order to recycle ATP and maintain survival of the cancerous cells. Autophagy has been shown to enable continued growth of tumor cells by maintaining cellular energy production. By inhibiting autophagy genes in these tumor cells, regression of the tumor and extended survival of the organs affected by the tumors were found. Furthermore, inhibition of autophagy has also been shown to enhance the effectiveness of anticancer therapies.
Cancer:
Therapeutic target
New developments in research have found that targeted autophagy may be a viable therapeutic solution in fighting cancer. As discussed above, autophagy plays a role in both tumor suppression and tumor cell survival. Thus, the qualities of autophagy can be used as a strategy for cancer prevention. The first strategy is to induce autophagy and enhance its tumor suppression attributes. The second strategy is to inhibit autophagy and thus induce apoptosis.

The first strategy has been tested by looking at dose-response anti-tumor effects during autophagy-induced therapies. These therapies have shown that autophagy increases in a dose-dependent manner. This is directly related to the growth of cancer cells in a dose-dependent manner as well. These data support the development of therapies that will encourage autophagy. Secondly, inhibiting the protein pathways directly known to induce autophagy may also serve as an anticancer therapy.

The second strategy is based on the idea that autophagy is a protein degradation system used to maintain homeostasis, and on the findings that inhibition of autophagy often leads to apoptosis. Inhibition of autophagy is riskier, as it may lead to cell survival instead of the desired cell death.
Cancer:
Negative regulators of autophagy
Negative regulators of autophagy, such as mTOR, cFLIP, EGFR, GAPR-1, and Rubicon, are orchestrated to function within different stages of the autophagy cascade. The end-products of autophagic digestion may also serve as a negative-feedback regulatory mechanism to stop prolonged activity.
The interface between inflammation and autophagy:
Regulators of autophagy control regulators of inflammation, and vice versa.
The interface between inflammation and autophagy:
Cells of vertebrate organisms normally activate inflammation to enhance the capacity of the immune system to clear infections and to initiate the processes that restore tissue structure and function. Therefore, it is critical to couple regulation of mechanisms for removal of cellular and bacterial debris to the principal factors that regulate inflammation: The degradation of cellular components by the lysosome during autophagy serves to recycle vital molecules and generate a pool of building blocks to help the cell respond to a changing microenvironment. Proteins that control inflammation and autophagy form a network that is critical for tissue functions, which is dysregulated in cancer: In cancer cells, aberrantly expressed and mutant proteins increase the dependence of cell survival on the “rewired” network of proteolytic systems that protects malignant cells from apoptotic proteins and from recognition by the immune system. This renders cancer cells vulnerable to intervention on regulators of autophagy.
Parkinson’s disease:
Parkinson's disease is a neurodegenerative disorder partially caused by the death of brain and brain stem cells in many nuclei, such as the substantia nigra. Parkinson's disease is characterized by inclusions of a protein called alpha-synuclein (Lewy bodies) in affected neurons that cells cannot break down. Deregulation of the autophagy pathway and mutation of alleles regulating autophagy are believed to cause neurodegenerative diseases. Autophagy is essential for neuronal survival. Without efficient autophagy, neurons accumulate ubiquitinated protein aggregates and degenerate. Ubiquitinated proteins are proteins that have been tagged with ubiquitin for degradation. Mutations of synuclein alleles lead to an increase in lysosomal pH and to hydrolase inhibition. As a result, the degradative capacity of lysosomes is decreased. There are several genetic mutations implicated in the disease, including loss of function of PINK1 and Parkin. Loss of function in these genes can lead to the accumulation of damaged mitochondria and of protein aggregates, which can in turn lead to cellular degeneration. Mitochondria are also involved in Parkinson's disease: in idiopathic Parkinson's disease, the disease is commonly caused by dysfunctional mitochondria, cellular oxidative stress, autophagic alterations and the aggregation of proteins. These can lead to mitochondrial swelling and depolarization.
Type 2 diabetes:
Excessive activity of the crinophagy form of autophagy in the insulin-producing beta cells of the pancreas could reduce the quantity of insulin available for secretion, leading to type 2 diabetes.
Significance of autophagy as a drug target:
Since dysregulation of autophagy is involved in the pathogenesis of a broad range of diseases, great efforts are invested to identify and characterize small synthetic or natural molecules that can regulate it. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CDKL5 deficiency disorder**
CDKL5 deficiency disorder:
CDKL5 deficiency disorder (CDD) is a rare genetic disorder caused by pathogenic variants in the gene CDKL5.
Signs and symptoms:
The symptoms of CDD include early infantile onset refractory epilepsy; hypotonia; developmental, intellectual, and motor disabilities, with little or no speech; and cortical visual impairment. Patients usually present first with seizures within the first months of life, followed by infantile spasms which progress to epileptic seizures that are largely refractory to treatment. Development of gross motor skills, such as sitting, standing, and walking, is severely delayed, along with restricted fine motor skills. About one-third of affected individuals can ambulate with assistance, but most CDD patients rely on wheelchairs. Additional features include repetitive hand movements (stereotypies), such as clapping, hand licking, and hand sucking; tooth grinding (bruxism); disrupted sleep; feeding difficulties; gastrointestinal problems including constipation and gastroesophageal reflux. Some patients show irregular breathing.
Cause:
CDD is caused by pathogenic variants in the gene CDKL5. This gene provides instructions for making a protein (cyclin-dependent kinase-like 5) that is essential for normal brain development and function. The CDKL5 protein is widely expressed in the brain, predominantly in nerve cells (neurons), with roles in cell proliferation, neuronal migration, axonal outgrowth, dendritic morphogenesis, and synapse development. Inheritance pattern: the CDKL5 gene is located on the X chromosome, but nearly all known pathogenic mutations are de novo rather than inherited from an affected mother or father; the profound neurodevelopmental disabilities of CDD patients make it extremely unlikely they would have children. There is one reported case of an inherited CDKL5 mutation: a mother carried a CDKL5 mutation on one X chromosome but was high functioning and showed only mild cognitive impairment. The mother's mutant CDKL5 allele was skewed in its X-inactivation, being expressed in only 20% of circulating lymphoblasts. However, her daughter, who was diagnosed with CDD, expressed the mutant allele in 50% of her circulating lymphoblasts.
Cause:
Females: a mutation in one of the two copies of the CDKL5 gene in each cell causes the disorder. Males: a mutation in the only copy of the gene causes the disorder.
Diagnosis:
For the clinical diagnosis of CDKL5 Deficiency Disorder, minimal diagnostic criteria have been established, including motor and cognitive delays, epilepsy with onset within the first year of life, and the presence of a pathogenic or likely-pathogenic mutation of the CDKL5 gene. While initial diagnosis is based mostly on clinical suspicion, definitive diagnosis requires confirmation by genetic testing. The first presentation of epileptic seizures within the first few months of life would suggest a possible diagnosis of CDD. Initial clinical testing for differential diagnosis may include MRI and CSF testing for structural or infectious etiologies; however, CDKL5 is now widely included in DNA sequence-based molecular diagnostic gene panels for infantile epilepsy, allowing more rapid and precise diagnosis. Note: many adolescents and young adults may have CDD but were never tested, since such tests were not available when they were infants; epilepsy panels covering CDKL5 and other genes should therefore be considered in such individuals. A diagnostic ICD-10 code has been assigned to CDKL5 deficiency disorder: G40.42 (since 2020).
Treatment:
Antiseizure medications (ASMs) are used to manage seizures; however, in most cases, control is partial or transient. Commonly used ASMs for seizures include valproic acid, clobazam, vigabatrin, felbamate, steroids, and lamotrigine, although comprehensive data on the efficacy and safety of ASMs in CDD are limited; medications for infantile spasms include ACTH, prednisolone, and vigabatrin. Clinical trials support the efficacy of some new ASMs. Currently, there are no specifically approved therapeutics for the symptoms of CDD, although clinical trials for the treatment of CDD symptoms are currently underway, as both phase 2 and phase 3 studies. Medications to control GI and sleep disturbances are often prescribed. Therapies, including physical, occupational, and vision therapy, are recommended. Specialized diets, such as the ketogenic diet, have been reported to help manage seizures, though the effect is often partial and transitory.
Prognosis:
The long-term prognosis for patients with CDD is not fully known, as the disorder was identified only approximately ten years ago. Clinical research on the natural history of CDD is still required, but some CDD patients are known to be over 60 years of age. The average life expectancy for CDD patients remains unknown.
Epidemiology:
The incidence rate of CDD is ~1 in 42,000 live births. This is based on both the calculated incidence rate for CDKL5 pathogenic mutations in a study population, as well as comparison studies in genetic testing cohorts, in which the frequency of CDKL5 mutations is compared to that of genes whose associated disorders have more robust incidence estimates, such as SCN1A for Dravet syndrome.
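The comparative estimate described above reduces to simple ratio arithmetic. The sketch below illustrates the method only; the cohort counts and the Dravet reference incidence used here are hypothetical placeholders chosen so the output lands near the quoted figure, not numbers from any cited study.

```python
# Illustrative sketch of the comparative incidence method described above.
# All inputs are hypothetical placeholders, not data from any cited study.

dravet_incidence = 1 / 15_700   # assumed reference incidence for Dravet syndrome (hypothetical)
scn1a_hits = 157                # pathogenic SCN1A findings in a testing cohort (hypothetical)
cdkl5_hits = 59                 # pathogenic CDKL5 findings in the same cohort (hypothetical)

# If both genes are ascertained similarly, the ratio of findings in the cohort
# approximates the ratio of the two disorders' incidence rates.
cdd_incidence = dravet_incidence * (cdkl5_hits / scn1a_hits)
print(f"Estimated CDD incidence: ~1 in {round(1 / cdd_incidence):,} live births")
```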
History:
CDD is a rare condition, although >1,000 cases have been reported worldwide; 80-90% of the cases are female. While originally classified as an atypical variant of Rett syndrome, CDKL5 Deficiency Disorder (CDD) is an independent disorder and results from a pathogenic variant in a different gene (CDKL5 in CDD; MECP2 in Rett). The FDA accepted the indication and approved the first pivotal trials specifically for CDD, and in 2019 a diagnostic ICD-10 code was issued for CDD by the World Health Organization: G40.42.
Research:
The goal of understanding the genetics and molecular biology of CDD is to establish effective therapies for CDD, targeting the underlying biologic pathways. Novel therapeutics may include small molecules or genetic or genomic therapies.
Research:
Several efforts are underway to develop small molecule therapeutics to better control seizures, as well as provide management of other non-seizure symptoms, in CDD patients. These efforts include phase 2 and phase 3 trials already underway or completed, and others in earlier stages of development. If successful, these clinical studies may result in better symptomatic treatments that can provide significant benefit to patients and families in the short term.
Research:
In the long term, several independent efforts are advancing truly disease-modifying therapeutics, which are directed at the causative CDKL5 mutation itself. It is hoped that these disease-modifying therapies will provide broader and more durable therapeutic benefit, and perhaps eventual cures. They include publicly announced clinical and pre-clinical programs in AAV-based gene replacement; genome-targeting approaches such as base editing; and inactive X chromosome reactivation.
Research:
Clinical trials
Small molecule therapeutic development
Phase 3 clinical trial in CDD with ganaxolone (Marinus): On June 29, 2017, Marinus Pharmaceuticals announced that the US Food and Drug Administration had granted Orphan Drug Designation to their small molecule, ganaxolone, for the treatment of CDKL5 Deficiency Disorder.
Research:
On November 13, 2019, ganaxolone was also awarded Orphan Designation by the European Commission for treatment of CDKL5 Deficiency Disorder.
Phase 2 clinical trial in CDD with soticlestat (Ovid), a novel medication that modulates an enzyme which is thought to impact the N-methyl-D-aspartate receptor system.
Open-label phase 2 clinical trial in CDD with cannabidiol (Epidiolex®).
New Phase 2 clinical trial in CDD with fenfluramine.
Gene therapy
Amicus announced a collaboration around a new AAV (gene therapy)-based technology to complement their enzyme-replacement therapy in development for CDD. Several public presentations have been made on pre-clinical AAV-based gene therapy programs from Ultragenyx and the UPenn Gene Therapy Program (see abstracts of the American Society of Gene and Cell Therapy, May 2020).
Molecular biology of CDD is revealing further opportunities in precision therapy. Nonsense mutations in the CDKL5 gene could be suppressed by compounds such as ataluren (PTC Therapeutics), or similar next-generation translation stop readthrough compounds.
Research:
Genome editing: pre-clinical programs in Prime-Editing-directed correction of mutations in the CDKL5 gene have been reported (CDKL5 Forum 2020).
X reactivation: in female patients heterozygous for the CDKL5 mutation, each cell expressing the mutant protein also carries a fully functional, but silent, CDKL5 gene copy on the inactivated X chromosome. One strategy for treatment of girls with CDD is thus to re-activate the silent CDKL5 gene on the inactivated X chromosome. This approach is currently in pre-clinical development.
Studies of molecular pathway abnormalities in CDD rodent models may suggest additional possible therapies, such as protein substitution.
Research:
Within the research community, the Loulou Foundation's annual meetings with scientists and drug developers have become the largest conference focusing on CDD biology and therapeutic development. Companies such as Takeda and Ovid Therapeutics have also made vital contributions to research and therapeutic development, which the Loulou Foundation recognized with its Company Making a Difference Award for the initiation of the Phase 2 ARCADE trial with OV935/TAK-935. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Corner Houses**
Corner Houses:
Corner Houses (Chinese: 街角樓) are a type of building located at the junction of two or three roads.
Hong Kong:
Corner houses are buildings located at road junctions. In Hong Kong, buildings must meet certain specifications, which is why corner houses are so common on Hong Kong Island and in Kowloon.
Corner houses originate from the Composite Buildings of Hong Kong. They were popularized in the 1950s and the 1960s. Most corner houses are fourth-generation tong lau, featuring rounded corners and lines.
Antonio Hermenegildo Basto currently holds the record for the most corner buildings designed in Hong Kong.
Locations
Hong Kong Island: Wan Chai, Causeway Bay, Sai Ying Pun, Shau Kei Wan
Kowloon: Sham Shui Po, Mong Kok, Tai Kok Tsui, To Kwa Wan, Cheung Sha Wan
Styles
Hanging signs in large facades.
Units in round corners are known as large units.
Round buildings are built in a Bauhaus style.
Notable buildings:
Hong Kong
14 Nam Cheong Street (Boundary Street and Nam Cheong Street)
May Wah Building (Wan Chai Road and Johnston Road)
Mido Cafe (Temple Street and Public Square Street)
New Lucky House (Nathan Road and Jordan Road)
Chung Wui Mansion (Wan Chai Road, Fleming Road, and Johnston Road)
Hing Wah Mansion (Babington Path, Park Road, St Stephen's Lane, and Oaklands Path)
Taiwan
Hayashi Department Store
United States
Flatiron Building (NYC)
UK
The Cornerhouse, Nottingham
Cornerhouse (Demolished) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Arts in Second Life**
Arts in Second Life:
Arts in Second Life refers to artistic activity within the 3D social network Second Life, which since 2003 has served as a platform for various artistic pursuits and exhibitions.
Art exhibits:
Second Life has created an environment where artists can display their works to an audience across the world. This has created an entire artistic culture in which many residents display art in the museums, galleries and homes they can buy or build using Second Life's powerful tools. Gallery openings even allow art patrons to "meet" and socialize with exhibiting artists and have even led to many real-life sales.
Art exhibits:
Numerous art gallery simulations (called "sims") abound in Second Life. Among the more popular galleries are the Sisse Singhs Art Gallery, the Windlight Art Gallery and the Horus Art Gallery. Among the most notable of these was the art gallery sim Cetus Gallery District, the world's first virtual online urban arts district. Cetus was modeled on real-world analogs such as New York's Chelsea gallery district as a mixed-use arts community of virtual galleries, offices, loft apartments, and coffee houses. Its many tenant-run businesses featured weekly live music performances, gallery openings, and literary events such as the virtual book launch for "Coming of Age in Second Life: An Anthropologist Explores the Virtually Human" by Tom Boellstorff (Princeton University Press; 2008). Cetus was chosen Best Cultural Site in Second Life in 2007, and its creator avatar Xander Ruttan (real-world arts professional Aaron Collins of California) was among the most influential art-world avatars in SL. Cetus resulted in many ongoing collaborative efforts among the SL community of artists, designers, writers, and virtual builders from across the real and virtual worlds. (Cetus was later bought by virtual artist DB Bailey and converted into a personal art project.)
Art exhibits:
Second Life's modeling tools also allow artists to create new forms of art that in many ways are not possible in real life due to physical constraints or high associated costs. The virtual arts are visible in over 2050 "museums" (according to SL's own search engine).

In 2008 Haydn Shaughnessy, a real-life gallerist, along with his wife Roos Demol, hired a New York-based architect, Benn Dunkley, to design a gallery in Second Life. Dunkley's goal was to design an interactive gallery, built with art in mind, in a virtual world. "Ten Cubed" is a radical departure in art exhibition, a futuristically designed gallery showcasing art in a unique setting. On January 31, 2008, "Ten Cubed" was launched. For its inaugural exhibition, Crossing the Void II, owner and curator Shaughnessy selected five artists working in and with modern technologies. These artists included Chris Ashley, based in Oakland, California; Jon Coffelt, based in New York, New York; Claire Keating, based in Cork, Ireland; Scott Kildall, based in San Francisco, California; and Nathaniel Stern, originally based in New York, New York, now in Dublin, Ireland. Real-life as well as Second Life editions are available from the gallery.
Art exhibits:
The virtual creations from the metaverse are brought into real life by initiatives such as Fabjectory (statuettes) and Secondlife-Art.com (oil paintings).

In April 2007 the huge Crossworlds Gallery opened its doors in Second Life; its aim was to create an open space for art in virtual worlds. Also in 2007, artists Adam Nash, Christopher Dodds and Justin Clemens won a A$20,000 Second Life Artists in Residence grant from the Australia Council for the Arts. Their Babelswarm installation was launched in Second Life and at the Lismore Regional Gallery in NSW, Australia, on April 11, 2008, by Australia Council Chairman James Strong.
Art exhibits:
In 2008, the French artist Fred Forest entered the virtual world of Second Life to show his art project for the first time in his country. He inaugurated his "Experimental Center of the Territory of M2" ("Centre expérimental du territoire du M2"), where he invited politicians to discuss sustainable development and the digital identity card (Capucine.net). In another art project, an action called "l'art de la corrida", he addressed art institutions in France.
Live music:
Live music performances in Second Life take place in three distinctly different ways: With in-world voice chat, where the user dons a headset and microphone and then enables a Second Life browser to "broadcast" his voice to other users, much like a telephone conference call.
Live music:
With streaming, where vocal and instrumental music by Second Life residents can be provided with the aid of Internet broadcast software, such as Shoutcast. This is input, via microphones, instruments or other audio sources, into computer audio interfaces and streamed live to audio servers. Similar to webcast radio, the audio stream from the live performance can be received in Second Life for the enjoyment of other Residents on their computer speakers. This started with performances by Astrin Few in May 2004 and began to gain popularity mid-2005. For example, the UK band Passenger performed on the Menorca Island in mid-2006. Another UK band, Redzone, toured in Second Life in February 2007.
Live music:
With inworld samples, where sound samples are uploaded and an inworld user interface – instruments – is made to trigger them. Unlike streaming, performing with inworld samples makes use of the Second Life environment and creates a three-dimensional sound experience for the audience. The Avatar Orchestra Metaverse is the most prolific representative of this approach.

Linden Lab added an Event Category "Live Music" in March 2006 to accommodate the increasing number of scheduled events. By the beginning of 2008, scheduled live music performance events in Second Life spanned every musical genre, and included hundreds of live musicians and DJs who perform on a regular basis. A typical day in Second Life will feature dozens of live music performances.
Live music:
In 2008 the UK act Redzone announced they would release their new live album only via Second Life. Redzone also began choreographing and synchronising their performances via MIDI in October 2008. Many amateur performers start their music careers in Second Life by performing at virtual karaoke bars or open mics, then progress to performing for "pay", or Linden dollars, in-world.
Filming with machinima:
Second Life is popular for filming with machinima. Virtual worlds can contain all aspects of real world filming techniques as well as many more not possible in the real world. It is far easier to create 3D objects in Second Life and film them than create them from 'scratch' using traditional CGI software.
There are many machinima and performing arts groups that are active in Second Life and which participate in creative events such as the annual 48 Hour Film Project. There are also several machinima groups that actively promote the works of Second Life artists such as Machinima Mondays, Rezzed TV, MAGE Magazine and the Machinima Artist's guild.
Theater:
Live theater is presented in Second Life. The SL Shakespeare Company performed an act from Hamlet live in February 2008. In 2009, the company produced scenes from Twelfth Night.
Theater:
In 2007 Johannes von Matuschka and Daniel Michelis developed Wunderland, an interactive SL theatre play at the Schaubühne am Lehniner Platz in Berlin, Germany. In 2007, HBO hosted a comedy festival in Second Life, using live streaming audio. In March 2009, SL residents staged a two-day Virtually Funny Comedy Festival to "help build awareness for Comic Relief, Red Nose Day 2009 and of course, comedy in Second Life."

In December 2008, The Learning Experience, a not-for-profit virtual education campus in Second Life, staged its first live theater events with the production of two short plays, A Matter of Husbands by Ferenc Molnár and Porcelain and Pink by F. Scott Fitzgerald. In 2009, the TLE theater company began producing full-length plays in Second Life, starting with The Importance of Being Earnest by Oscar Wilde in February, followed by Candida by George Bernard Shaw in April.

In 2008 the Avatar Repertory Theater company was set up; this is another theatre company that works within SL.

In 2009 the Department of Drama at the University of Calgary mounted four short productions in the New Media Consortium theater as part of a class in performance in non-traditional spaces. These plays were (a) Guppies (by Clem Martini) in March; (b) The Chocolate Affair (by Stephanie Alison Walker); (c) Kingdom of the Spider (by Nick Zagone); and (d) The Boy Who Cried Genie (by D. M. Bocaz-Larson).

In 2011/12, an all-furry performing arts troupe, Ravenswood Theatricals, was launched at their own venue with successful, non-commercial virtual renditions of Andrew Lloyd Webber's The Wizard of Oz and The Phantom of the Opera, the latter of which was received with glowing reviews. A number of further productions of established real-life pieces such as Les Misérables, Tanz der Vampire, Sunset Boulevard, and Into the Woods are reportedly planned, as well as a gala presentation of various musical numbers from upcoming productions.
Books:
There have been several books written about experiences in Second Life. Second Life Love is an example of such a book; it is a dialog between Per Olsen and Li Gang Qin about their partnership in Second Life. The authors have never seen each other in real life.
Other books include Second Life For Dummies by Sarah Robbins and Mark Bell, which was published in 2008 and provides assistance for new users of the virtual world, including basics, how to meet people, ideas for activities and places to visit, including how to access real life education in Second Life.
Books:
Additionally, there are many poetry volumes available on Lulu and in SL, including the "Blue Angel Landing" volumes 1 and 2, with a third due in 2019, which are compilations of poems written and read by poets in Second Life. Contributing editors include Persephone Phoenix, Huckleberry Hax, Hypatia Pickens and Grail Arnica. A generous collection of SL poetry books can be found in Klannex Northmead's poetry library.
Second Life artists:
Annabeth Robinson
Annabeth Robinson, or AngryBeth Shortbread, creates online performances and installations using Second Life. For example, Robinson contributed to the 'Kritical Works in SL' project in 2008 to create a sound installation called Ping Space. This piece involved two cubes reverberating sound between each other, which would only happen when one cube was 300 ft above the other. Other such work can be found on the Annabeth Robinson page.
Second Life artists:
Garrett Lynch
Garrett Lynch is an Irish new media artist working with networked technologies in a variety of forms including online art, installation, performance and writing. Since 2008 he has created a series of installation and performance works dealing with ideas of identity and place as they relate to networked spaces. In these works Lynch explores the "real" and the "virtual" through the transposing of his own identity to virtual worlds such as Second Life without any attempt to masquerade or imagine a new identity. This process involves the use of his real name for his "representation" or avatar, word play that references his name's origins as both real and Irish, and the continuous wearing of a sandwich board prop stating this.
Second Life artists:
In 2010/2011 he was artist in residence at HUMlab, the Yoshikaze Up-in-the-Air Residency. Outcomes of the residency have since been published as an artist's book and an article in Metaverse Creativity (Volume 2, Number 2). Lynch has performed, and continues to perform, with a custom-built scale reproduction of his Second Life representation's sandwich board, which has been worn at a number of exhibitions and performances.
Second Life artists:
Gleman Jun
Gleman Jun is an Italian artist in Second Life. Through dynamic effects of colors, lights and transparencies, he expresses his creativity while constantly evolving and transforming himself. In his case, a work of art is composed of two different elements: vision and technique. "Vision" is the image that passes through his mind suddenly. "Technique" is the experience that allows the memory to translate the vision into a "real" and shareable object.
Second Life artists:
Patrick Moya
Patrick Moya (born 1955 in Troyes, France) is a Southern French artist living in Nice on the French Riviera. He is a part of the artistic movement "Ecole de Nice". Since the 1970s, Moya has been at the forefront of harnessing the latest forms of media and technology to benefit art rather than render it extinct. He is an early pioneer of video art.
Second Life artists:
Second Front
Second Front is the first performance art group of Second Life. Founded in 2006, its current seven-member troupe includes Gazira Babeli (Italy), Yael Gilks (London), Bibbe Hansen (New York), Doug Jarvis (Victoria), Scott Kildall (San Francisco), Patrick Lichty (Chicago) and Liz Solo (St. John's).

Second Front members collaborate remotely and their performances have been shown live in New York, Los Angeles, Moscow, Brussels, Berlin, Vancouver and many other cities. The group has been written about in publications including Artforum, Art in America, RealTime Arts (Australia), Exibart (Italy) and Digital Art (Second Edition) by Christiane Paul (curator).
Second Life Art group:
SL Art is one of the most popular art groups in Second Life. Its goal is to have art in virtual worlds recognized at the same level as visual art in real life. Several Second Life publications work to promote Second Life art, including Windlight Magazine, the SL Newser, and the SL Enquirer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Welding power supply**
Welding power supply:
A welding power supply is a device that provides or modulates an electric current to perform arc welding. There are multiple arc welding processes, ranging from shielded metal arc welding (SMAW) to processes that use an inert shielding gas, such as gas metal arc welding (GMAW) and gas tungsten arc welding (GTAW). Welding power supplies primarily serve as devices that allow a welder to control whether current is alternating current (AC) or direct current (DC), as well as the amount of current and voltage. Power supplies for welding processes that use shielding gas also offer connections for gas and methods to control gas flow. The operator can set these factors within the parameters needed for the metal type, thickness, and technique to be used. The majority of welding power supplies do not generate power, instead functioning as controllable transformers that allow the operator to adjust electrical properties as needed. However, in some welding applications, notably SMAW used in areas isolated from power grids, welding power supplies are used that combine the functions of electrical generation and current modulation into a single mobile unit mounted on a vehicle or towed trailer.
Classification:
Welding machines are usually classified as constant current (CC) or constant voltage (CV); a constant current machine varies its output voltage to maintain a steady current while a constant voltage machine will fluctuate its output current to maintain a set voltage. Shielded metal arc welding and gas tungsten arc welding will use a constant current source and gas metal arc welding and flux-cored arc welding typically use constant voltage sources but constant current is also possible with a voltage sensing wire feeder.
Classification:
Constant current sources are used for welding operations that are performed manually, such as shielded metal arc welding or gas tungsten arc welding. Because these are manual processes, the arc length is not constant throughout the operation: it takes a very high degree of skill to keep the hand at exactly the same position above the workpiece throughout the weld. Using a constant current source ensures that even if the arc length changes, which causes a change in arc voltage, the welding current changes very little and the heat input into the weld zone remains more or less constant throughout the operation. The nature of the CV machine is required by gas metal arc welding and flux-cored arc welding because the welder is not able to control the arc length manually. If a welder were to attempt to use a CV machine for a shielded metal arc welding (SMAW) task, the small fluctuations in the arc distance would cause significant fluctuations in the machine's current output. With a CC machine the welder can count on a fixed number of amps reaching the material, regardless of how short or long the electric arc gets.
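The contrast between the two machine types can be made concrete with a toy numerical model. The sketch below assumes a simple linear arc characteristic (arc voltage rising with arc length); the constants are illustrative, not measurements of any particular machine.

```python
# Toy comparison of constant-current (CC) vs constant-voltage (CV) supplies.
# Assumes a linear arc characteristic V_arc = V0 + K * L; all constants are
# illustrative, not taken from a real machine.

V0, K = 15.0, 30.0       # arc volts at zero length; volts per cm of arc length
R_CIRCUIT = 0.05         # effective circuit resistance in ohms (hypothetical)

def cc_current(arc_len_cm: float, set_current: float = 150.0) -> float:
    """A CC machine holds current fixed and swings its output voltage instead."""
    return set_current

def cv_current(arc_len_cm: float, set_voltage: float = 25.0) -> float:
    """A CV machine holds voltage fixed, so current swings with arc length."""
    v_arc = V0 + K * arc_len_cm
    return max((set_voltage - v_arc) / R_CIRCUIT, 0.0)

for length in (0.15, 0.20, 0.25):   # hand wobble changing the arc length
    print(f"arc {length * 10:.1f} mm: "
          f"CC {cc_current(length):5.0f} A, CV {cv_current(length):5.0f} A")
```

With these numbers, a 1 mm change in arc length leaves the CC current untouched but swings the CV current by tens of amperes, which is why manual processes favor CC while wire-fed processes exploit CV's self-regulation.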
Power supply designs:
The welding power supplies most commonly seen can be categorized within the following types:
Transformer
A transformer-style welding power supply converts the moderate voltage and moderate current electricity from the utility mains (typically 230 or 115 VAC) into a high current and low voltage supply, typically between 17 and 45 (open-circuit) volts and 55 to 590 amperes. A rectifier converts the AC into DC on more expensive machines.
Power supply designs:
This design typically allows the welder to select the output current by variously moving a primary winding closer or farther from a secondary winding, moving a magnetic shunt in and out of the core of the transformer, using a series saturating reactor with a variable saturating technique in series with the secondary current output, or by simply permitting the welder to select the output voltage from a set of taps on the transformer's secondary winding. These transformer style machines are typically the least expensive.
Power supply designs:
The trade-off for the reduced expense is that pure transformer designs are often bulky and massive because they operate at the utility mains frequency of 50 or 60 Hz. Such low-frequency transformers must have a high magnetizing inductance to avoid wasteful shunt currents. The transformer may also have significant leakage inductance for short-circuit protection in the event of a welding rod becoming stuck to the workpiece. The leakage inductance may be variable so the operator can set the output current.
Power supply designs:
Generator and alternator
Welding power supplies may also use generators or alternators to convert mechanical energy into electrical energy. Modern designs are usually driven by an internal combustion engine, but older machines may use an electric motor to drive an alternator or generator. In this configuration the utility power is converted first into mechanical energy and then back into electrical energy to achieve a step-down effect similar to a transformer. Because the output of the generator can be direct current, or even a higher-frequency AC, these older machines can produce DC from AC without any need for rectifiers. They could also implement formerly used variations on so-called heliarc (now most often called TIG) welding, in which the need for a higher-frequency add-on module is avoided by having the alternator simply produce higher-frequency AC current directly.
Power supply designs:
Inverter
Since the advent of high-power semiconductors such as the insulated gate bipolar transistor (IGBT), it is now possible to build a switched-mode power supply capable of coping with the high loads of arc welding. These designs are known as inverter welding units. They generally first rectify the utility AC power to DC; then they switch (invert) the DC power into a stepdown transformer to produce the desired welding voltage or current. The switching frequency is typically 10 kHz or higher. Although the high switching frequency requires sophisticated components and circuits, it drastically reduces the bulk of the step down transformer, as the mass of magnetic components (transformers and inductors) that is required for achieving a given power level goes down rapidly as the operating (switching) frequency is increased. The inverter circuitry can also provide features such as power control and overload protection. The high frequency inverter-based welding machines are typically more efficient and provide better control of variable functional parameters than non-inverter welding machines.
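The mass reduction follows directly from the transformer EMF equation. A minimal sketch, assuming the idealized sinusoidal-excitation formula V_rms = 4.44·f·N·A_c·B_max and illustrative values (real inverter transformers use ferrite cores with lower usable flux density, which offsets part of the gain):

```python
# Why raising switching frequency shrinks the transformer: from the idealized
# transformer EMF equation V_rms = 4.44 * f * N * A_c * B_max, the required
# core cross-section A_c scales as 1/f for fixed voltage, turns and flux density.

V_RMS = 230.0   # primary voltage, volts
N = 50          # primary turns (illustrative)
B_MAX = 1.2     # peak flux density, tesla (illustrative; ferrite cores run lower)

def core_area_cm2(freq_hz: float) -> float:
    area_m2 = V_RMS / (4.44 * freq_hz * N * B_MAX)
    return area_m2 * 1e4   # convert m^2 to cm^2

for f in (50, 1_000, 10_000, 100_000):
    print(f"{f:>7} Hz -> required core area ~ {core_area_cm2(f):8.2f} cm^2")
```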
Power supply designs:
The IGBTs in an inverter based machine are controlled by a microcontroller, so the electrical characteristics of the welding power can be changed by software in real time, even on a cycle by cycle basis, rather than making changes slowly over hundreds if not thousands of cycles. Typically, the controller software will implement features such as pulsing the welding current, providing variable ratios and current densities through a welding cycle, enabling swept or stepped variable frequencies, and providing timing as needed for implementing automatic spot-welding; all of these features would be prohibitively expensive to design into a transformer-based machine, but require only program memory space in a software-controlled inverter machine. Similarly, it is possible to add new features to a software-controlled inverter machine if needed through a software update, rather than by buying a more modern welder.
Power supply designs:
Other types
Additional types of welding power supplies exist besides those based on transformers, motor/generator sets, and inverters. For example, laser welders require an entirely different type of power supply design that does not fall into any of the categories discussed previously. Likewise, spot welders require a different type of welding power supply, typically containing elaborate timing circuits and large capacitor banks that are not commonly found with any other types of welding power supplies. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sideloading**
Sideloading:
Sideloading describes the process of transferring files between two local devices, in particular between a personal computer and a mobile device such as a mobile phone, smartphone, PDA, tablet, portable media player or e-reader.
Sideloading typically refers to media file transfer to a mobile device via USB, Bluetooth, WiFi or by writing to a memory card for insertion into the mobile device, but also applies to the transfer of apps from web sources that are not vendor-approved.
Sideloading:
When referring to Android apps, "sideloading" typically means installing an application package in APK format onto an Android device. Such packages are usually downloaded from websites other than the official app store Google Play. For Android users, sideloading of apps is only possible if the user has allowed "Unknown Sources" in their Security Settings.

When referring to iOS apps, "sideloading" means installing an app in IPA format onto an Apple device, usually through the use of a computer program such as Cydia Impactor or Xcode, on the actual device using a jailbreak method, or using a signing service, instead of through Apple's App Store. On modern versions of iOS, the sources of the apps must be trusted by both Apple and the user in "profiles and device management" in settings, except when using jailbreak methods of sideloading apps. Sideloading is not allowed by Apple except for internal testing and development of apps using the official SDKs.
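As a concrete illustration of the Android route, the sketch below drives the standard Android Debug Bridge (adb) tool from Python to install a local APK over USB. It assumes the Android platform-tools are on the PATH and USB debugging is enabled on the device; the APK filename is a hypothetical placeholder.

```python
# Minimal sketch: sideloading an APK with adb, driven from Python.
# Assumes platform-tools on PATH and USB debugging enabled on the device.
import subprocess

APK_PATH = "example-app.apk"   # hypothetical local package file

def sideload(apk_path: str) -> None:
    # List attached devices, then push and install the package.
    subprocess.run(["adb", "devices"], check=True)
    # -r replaces an existing installation while keeping its data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```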
Historical:
The term "sideload" was coined in the late 1990s by the online storage service i-drive as an alternative means of transferring and storing computer files virtually instead of physically. In 2000, i-drive applied for a trademark on the term. Rather than initiating a traditional file "download" from a website or FTP site to their computer, a user could perform a "sideload" and have the file transferred directly into their personal storage area on the service. Usage of this feature began to decline as newer hard drives became cheaper and their capacity grew into the gigabytes, and the trademark application was abandoned.
Historical:
The advent of portable MP3 players in the late 1990s brought sideloading to the masses, even if the term was not widely adopted. Users would download content to their PCs and sideload it to their players.
Today, sideloading is widespread and virtually every mobile device is capable of sideloading in one or more ways.
Advantages:
Sideloading has several advantages when compared with other ways of delivering content to mobile devices:
There are no wireless data charges. Sideloading delivery does not involve a wireless carrier.
Content can be optimized for each mobile device. As there are no mobile network restrictions, content can be tailored for each device. This is more important for video playback, where the lowest common denominator is often a limiting factor on wireless networks.
There are no geographic limitations on the delivery of content for sideloading as are implicit in the limited coverage of wireless networks.
There are no restrictions on what content can be sideloaded. Users may sideload video, e-books, or software which is restricted or banned in their country, including material expressing unpopular or illegal opinions and pornography.
The content is not streamed, and can be permanently stored in the mobile device. It can be listened to or watched at the user’s convenience.
Sideloading is an excellent mechanism for proximity marketing.
Content that is removed from an online store, e.g., for belatedly discovered licensing violations, can still be loaded to a mobile device.
Disadvantages:
Sideloading also has disadvantages:
Streaming media is sometimes preferred to downloading due to limited storage. Content providers limit content available to download and sideload due to their loss of control over it.
There are huge variations in performance capability for mobile devices that can make use of sideloading, from simple mobile phones with limited video playback, to high-end portable media players. Unless the audio/video file is encoded with the target device in mind, playback may not be possible.
Some wireless carriers (most notably Verizon Wireless) require that handset manufacturers limit the sideload capabilities of devices on their networks as a form of vendor lock-in. This usually results in the loss of USB and Bluetooth as sideload options (though memory card transfer is still available).
Methods:
USB sideloading
Sideloading over a USB connection was standardized by OMTP in late 2007. Until this time, mobile phone manufacturers had tended to adopt proprietary USB transfer solutions requiring the use of bundled or third party cables and software.
Methods:
Unless additional software is installed on the device, the PC, or both, transfers can usually only be initiated by the PC. Once connected, the device will appear in the PC's file explorer window as either a media player or an external hard drive. Files and folders on the device may be copied to the PC, and the PC may copy files and folders to the device.
Methods:
Transfer performance of USB sideloading varies greatly, depending on the USB version supported, and further still on the actual engineering implementation of the USB controller. USB is available in Low-Speed (1.5 Mbit/s, about 190 KB/s), Full-Speed (12 Mbit/s, 1.5 MB/s), and Hi-Speed levels, with Hi-Speed USB transferring up to 480 Mbit/s (60 MB/s). However, the majority of mobile phones at the time of writing support only Full-Speed USB. Of the mobile products supporting USB 2.0 Hi-Speed, the actual sideloading performance usually ranges from 1 to 5 MB/s. However, the popular BlackBerry mobile phones by RIM and the iPods by Apple stand out with higher measured speeds of roughly 15.7 MB/s and 9.6 MB/s, respectively.
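These rates translate directly into sideloading times. A small sketch using the figures quoted above (the 700 MB file size is an arbitrary example, and the 3 MB/s entry is simply a mid-point of the 1-5 MB/s range):

```python
# Rough sideload-time estimates from the transfer rates quoted above.
FILE_MB = 700   # arbitrary example file size

rates_mb_per_s = {
    "Full-Speed USB (12 Mbit/s nominal)": 1.5,
    "Typical Hi-Speed phone (observed)": 3.0,
    "BlackBerry (observed)": 15.7,
    "iPod (observed)": 9.6,
}

for device, rate in rates_mb_per_s.items():
    minutes = FILE_MB / rate / 60
    print(f"{device}: ~{minutes:.1f} min for a {FILE_MB} MB file")
```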
Methods:
Bluetooth sideloading
Bluetooth's OBEX/OPP profiles allow for file transfer between a PC and a mobile device. Using this option is slightly more complicated than using a USB connection as the two devices have to be paired first. Also, unlike the familiar drag and drop that is usually available via USB, Bluetooth implementation is specific to the Bluetooth transceiver and drivers being used. Files that are sideloaded to mobile devices via Bluetooth are often received as messages, in the same way that SMS texts would be received. While these files can be saved to any storage medium, their initial location is the handset's internal memory. As such, the limitations of the internal memory have to be taken into account before beginning the sideload.
Methods:
Memory card sideloading
Sideloading via a memory card requires that the user have access to a memory card writer. Audio and video files can be written directly to the memory card and then inserted into the mobile device. This is potentially the quickest way of sideloading several files at once, as long as the user knows where to put the media files. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Skin infections and wrestling**
Skin infections and wrestling:
Skin infections are a significant concern in wrestling. Breaks in the skin are easily invaded by bacteria or fungi, and wrestling involves constant physical contact that can transmit viral, bacterial, and fungal pathogens. These infections can also be spread through indirect contact, for example, from the skin flora of an infected individual to a wrestling mat, and then to another wrestler. According to the National Collegiate Athletic Association's (NCAA) Injury Surveillance System, ten percent of all time-loss injuries in wrestling are due to skin infections.
Common forms of infection:
Bacterial infections make up the largest category of wrestling-related skin infections and include furuncles, carbuncles, folliculitis, impetigo, cellulitis or erysipelas, and staphylococcal disease. These range in severity, but most are quickly identified by irritated and blotchy patches of skin. Of all skin infections, bacterial infections are typically the easiest to treat, using a prescribed antibacterial lotion or cream.
Common forms of infection:
Molluscum contagiosum is caused by a DNA pox virus called the molluscum contagiosum virus. For adults, molluscum infections are often sexually transmitted, but in wrestling the virus is spread either through direct contact or through contact with shared items such as gear or towels. Molluscum contagiosum can be identified by pink bulbous growths that contain the virus. These typically grow to be 1–5 millimeters in diameter and last from 6 to 12 months without treatment and without leaving scars; some growths may remain for up to 4 years. Treatment for molluscum contagiosum must be directed by a healthcare professional because the treatments themselves can be dangerous. Usually liquid nitrogen is used to freeze the growths off, but other methods include creams that burn the growths off, or oral medications.

The herpes simplex virus comes in two different strains, though only one is commonly spread among wrestlers. Type 1 (HSV-1) can be transmitted through contact with an infected individual and is usually associated with sores on the lips, mouth, and face. HSV-1 can also cause infection of the eye, or even infection of the lining of the brain, known as meningoencephalitis. The lesions will heal on their own in 7 to 10 days, unless the infected individual has a condition that weakens the immune system. Once an infection occurs, the virus will spread to nerve cells, where it remains for the rest of the person's life. Occasionally, the virus will suddenly display recurring symptoms, or flares. There is no cure for herpes simplex 1, but there is prescription medication to ease and relieve the symptoms of the virus. Antiviral oral medication and topical medication can be prescribed to relieve the pain and soreness of the herpes virus.

Verrucae are small skin lesions found on the bottom surface of the foot. They vary in size, from one centimeter in diameter upwards. Verrucae are caused by the human papilloma virus, which is common in all environments and often attacks the skin. The color of the lesion is usually paler than the normal tone of the skin, and it is surrounded by a thick layer of calloused skin. Depending on the development of the verruca, the surface may show signs of blood vessels, which feed the infection.
Common forms of infection:
Tinea infections, more commonly known as ringworm, are the most common skin infections transmitted through wrestling. They are caused by parasitic fungi that survive on keratin, an organic material found in skin, hair, and nails. There are several varieties of tinea, classified by location: tinea corporis on the body, tinea cruris (jock itch) on the groin, tinea capitis on the scalp, and tinea pedis (athlete's foot) on the foot. Although they are not harmful, they are highly contagious and difficult to treat. The symptoms of ringworm include patches of skin that are red, swollen, and irritated, forming the shape of a ring. With treatment, ringworm lasts between two and four weeks. Tinea infections can be combated orally or topically with a number of different medications. Topical treatments include Mentax 1%, Lamisil 1%, Naftin 1%, and Spectazole; these creams should be applied twice a day until the infection is gone. Oral treatments for tinea include Lamisil, Sporanox, and Diflucan.
Rules:
At the start of each wrestling meet, trained referees examine the skin of all wrestlers before any participation. During this examination, male wrestlers are to wear shorts; female wrestlers are to wear only shorts and a sports bra. Open wounds and infectious skin conditions that cannot be adequately protected are grounds for disqualification from both practice and competition. Adequate protection essentially means that the skin condition has been deemed non-infectious and adequately medicated, and is covered with a tight wrapping and proper ointment. In addition, the wrestler must have developed no new lesions in the 72 hours before the examination. Wrestlers who are undergoing treatment for a communicable skin disease at the time of the meet or tournament shall provide written documentation to that effect from a physician, including the wrestler's diagnosis, culture results (if available), the date and time therapy began, and the exact names of the medications used for treatment. These measures are not always successful, and infections are sometimes spread regardless.
Prevention:
According to the NCAA Wrestling Rules and Interpretations and the NFHS Sports Medicine Advisory Committee guidelines used by high schools throughout the United States, infection control measures (measures that seek to prevent the spread of disease) should be used to reduce the risks of disease transmission. Efforts should be made to improve wrestler hygiene practices, to follow recommended procedures for cleaning and disinfecting surfaces, and to handle blood and other bodily fluids appropriately. Suggested measures include: promotion of hand hygiene practices; educating athletes not to pick, squeeze, or scratch skin lesions; encouraging athletes to shower after activity; educating athletes not to share protective gear, towels, razors, or water bottles; ensuring recommended procedures for cleaning and disinfecting wrestling mats, all athletic equipment, locker rooms, and whirlpool tubs are closely followed; and verifying clean-up of blood and other potentially infectious materials.
Prevention:
More ways of prevention include wearing long-sleeved shirts and sweatpants to limit the amount of skin-to-skin contact. A wrestler should also not share equipment with teammates. Body wipes are also in common use. Coaches must also enforce the disinfecting and sanitary cleaning of the wrestling mats and other practice areas; this can greatly limit the spread of skin infections that infect an individual indirectly. One high school wrestling coach from Southern California described his methods of prevention in three procedures: "Keep the mats [clean] ... you've got to bleach and mop them every day before practice. Along the same lines, gear should also be washed regularly, especially headgear... Most importantly, the wrestlers need to shower immediately after practices. If one kid doesn't, and he gets [infected], it can spread to everyone else on the team within a week. I've had it happen before, to the point where some schools won't allow any of our guys to wrestle in a meet. When this happens, it's a huge blow to the school's record and reputation, and we are less likely to be invited to exclusive tournaments in the coming year."
Treatment:
For every form of contagious infection, there is a readily available medication that can be purchased at any pharmacy. It is a commonly held belief among wrestlers, however, that these ointments do not treat symptoms. Wrestlers who do not want to report an infection to their coach sometimes resort to unusual and unhealthy treatments. Among these "home remedies" are nail polish remover, bleach, salt, and vinegar solutions, which are used to either suffocate or burn the infection, often leaving extensive scars. These remedies, while sometimes apparently successful, are not guaranteed to actually kill the infection and often only eliminate visible symptoms temporarily. Even though the infection may no longer be symptomatic, it can still be easily transmitted to other individuals. For this reason, it is recommended that wrestlers treating skin infections use conventional medicine, as prescribed by a physician.
Significant outbreaks:
HSV-1 (July 1989) – An outbreak of Herpes Simplex was reported at a four-week high school wrestling camp in Minneapolis, which was attended by wrestlers from 26 states and 1 Canadian province. According to a report on the outbreak: “Wrestlers wore jerseys during practice sessions, but the use of headgear was optional. Wrestling mats were mopped twice each day with disinfectant. Epidemiologic and clinical data were collected during the final two days of the camp after officials alerted the Minnesota Department of Health, which, in turn, alerted the Centers for Disease Control. Results from 171 wrestlers (of 175 attendees) showed that 35 percent (60 boys) met the case definition for HSV-1 infection.” | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Preoccipital notch**
Preoccipital notch:
About 5 centimetres (2.0 in) in front of the occipital pole of the human brain, on the infero-lateral border, is an indentation or notch, named the preoccipital notch. It is considered a landmark because the occipital lobe is located just behind the line that connects the notch with the parieto-occipital sulcus. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Passive treatment system**
Passive treatment system:
A passive treatment system is a method for removing metals from acid mine drainage. There are several types of passive treatment systems, each of which may be used on its own or in combination to treat effluents. The type of system selected depends upon the chemistry of the acid mine drainage and the flow of the discharge, as well as relevant regulations. Passive treatment systems do not require power and are less expensive than active treatment systems. They also require less maintenance, which is an advantage in remote locations.
Types of passive treatment systems:
There are many types of water treatment systems available for removing metals from acid mine drainage. Passive treatment systems are a relatively recent technology that uses sulfate-reducing bacteria, limestone, or both to neutralize acidity and precipitate metals; these systems are sometimes called "wetlands" or "bioreactors." Passive treatment systems differ from active systems (water treatment plants), which commonly use power, use more hazardous chemicals such as hydrated lime, caustic soda, or ammonia, and are more expensive. Passive treatment systems are preferred for sites managed by the Bureau of Land Management (BLM). Passive treatment systems provide a controlled environment in which the natural chemical and biological reactions that help treat acid mine drainage can occur. There are several types of passive treatment systems; each type may be used on its own, or more than one may be used in sequence to optimize treatment of difficult effluents. However, the design selected will ultimately depend upon site characteristics and other specific criteria.
Types of passive treatment systems:
Aerobic wetlands Aerobic wetlands are shallow ponds (1–3 feet deep); they may be lined or unlined, and some are nearly filled with soil or limestone gravel. Such wetlands facilitate natural oxidation of the metals and precipitate iron, manganese, and other metals.
Types of passive treatment systems:
Anaerobic wetlands Anaerobic wetlands are used to neutralize acidity and reduce metals to the sulfide form; the reduction reaction consumes H+ and therefore acidity. They may be lined or unlined shallow ponds filled with organic matter, such as compost, and underlain by limestone gravel. Water percolates through the compost, becomes anaerobic, and metals precipitate as sulfides. Microorganisms facilitate this reaction by first consuming the available oxygen; alkalinity and H2S are produced. If the system is improperly sized, if flow dries up, or if extended low temperatures are encountered, the microorganisms will die and performance will decline. Some anaerobic wetlands discharge a sulfide "sewage" effluent, particularly during the first few years.
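In outline, the chemistry is as follows (a simplified textbook sketch rather than equations from the source; CH2O stands for generic organic matter and M2+ for a dissolved divalent metal such as iron or zinc):

```latex
% Bacterial sulfate reduction: consumes acidity, produces alkalinity (HCO3^-) and H2S
\mathrm{SO_4^{2-} + 2\,CH_2O \;\longrightarrow\; H_2S + 2\,HCO_3^{-}}
% Dissolved metals then precipitate as insoluble sulfides
\mathrm{M^{2+} + S^{2-} \;\longrightarrow\; MS\downarrow}
```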
Types of passive treatment systems:
Anoxic limestone drains Anoxic limestone drains consist of a buried limestone gravel system that requires water free of oxygen and aluminum. If oxygen or aluminum is present, iron and aluminum hydroxides clog the system, causing failure. Alkalinity-producing systems combine an anaerobic wetland with an anoxic limestone drain.
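The neutralization step in a limestone drain can be summarized by the familiar carbonate dissolution reaction (again a generic sketch; the same reaction underlies the limestone configurations described below):

```latex
% Limestone dissolves in acidic water, consuming H^+ and adding alkalinity
\mathrm{CaCO_3 + H^{+} \;\longrightarrow\; Ca^{2+} + HCO_3^{-}}
```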
Types of passive treatment systems:
Other types Other types of passive treatment systems include various limestone treatment configurations, ranging from limestone ponds to open limestone channels in which water flows down a steep slope over limestone riprap. These systems oxidize and precipitate metals and add alkalinity to the water. Another passive treatment system uses lime dispensing technology to neutralize acidity and precipitate metals in a settling pond. These units do not require power or hazardous chemicals and are inexpensive. BLM is currently conducting pilot tests on the Aquafix technology.
Advantages:
Passive treatment systems are a valuable option for treating acid mine drainage at remote locations. The advantages of passive treatment systems are that they do not require electrical power; do not require any mechanical equipment, hazardous chemicals, or buildings; do not require daily operation and maintenance; are more natural and aesthetic in appearance and may support plants and wildlife; and are less expensive than active alternatives.
Disadvantages:
There are disadvantages with any water treatment system. The disadvantages of passive treatment systems are that they may require complex discharge permits unless the work is performed as a Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) action; may not meet stringent water-quality-based effluent standards; may fail because of poor design or severe winter conditions; and are a relatively new technology and an area of active research. For these reasons, there have been failures along with success stories.
Maintenance:
All of the passive treatment systems described will accumulate metal precipitates and will eventually have to be replaced. Research indicates that these systems can be expected to perform for 20 years. The precipitate is not normally a hazardous waste. Nonetheless, regular monitoring, inspection, and maintenance are required, although to a much lesser extent than with active water treatment systems. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Teichmüller cocycle**
Teichmüller cocycle:
In mathematics, the Teichmüller cocycle is a certain 3-cocycle associated to a simple algebra A over a field L which is a finite Galois extension of a field K and which has the property that any automorphism of L over K extends to an automorphism of A. The Teichmüller cocycle, or rather its cohomology class, is the obstruction to the algebra A coming from a simple algebra over K. It was introduced by Teichmüller (1940) and named by Eilenberg and MacLane (1948).
Properties:
If K is a finite normal extension of the global field k, then the Galois cohomology group H^3(Gal(K/k), K^*) is cyclic and generated by the Teichmüller cocycle. Its order is n/m, where n is the degree of the extension K/k and m is the least common multiple of all the local degrees (Artin & Tate 2009, p. 68). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
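In symbols, the statement above reads as follows (a sketch of the same claim; the local-degree notation $[K_v : k_v]$ for the completions at the places $v$ is standard usage assumed here, not spelled out in the source):

```latex
H^3\bigl(\operatorname{Gal}(K/k),\, K^{\times}\bigr) \;\cong\; \mathbb{Z}/(n/m)\mathbb{Z},
\qquad n = [K:k], \qquad m = \operatorname{lcm}_{v}\,[K_v : k_v],
```

with the class of the Teichmüller cocycle as a generator.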
**The Emotion Machine**
The Emotion Machine:
The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind is a 2006 book by cognitive scientist Marvin Minsky that elaborates and expands on Minsky's ideas as presented in his earlier book Society of Mind.
The Emotion Machine:
Minsky argues that emotions are different ways to think that our mind uses to increase our intelligence, and he challenges the distinction between emotions and other kinds of thinking. His main argument is that emotions are "ways to think" suited to the different "problem types" that exist in the world, and that the brain has rule-based mechanisms (selectors) that turn on emotions to deal with various problems. The book also reviews the accomplishments of AI, why modelling an AI that replicates human behavior is difficult, whether and how AIs think, and in what manner they might experience struggles and pleasures.
Reviews:
In a review for The Washington Post, neurologist Richard Restak states that: Minsky does a marvelous job parsing other complicated mental activities into simpler elements. ... But he is less effective in relating these emotional functions to what's going on in the brain.
Outline:
Minsky outlines the book as follows: "We are born with many mental resources." "We learn from interacting with others." "Emotions are different Ways to Think." "We learn to think about our recent thoughts." "We learn to think on multiple levels." "We accumulate huge stores of commonsense knowledge." "We switch among different Ways to Think." "We find multiple ways to represent things." "We build multiple models of ourselves."
Author's pre-publication draft:
Introduction. Chapter 1. Falling in Love. Chapter 2. Attachments and Goals. Chapter 3. From Pain to Suffering. Chapter 4. Consciousness. Chapter 5. Levels of Mental Activities. Chapter 6. Common Sense. Chapter 7. Thinking. Chapter 8. Resourcefulness. Chapter 9. The Self. Bibliography. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Retired husband syndrome**
Retired husband syndrome:
Retired husband syndrome (主人在宅ストレス症候群, Shujin Zaitaku Sutoresu Shoukougun, literally "One's Husband Being at Home Stress Syndrome") (RHS) is a psychosomatic stress-related illness recognized in Japanese culture which has been estimated to occur in 60% of Japan's older female population. It is claimed to be a condition where a woman begins to exhibit signs of physical illness and depression as her husband reaches, or approaches, retirement.
Common symptoms:
The following are some of the common symptoms of RHS: depression, skin rash, asthma, ulcers, and high blood pressure.
Theorized reason for RHS:
This syndrome was identified, and the term coined, by Nobuo Kurokawa, and it first appeared in a presentation of his to the Japanese Society of Psychosomatic Medicine in 1991. Kurokawa has theorized that RHS is a result of the fact that many Japanese citizens now reaching retirement age (60) are members of Japan's baby boomer generation. Members of this generation were expected to meet certain social requirements: the man was to be the breadwinner and work to support his family, and the woman was to be not only a homemaker but also to show a level of adoration for her salaryman husband in return for the money he brought in, which she used to look after their children and socialize with her friends.

Because a salaryman's career can demand long hours away from home, both working and socializing with other salarymen and their bosses as expected, a husband may leave home in the early hours of the morning and return late at night. A husband and wife may therefore not interact extensively, and when the husband retires both members of the couple can feel they are living with a virtual stranger. This can be a particularly stressful experience for the woman who, as society dictated in her youth, is expected to attend to her husband's every need and can find this a very large demand indeed. The stress this change in lifestyle brings can lead not only to the symptoms listed above, but also to resentment toward her husband.

Some couples have been known to separate over RHS; however, divorce is uncommon, as it is not considered an acceptable option for that generation of Japanese. Also, an ex-wife currently has no right to a portion of her husband's pension should they divorce, and may therefore be unable to survive financially (though this was set to change in 2007). Despite this, the divorce rate among older Japanese couples has soared in recent decades as more of the baby boomer population has retired, increasing by 26.5% in 10 years according to the health ministry. The number of divorces among couples married for 20 years or more hit 42,000 in 2004, double the number recorded in 1985, and divorces among those married for more than 30 years quadrupled during the same period. In 2006, these figures were projected to rise further, as more Japanese people were expected to retire in the subsequent five years than at any other point in Japanese history.

Some women deal with RHS by focusing their energy on obsessions, such as collecting teddy bears or following a celebrity, which they say can help them psychologically. They may also ask their husbands to stay on at work past retirement age. Many wives do not tell their husbands what is happening, which can worsen the stress, as their husbands may not understand or even realize their wives have RHS.
Research:
Marco Bertoni and Giorgio Brunello of the University of Padova published a discussion paper in July 2014 based on empirical research in Japan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scene and sequel**
Scene and sequel:
Scene and sequel are two types of written passages used by authors to advance the plot of a story. Scenes propel a story forward as the character attempts to achieve a goal. Sequels provide an opportunity for the character to react to the scene, analyze the new situation, and decide upon the next course of action.
Scene:
The concept of a scene in written fiction has evolved over many years. Dwight V. Swain, in Techniques of the Selling Writer (1965), defined a scene as a unit of conflict: an account of an effort to attain a goal despite opposition. According to Swain, the functions of a scene are to provide interest and to move the story forward. The structure of a scene, as described by Swain, is (1) goal, (2) conflict, (3) disaster. In The Art of Fiction (1983), John Gardner described a scene as having an unbroken flow of action without a lapse of time or a leap from one setting to another. Over the years, other authors have attempted to improve on the definition of scene and to explain its use and structure.
Sequel:
In addition to defining a scene, Swain described a sequel as a unit of transition that links two scenes, adding that a sequel functions to translate disaster into a goal, telescope reality, and control tempo. Swain also described the structure of a sequel as (1) reaction, (2) dilemma, and (3) decision. Other authors have attempted to improve on the definition of a sequel and to explain its use and structure.
Proactive vs. reactive:
Rather than viewing scenes and sequels as distinct types of passages, some authors express the concept as two types of scenes: proactive and reactive.
Scenes and sequels:
Swain defined, described, and explained scene and sequel as if they were separate entities, but then he explained that they must complement each other, linking together smoothly into a story. He went on to observe that "an author controls pacing by the way he proportions scene to sequel."
Flexibility is important; the technique should not be applied mechanically.
The peaks and valleys in a diagram of a story correspond to scenes and sequels.
Structural units of fiction:
All fiction is built from a hierarchy of structural units.
A chapter is a segment of writing delineated by a form of punctuation called a chapter break. Prologue and epilogue are two specialized types of chapters.
A chapter may include one or more sections, passages separated by another form of punctuation called a section break.
Scenes and sequels are specialized passages of writing. A scene is a passage of writing in which the character attempts to achieve a goal. A sequel is a passage of writing in which the character reacts reflectively to the previous scene.
Some novels, especially long ones, may be further divided into books or parts, each including two or more chapters.
The smallest units of writing are words, phrases, clauses, sentences, and paragraphs.
Two or more paragraphs with some common purpose are referred to as passages or segments of writing.
Types of passages:
Passages of writing may be classified into four groups: (1) scenes, (2) sequels, (3) passages that are neither scenes nor sequels, and (4) passages that include elements of both scenes and sequels. Examples of passages that are neither scenes nor sequels include fragments of scenes or sequels and passages of narration, description, or exposition. An example of a passage that includes elements of both scenes and sequels is the problem-solving passage, common in mystery and detective stories.
Types of scenes:
Scenes may be classified by their position within the story (such as an opening scene or a climax scene). A scene may be classified by the fiction-writing mode that dominates its presentation (as in an action scene or a dialogue scene). Some scenes have specialized roles (such as flashback scenes and flashforward scenes).
The Anatomy of a Scene:
Before a writer crafts a scene, they must know its purpose as it relates to the story, because each scene must move the plot forward. If nothing new happens, and the character has not been changed, then the scene is not effective. Each scene should be a response to the one that came before it: something happens that makes the character react or change, physically, emotionally, or both, and then the character must decide what to do next. The previous scene's ending triggers the next scene's beginning. Just like the whole story, each scene has a beginning, middle, and end. Much like the start of any story, each scene's beginning must hook the reader. The middle cannot lag; tension or conflict must rise. It need not be action-packed: perhaps there is unspoken tension between characters, internal conflict for the protagonist, or new information discovered. The scene ends with the character processing what just happened, and their response (a reaction, a decision) sets up the beginning of the next scene. Each scene starts with an action, tension rises, and it ends with a reaction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**DCF Interframe Space**
DCF Interframe Space:
The IEEE 802.11 family of standards describes the Distributed Coordination Function (DCF) protocol, which controls access to the physical medium. A station must sense the status of the wireless medium before transmitting. If it finds that the medium is continuously idle for the DCF Interframe Space (DIFS) duration, it is permitted to transmit a frame. If the channel is found busy during the DIFS interval, the station must defer its transmission.
DCF Interframe Space:
The DIFS duration can be calculated by the following method:
DIFS = SIFS + (2 * Slot time). IEEE 802.11g is backward compatible with IEEE 802.11b; when both kinds of device are associated with the same AP, all of the timing parameters change to the 802.11b values. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
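As a concrete check of this formula, here is a minimal Python sketch; the per-PHY SIFS and slot-time constants below are the commonly published 802.11 values, and the function and table names are illustrative, not from any standard API.

```python
# DIFS = SIFS + 2 * slot_time (IEEE 802.11 DCF timing rule).
# Timing constants in microseconds; dictionary and function names are illustrative.
PHY_TIMING_US = {
    "802.11b":            {"sifs": 10, "slot": 20},
    "802.11a":            {"sifs": 16, "slot": 9},
    "802.11g short slot": {"sifs": 10, "slot": 9},
    "802.11g long slot":  {"sifs": 10, "slot": 20},  # used when 802.11b stations are associated
}

def difs_us(phy: str) -> int:
    """Return the DIFS duration in microseconds for the given PHY."""
    timing = PHY_TIMING_US[phy]
    return timing["sifs"] + 2 * timing["slot"]

for phy in PHY_TIMING_US:
    print(f"{phy}: DIFS = {difs_us(phy)} microseconds")
# Output: 802.11b -> 50, 802.11a -> 34, 802.11g short slot -> 28, 802.11g long slot -> 50
```

Note how the 802.11g "long slot" row reproduces the backward-compatibility behavior described above: with 802.11b stations present, the slot time reverts to 20 microseconds and DIFS grows from 28 to 50 microseconds.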