**PlayStation Mouse**
PlayStation Mouse:
The PlayStation Mouse (US/UK: SCPH-1090, JP: SCPH-1030) is an input device for the PlayStation that allows the player to use a mouse as a method of control in compatible games. The mouse was released in Japan on December 3, 1994, the launch date of the PlayStation. The mouse itself is a simple two-button ball mouse that plugs directly into the PlayStation controller port without adapters or conversions and is a fully supported Sony accessory. It was packaged along with a mouse mat bearing the PlayStation logo.
PlayStation Mouse:
The mouse is mainly used in point-and-click adventures, strategy games, simulation games and visual novels. In later years, first-person shooters also made use of the peripheral to aim the player's view in the same manner as similar games on the PC. It is also used by the arcade light-gun shooting game Area 51 as an aiming device in place of a light gun. A special Konami-branded edition of the mouse was released alongside the Japanese-exclusive title Tokimeki Memorial: Forever With You. Mouse packs for Disney's Winnie the Pooh Kindergarten and Disney's Winnie the Pooh Preschool were also released exclusively in Japan.
**Blood brother**
Blood brother:
Blood brother can refer to two or more men not related by birth who have sworn loyalty to each other. This is in modern times usually done in a ceremony, known as a blood oath, where each person makes a small cut, usually on a finger, hand or the forearm, and then the two cuts are pressed together and bound, the idea being that each person's blood now flows in the other participant's veins. The act carries a risk due to blood-borne diseases. The process usually provides a participant with a heightened symbolic sense of attachment to the other participant.
Cultures:
Scandinavia and Germanic Europe: Among the Norsemen, entering the pact of foster brotherhood (Icelandic: Fóstbræðralag) involved a rite in which the participants let their blood flow while they ducked underneath an arch formed by a strip of turf propped up by a spear or spears. An example is described in Gísla saga. In Fóstbræðra saga, the bond of Thorgeir Havarsson (Þorgeir Hávarsson) and Thormod Bersason (Þormóð Bersason) is sealed by such a ritual as well, the ritual being called a leikr. Örvar-Oddr's saga contains another notable account of blood brotherhood. Örvar-Oddr, after fighting the renowned Swedish warrior Hjalmar to a draw, entered a foster brotherhood with him by the turf-raising ritual. Afterwards, the strand of turf was put back during oaths and incantations. In the mythology of Northern Europe, Gunther and Högni became the blood brothers of Sigurd when he married their sister Gudrun. In Wagner's opera Götterdämmerung, the concluding part of his Ring Cycle, the same occurs between Gunther and Wagner's version of Sigurd, Siegfried, which is marked by the "Blood Brotherhood Leitmotiv". Additionally, it is briefly stated in Lokasenna that Odin and Loki are blood brothers.
Cultures:
Scythia: Among the Scythians, the covenantors would allow their blood to drip into a cup; the blood was subsequently mixed with wine and drunk by both participants. Every man was limited to having three blood brotherhoods at any time, lest his loyalties be distrusted. As a consequence, blood brotherhood was highly sought after and often preceded by a lengthy period of affiliation and friendship (Lucian, Toxaris). The 4th-century BC depictions of two Scythian warriors drinking from a single drinking horn (most notably in a gold appliqué from Kul-Oba) have been associated with the Scythian oath of blood brotherhood. The Hungarian hajduks had a similar ceremony, but the wine was often replaced with milk so that the blood would be more visible.
Cultures:
East Asia: In Asian cultures, the act and ceremony of becoming blood brothers is generally seen as a tribal relationship for bringing about an alliance between tribes. It was practiced for that reason most notably by the Mongols, Turkic peoples and early Chinese. In Romance of the Three Kingdoms, the Chinese literary classic, the three main characters took an oath of blood brotherhood, the Oath of the Peach Garden, by sacrificing a black ox and a white horse and by swearing faith. Other blood oaths involving animal sacrifice were characteristic of rebel groups, such as the uprising led by Deng Maoqi in the 1440s, of criminal organizations, such as the triads or the pirates of Lin Daoqian, and of non-Han ethnic minorities such as the Mongols and the Manchu. Genghis Khan had an anda called Jamukha. The term also exists in Old Turkic: ant ičmek ("to take an oath"), derived from the "ancient test by poison". The Turkic term, if it is not a loanword in Middle Mongol, is related to Mongol anda.
Cultures:
Philippines: In the Philippines, blood compacts (sandugo or sanduguan, literally "one blood") were ancient rituals intended to seal a friendship or treaty or to validate an agreement. They were described in the records of the early Spanish and Portuguese explorers to the islands. The best-known version of the ritual, from the Visayan people, involves mixing a drop of blood from both parties into a single cup of wine that is then drunk. Other versions also exist; one from Palawan describes a ritual involving making a cut on the chest and then daubing the blood on the tongue and forehead.
Cultures:
Sub-Saharan Africa: The blood oath was used in much the same fashion as described above in much of Sub-Saharan Africa. The British colonial administrator Lord Lugard is famous for having become blood brothers with numerous African chiefs as part of his political policy in Africa. A powerful blood brother was the Kikuyu chieftain Waiyaki Wa Hinga. David Livingstone wrote of a similar practice called 'Kasendi'.
Cultures:
Southeastern Europe: Blood brotherhood among larger groups was common in ancient Southeastern Europe, where, for example, whole companies of soldiers would become one family through the ceremony. It was perhaps most prevalent in the Balkans during the Ottoman era, as it helped the oppressed people fight the enemy more effectively. Blood brotherhoods were common in what is today Albania, Bosnia and Herzegovina, Bulgaria, Croatia, Greece, Montenegro, Serbia and North Macedonia. Christianity also recognized sworn brotherhood in a ceremony, known as adelphopoiesis (Greek) or pobratimstvo (Slavic languages) in the Eastern Orthodox Churches and as ordo ad fratres faciendum (Latin) in the Catholic Church. The tradition of intertwining arms and drinking wine is also believed to be a representation of becoming blood brothers.
Famous blood brothers:
Historical: In the 9th century AD, chiefs of the seven Hungarian tribes formed an alliance by drinking each other's blood, and chose Álmos as leader.
In 1066, Robert d'Ouilly and Roger d'Ivry, two Norman knights taking part in the Norman Conquest of England were known as blood brothers. It was said they had agreed beforehand to share profits of this adventure. Both survived the Battle of Hastings, were granted lands in Oxfordshire and elsewhere, then worked together on various projects such as Wallingford Castle.
In the 12th century AD, the Mongol leaders Yesükhei (father of Temüjin) and Toghrul (later ally of Temüjin) were blood brothers.
Temüjin (Genghis Khan) and Jamukha were childhood friends and blood brothers, although Jamukha later betrayed Temüjin. Jamukha refused reconciliation and thus was executed on Temüjin's orders.
In medieval Serbia, Miloš Obilić was accompanied by his two blood brothers, Ivan Kosančić and Milan Topličanin, prior to the Battle of Kosovo.
In the 18th century AD, emissaries of British King George III and leaders of the Jamaican Maroons reportedly drank each other's blood when conducting peace treaties.
Blood brothers in the Serbian Revolution (1804–17): rebel leader Karađorđe (1762–1817) and commander Milutin Savić (1762–1842); Karađorđe and Greek volunteer Giorgakis Olympios (1772–1821); commander Hajduk-Veljko (1780–1813) and Giorgakis Olympios; commanders Stojan Čupić (1765–1815) and Bakal-Milosav; commanders Cincar-Janko (1779–1833), Miloš Pocerac (1776–1811) and Anta Bogićević (1758–1813).
Blood brothers in the later Principality of Serbia: Prince Milan Obrenović (1854–1901) and Milan Piroćanac (1837–1897); Aćim Čumić (1836–1901) and Kosta Protić (1831–1892); Đura Jakšić (1832–1878) and Stevan Vladislav Kaćanski (1829–1890).
In the Greek War of Independence (1821–30), Greek Nikolaos Kriezotis and Montenegrin Vaso Brajević were said to be blood brothers.
Samoan wrestler "High Chief" Peter Maivia was considered a blood brother of Amituanai Anoa'i, father of fellow wrestlers Afa and Sika Anoa'i, renowned as the Wild Samoans. From that time onwards, the Anoa'i family has regarded the Maivia line as an extension of their own clan.
Folklore: The Norse gods Loki and Odin are famously stated in Lokasenna to have mixed blood in days of old. This has been taken as an explanation of why Loki is tolerated by the gods at all.
Famous blood brothers:
Liu Bei, Guan Yu and Zhang Fei. In the historical novel Romance of the Three Kingdoms by Luo Guanzhong, these three men swore in their famous Oath of the Peach Garden that despite not being born on the same day, their sworn brotherhood would end with them dying on the same day. Histories only mention that the three men were "close like brothers".
Famous blood brothers:
In the Chinese tale Journey to the West, Sun Wukong (the Monkey King) became blood brothers with Niu Mowang (the Bull Demon King), but this bond was later forgotten after a conflict involving the Bull Demon King's son caused further problems for Wukong.
In Serbian epic poetry, there are several blood brotherhoods: Miloš Obilić with Milan Toplica and Ivan Kosančić; Miloš Obilić with Prince Marko; Miloš Obilić with the Jugović brothers; and Despot Vuk Grgurević with Dmitar Jakšić.
Literature: Winnetou and Old Shatterhand in the works of Karl May.
The characters Edward Lyons and Mickey Johnstone in Willy Russell's Blood Brothers.
**Li's criterion**
Li's criterion:
In number theory, Li's criterion is a particular statement about the positivity of a certain sequence that is equivalent to the Riemann hypothesis. The criterion is named after Xian-Jin Li, who presented it in 1997. In 1999, Enrico Bombieri and Jeffrey C. Lagarias provided a generalization, showing that Li's positivity condition applies to any collection of points lying on the line Re(s) = 1/2.
Definition:
The Riemann ξ function is given by
$$\xi(s) = \tfrac{1}{2}\, s(s-1)\, \pi^{-s/2}\, \Gamma\!\left(\tfrac{s}{2}\right) \zeta(s),$$
where ζ is the Riemann zeta function. Consider the sequence
$$\lambda_n = \frac{1}{(n-1)!} \left. \frac{d^n}{ds^n} \left[ s^{n-1} \log \xi(s) \right] \right|_{s=1}.$$
Definition:
Li's criterion is then the statement that the Riemann hypothesis is equivalent to the condition that $\lambda_n > 0$ for every positive integer n. The numbers $\lambda_n$ (sometimes defined with a slightly different normalization) are called Keiper–Li coefficients or Li coefficients. They may also be expressed in terms of the non-trivial zeros of the Riemann zeta function:
$$\lambda_n = \sum_\rho \left[ 1 - \left( 1 - \frac{1}{\rho} \right)^{\!n} \right],$$
where the sum extends over ρ, the non-trivial zeros of the zeta function. This conditionally convergent sum should be understood in the sense that is usually used in number theory, namely, that
$$\lambda_n = \lim_{N \to \infty} \sum_{|\operatorname{Im}(\rho)| \le N} \left[ 1 - \left( 1 - \frac{1}{\rho} \right)^{\!n} \right].$$
Definition:
(Re(s) and Im(s) denote the real and imaginary parts of s, respectively.) The positivity of $\lambda_n$ has been verified by direct computation for n up to $10^5$.
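The sum over zeros invites a direct numerical experiment. The following Python sketch (not part of the original article) approximates $\lambda_n$ by truncating the sum to the first zeros above the real axis, pairing each zero with its complex conjugate; it relies on the mpmath library, and the truncation bound is an arbitrary illustrative choice. Since the sum converges conditionally and slowly, the output is only a rough positivity check, not an accurate value of $\lambda_n$.

```python
# Hedged numerical sketch: approximate the Li coefficients lambda_n by
# truncating the conditionally convergent sum over non-trivial zeros.
# mpmath's zetazero(k) returns the k-th zero 1/2 + i*t_k with t_k > 0.
from mpmath import mp, zetazero

mp.dps = 30  # working precision in decimal digits

def li_coefficient(n, num_zeros=100):
    """Truncated lambda_n = sum over rho of [1 - (1 - 1/rho)^n].

    Each zero rho with Im(rho) > 0 is paired with its conjugate,
    so each pair contributes twice the real part of its term.
    """
    total = mp.mpf(0)
    for k in range(1, num_zeros + 1):
        rho = zetazero(k)
        term = 1 - (1 - 1/rho)**n
        total += 2 * term.real
    return total

# Li's criterion predicts positivity; these truncated values should come out
# positive for small n (they are only approximations to the true lambda_n).
for n in range(1, 6):
    print(n, li_coefficient(n))
```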
Proof:
Note that
$$\left| 1 - \frac{1}{\rho} \right| < 1 \;\Leftrightarrow\; |\rho - 1| < |\rho| \;\Leftrightarrow\; \operatorname{Re}(\rho) > 1/2.$$
Then, starting with an entire function $f(s) = \prod_\rho \left(1 - \frac{s}{\rho}\right)$, let
$$\varphi(z) = f\!\left(\frac{1}{1-z}\right).$$
$\varphi$ vanishes when $\frac{1}{1-z} = \rho \Leftrightarrow z = 1 - \frac{1}{\rho}$. Hence, $\varphi'(z)/\varphi(z)$ is holomorphic on the unit disk $|z| < 1$ if and only if
$$\left| 1 - \frac{1}{\rho} \right| \ge 1 \;\Leftrightarrow\; \operatorname{Re}(\rho) \le 1/2.$$
Write the Taylor series $\frac{\varphi'(z)}{\varphi(z)} = \sum_{n=0}^{\infty} c_n z^n$. Since
$$\log \varphi(z) = \sum_\rho \left[ \log\!\left( 1 - \frac{1}{\rho} - z \right) - \log(1-z) \right]$$
up to an additive constant, we have
$$\frac{\varphi'(z)}{\varphi(z)} = \sum_\rho \left[ \frac{1}{1-z} - \frac{1}{1 - \frac{1}{\rho} - z} \right],$$
so that
$$c_n = \sum_\rho \left[ 1 - \left( 1 - \frac{1}{\rho} \right)^{\!-n-1} \right] = \sum_\rho \left[ 1 - \left( 1 - \frac{1}{1-\rho} \right)^{\!n+1} \right].$$
Finally, if each zero ρ comes paired with its complex conjugate ρ̄, then we may combine terms to get
$$c_n = \sum_\rho \left[ 1 - \left| 1 - \frac{1}{1-\rho} \right|^{n+1} \cos\!\big( (n+1)\,\theta_\rho \big) \right], \qquad (1)$$
where $\theta_\rho = \arg\!\left( 1 - \frac{1}{1-\rho} \right)$. The condition $\operatorname{Re}(\rho) \le 1/2$ then becomes equivalent to $\limsup_{n\to\infty} |c_n|^{1/n} \le 1$. The right-hand side of (1) is obviously nonnegative when both $n \ge 0$ and
$$\left| 1 - \frac{1}{1-\rho} \right| \le 1 \;\Leftrightarrow\; \left| 1 - \frac{1}{\rho} \right| \ge 1 \;\Leftrightarrow\; \operatorname{Re}(\rho) \le 1/2.$$
Conversely, ordering the ρ by $\left| 1 - \frac{1}{1-\rho} \right|$, we see that the largest term with $\left| 1 - \frac{1}{1-\rho} \right| > 1$ (equivalently, $\operatorname{Re}(\rho) > 1/2$) dominates the sum as $n \to \infty$, and hence $c_n$ becomes negative sometimes.
A generalization:
Bombieri and Lagarias demonstrate that a similar criterion holds for any collection of complex numbers, and is thus not restricted to the Riemann hypothesis. More precisely, let R = {ρ} be any collection of complex numbers ρ, not containing ρ = 1, which satisfies
$$\sum_\rho \frac{1 + |\operatorname{Re}(\rho)|}{(1 + |\rho|)^2} < \infty.$$
A generalization:
Then one may make several equivalent statements about such a set. One such statement is the following: One has $\operatorname{Re}(\rho) \le 1/2$ for every ρ if and only if
$$\sum_\rho \operatorname{Re}\!\left[ 1 - \left( 1 - \frac{1}{\rho} \right)^{\!-n} \right] \ge 0$$
for all positive integers n. One may make a more interesting statement if the set R obeys a certain functional equation under the replacement s ↦ 1 − s. Namely, if, whenever ρ is in R, both the complex conjugate ρ̄ and 1 − ρ are in R, then Li's criterion can be stated as: One has Re(ρ) = 1/2 for every ρ if and only if
$$\sum_\rho \left[ 1 - \left( 1 - \frac{1}{\rho} \right)^{\!n} \right] \ge 0$$
for all positive integers n. Bombieri and Lagarias also show that Li's criterion follows from Weil's criterion for the Riemann hypothesis.
**CTU2**
CTU2:
CTU2 (formerly known as C16orf84) is a human gene located on chromosome 16. The mRNA encodes the longer isoform. The gene encodes a cytoplasmic protein that plays a probable role in tRNA modification.
**Elementary sentence**
Elementary sentence:
In mathematical logic, an elementary sentence is one that is stated using only finitary first-order logic, without reference to set theory or using any axioms which have consistency strength equal to set theory.
Saying that a sentence is elementary is a weaker condition than saying it is algebraic.
**PrrB/RsmZ RNA family**
PrrB/RsmZ RNA family:
The PrrB/RsmZ RNA family is a group of related non-coding RNA molecules found in bacteria. PrrB RNA is able to phenotypically complement gacS and gacA mutants and is itself regulated by the GacS-GacA two-component signal transduction system. Inactivation of the prrB gene in Pseudomonas fluorescens F113 resulted in a significant reduction of 2,4-diacetylphloroglucinol (Phl) and hydrogen cyanide (HCN) production, while increased metabolite production was observed when prrB was overexpressed. The prrB gene sequence contains a number of imperfect repeats of the consensus sequence 5′-AGGA-3′, and sequence analysis predicted a complex secondary structure featuring multiple putative stem-loops with the consensus sequences predominantly positioned at the single-stranded regions at the ends of the stem-loops. This structure is similar to that of the CsrB and RsmB regulatory RNAs (CsrB/RsmB RNA family), suggesting this RNA also interacts with a CsrA-like protein.
PrrB/RsmZ RNA family:
Studies in Legionella pneumophila have shown that the ncRNAs RsmY and RsmZ, together with the proteins LetA and CsrA, are involved in a regulatory cascade. These ncRNAs also appear to be regulated by the RpoS sigma factor.
**Bombay mix**
Bombay mix:
Bombay mix or Chanachur is an Indian snack mix (namkeen) which consists of a variable mixture of spicy dried ingredients, such as fried lentils, peanuts, chickpeas, chickpea flour ganthiya, corn, vegetable oil, puffed rice, fried onion and curry leaves. This is all flavored with salt and a blend of spices that may include coriander and mustard seeds.
Variations:
Alternative, regional versions include the following: In Malaysia and Singapore, it is known as kacang putih. Members of the local Indian community usually refer to it as "mixture", as is done in southern India. It is available from roadside vendors as well as shops and restaurants. The Singaporean supermarket FairPrice refers to its Bombay mix as murukku, which is an entirely different product.
Variations:
In southern states such as Tamil Nadu and Kerala, as well as in the north of Sri Lanka, it is known as just "mixture", and is available in almost all the sweet shops and bakeries. Usually it consists of fried peanuts, thenkuzhal, kara boondhi, roasted chana dal, karasev, murukku broken into small pieces, pakoda and oma podi.
**TableCurve 3D**
TableCurve 3D:
TableCurve 3D is a linear and non-linear surface-fitting software package for engineers and scientists. It automates the surface-fitting process: in a single processing step it fits and ranks about 36,000 of its more than 450 million built-in, frequently encountered equations, enabling users to find the ideal model for their 3D data.
Once the user has selected the best fit equation, they can output function and test programming codes or generate reports and publication quality graphs.
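The fit-and-rank workflow can be illustrated in miniature. The following Python sketch is an invented illustration, not TableCurve code: the two candidate models and the synthetic data are assumptions for the demo. It fits candidate surface equations z = f(x, y) with SciPy and ranks them by residual sum of squares, which is the basic idea TableCurve applies to its built-in equation library.

```python
# Illustrative sketch of "fit many models, rank by fit" (not TableCurve itself).
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 3D data: z sampled from a paraboloid plus noise (invented for the demo).
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = rng.uniform(-2, 2, 200)
z = 1.5 * x**2 + 0.5 * y**2 + 0.3 + rng.normal(0, 0.05, 200)

# Two hypothetical candidate equations from a model library.
def plane(xy, a, b, c):
    px, py = xy
    return a * px + b * py + c

def paraboloid(xy, a, b, c):
    px, py = xy
    return a * px**2 + b * py**2 + c

results = []
for name, model in [("plane", plane), ("paraboloid", paraboloid)]:
    params, _ = curve_fit(model, (x, y), z)
    rss = np.sum((z - model((x, y), *params))**2)  # residual sum of squares
    results.append((rss, name, params))

# Rank candidates: the best-fitting equation comes first.
for rss, name, params in sorted(results):
    print(f"{name}: RSS={rss:.4f}, params={np.round(params, 3)}")
```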
TableCurve 3D was developed by Ron Brown of AISN Software. TableCurve 3D 1.0 was introduced to the scientific market in September 1993. Version 1.0 was a Windows-based 16-bit product. In February 1995, the 32-bit version 2.0 was released.
It was initially distributed by Jandel Scientific Software, but by January 2004 Systat Software had acquired the exclusive worldwide rights from SPSS, Inc. to distribute SigmaPlot and other Sigma Series products. Systat Software is now based in San Jose, California.
**Standard L-function**
Standard L-function:
In mathematics, the term standard L-function refers to a particular type of automorphic L-function described by Robert P. Langlands.
Here, standard refers to the finite-dimensional representation r being the standard representation of the L-group as a matrix group.
Relations to other L-functions:
Standard L-functions are thought to be the most general type of L-function. Conjecturally, they include all examples of L-functions, and in particular are expected to coincide with the Selberg class. Furthermore, all L-functions over arbitrary number fields are widely thought to be instances of standard L-functions for the general linear group GL(n) over the rational numbers Q. This makes them a useful testing ground for statements about L-functions, since it sometimes affords structure from the theory of automorphic forms.
Analytic properties:
These L-functions were proven to always be entire by Roger Godement and Hervé Jacquet, with the sole exception of the Riemann ζ-function, which arises for n = 1. Another proof was later given by Freydoon Shahidi using the Langlands–Shahidi method. For a broader discussion, see Gelbart & Shahidi (1988).
**Cork grease**
Cork grease:
Cork grease is a lubricant for woodwind and reed instruments such as saxophones, clarinets, bassoons, and oboes. These instruments are designed to be disassembled into parts for easy storage and portability, and the joints between parts feature cork seals. Cork grease is used on these seals to ease and lubricate instrument assembly, avoiding damage to the cork and the instrument's barrel. Cork grease also acts as a preservative, keeping the cork moist and thick, in turn ensuring a good seal between parts of the instrument so that no air may leak through the joints during playing. Cork grease can also help woodwind players adjust their instruments' tuning pieces (e.g. barrels, necks, bocals, staples) with respect to pitch. Cork grease is made with ingredients such as elm extract, organic sunflower oil, coconut oil and hemp seed oil. In the past it was made from animal fat. It is not toxic to humans.
**International Journal of Surgical Pathology**
International Journal of Surgical Pathology:
International Journal of Surgical Pathology is a peer-reviewed academic journal that publishes papers in the field of Pathology. The journal's editor is Cyril Fisher, M.D. It has been in publication since 1993 and is currently published by SAGE Publications.
Scope:
International Journal of Surgical Pathology publishes original research and observations in major organ systems. The journal also contains reviews of new techniques and procedures, discussions of controversies in surgical pathology and case reports. International Journal of Surgical Pathology provides an international forum for the discussion and debate of basic and applied human studies.
Abstracting and indexing:
International Journal of Surgical Pathology is abstracted and indexed in, among other databases, Scopus and the Social Sciences Citation Index. According to the Journal Citation Reports, its 2017 impact factor is 1.188, ranking it 146th out of 200 journals in the category 'Surgery' and 64th out of 79 journals in the category 'Pathology'.
**KDevelop**
KDevelop:
KDevelop is a free and open-source integrated development environment (IDE) for Unix-like computer operating systems and Windows. It provides editing, navigation and debugging features for several programming languages, and integration with build automation and version-control systems, using a plugin-based architecture. KDevelop 5 has parser backends for C, C++, Objective-C, OpenCL and JavaScript/QML, with plugins supporting PHP, Python 3 and Ruby. Basic syntax highlighting and code folding are available for dozens of other source-code and markup formats, but without semantic analysis.
KDevelop:
KDevelop is part of the KDE project, and is based on KDE Frameworks and Qt. The C/C++ backend uses Clang to provide accurate information even for very complex codebases.
History:
KDevelop 0.1 was released in 1998, with 1.0 following in late 1999. 1.x and 2.x were developed over a period of four years from the original codebase.
KDevelop was started by Sandy Meier in 1998; he worked alone on the project for its first eight weeks. Ralf Nolden is also known as an early developer of the project. Since then, the KDevelop IDE has been publicly available under the GPL and supports many programming languages.
Bernd Gehrmann started a complete rewrite and announced KDevelop 3.x in March 2001. Its first release was together with K Desktop Environment 3.2 in February 2004, and development of KDevelop 3.x continued until 2008.
History:
KDevelop 4.x, another complete rewrite with a more object-oriented programming model, was developed from August 2005 and released as KDevelop 4.0.0 in May 2010. The last feature update of this branch was version 4.7.0 in September 2014, with bugfix releases continuing until KDevelop 4.7.4 in December 2016. KDevelop 5 development began in August 2014 as a continuation of the 4.x codebase, ported to Qt 5 and KDE Frameworks 5. The custom C++ parser used in earlier versions, which had poor support for C++11 syntax, was replaced by a new Clang-based backend. The integrated CMakeFile interpreter was also removed in favour of JSON metadata produced by the upstream CMake tool.
History:
Semantic language support was added for QML and JavaScript, using the parser from Qt Creator, alongside a new QMake project-manager backend. The first stable 5.x release was KDevelop 5.0.0 in August 2016. In October 2016, official Microsoft Windows builds were released for the first time.
Features:
KDevelop uses an embedded text editor component through the KParts framework. The default editor is KDE Advanced Text Editor, which can optionally be replaced with a Qt Designer-based editor. This list focuses on the features of KDevelop itself. For features specific to the editor component, see the article on Kate.
Source code editor with syntax highlighting and automatic indentation (Kate).
C/C++ language support using a Clang backend (as of KDevelop 5.0).
Project management for different project types, such as Automake, CMake, qmake for Qt-based projects and Ant for Java-based projects.
Class browser.
GUI designer.
Front-end for the GNU Compiler Collection and the GNU Debugger.
Wizards to generate and update class definitions and application framework.
Automatic code completion (C/C++).
Built-in Doxygen support.
Features:
Revision control (also known as SCM) support. Supported systems include CVS, Subversion, Perforce, ClearCase, Git, Mercurial, and Bazaar. KDevelop 4 has a completely plugin-based architecture: when a developer makes a change, they need only compile the plugin. Several profiles can be kept, each of which determines which plugins are to be loaded. KDevelop does not come with a text editor, but instead uses a plugin for this purpose as well. KDevelop is programming-language independent and build-system independent, supporting KDE, GNOME, and many other technologies such as Qt, GTK+, and wxWidgets.
Features:
KDevelop has supported a variety of programming languages, including C, C++, Python, PHP, Java, Fortran, Ruby, Ada, Pascal, SQL, and Bash scripting. Supported build systems include GNU (automake), CMake, qmake, and make for custom projects (KDevelop does not destroy user Makefiles if they are used), as well as scripting projects which don't need one.
Code completion is available for C and C++. Symbols are kept in a Berkeley DB file for quick lookups without re-parsing. KDevelop also offers a developer framework which helps to write new parsers for other programming languages.
An integrated debugger allows all debugging to be done graphically, with breakpoints and backtraces. Unlike command-line GDB, it even works with dynamically loaded plugins.
Quick Open allows quick navigation between files.
Currently, around 50 to 100 plugins exist for this IDE. Major ones include persistent project-wide code bookmarks, code abbreviations which allow text to be expanded quickly, a source formatter which reformats code to a style guide before saving, regular-expression search, and project-wide search and replace, which helps in refactoring code.
**Estill Voice Training**
Estill Voice Training:
Estill Voice Training (often abbreviated EVT) is a program for developing vocal skills based on analysing the process of vocal production into control of specific structures in the vocal mechanism. By acquiring the ability to consciously move each structure, the potential for controlled change of voice quality is increased. The system was established in 1988 by American singing voice specialist Jo Estill, who had been researching in this field since 1979. Estill's research led to a series of vocal manoeuvres to develop specific control over individual muscle groups within the vocal mechanism. Soto-Morettini quotes Estill as saying the great strength of her method is that it can be used for any style of music, and speech and language therapists describe the exercises as valuable to voice therapy as well as singing, in both professional and non-professional voice use, offering an approach for therapeutic intervention. Estill Voice Training is a trademark of Estill Voice International, LLC.
Operating principles:
Power, Source and Filter: Estill Voice Training partitions the vocal system into the three components power, source and filter, extending the existing source-filter model of speech production. 'Power' is the source of energy producing the sound (typically the respiratory system causing air to be expelled from the lungs). 'Source' is the component that vibrates to create the sound waves (the vocal folds). 'Filter' is the shaping of the sound waves to create the final result (the vocal tract). The focus of Estill Voice Training is on the source and filter components of the vocal system and the interactions between them.
Craft, Artistry and Performance Magic: Estill Voice Training separates the use of voice into the 'craft' of having control over the vocal mechanism, the 'artistry' of expression relative to the material and context, and the 'performance magic' of a speaker or singer connecting with their audience. Estill Voice Training has a focus on the 'craft' aspect and hence has also been known as Estill Voice Craft by some practitioners.
Effort Levels: Estill Voice Training uses the identification and quantification of the level of work or 'effort' required for speaking and singing to help develop kinesthetic feedback. This approach enables a speaker or singer to recognize, locate and control the degree of effort involved in voice production.
Dynamical Systems Theory and Attractor States: The human vocal system is extremely complex, involving interactions between breath flow, moving structures, resonators and so on. Estill Voice Training draws on a branch of applied mathematics known as dynamical systems theory that helps to describe complex systems. One key concept Estill Voice Training takes from dynamical systems theory is the notion that complex systems can have attractor states. Attractor states are states towards which a complex system tends, or is attracted, over time. When applied to the human vocal system, Estill Voice Training proposes there are configurations of the vocal system that are attractor states, which the speaker or singer uses habitually or tends towards. For example, a subject whose attractor state is for their velum (also known as the soft palate) to be in a raised position may find it requires more conscious effort to create a nasal sound than someone else whose attractor state is for their velum to be in the lowered position.
Figures for voice:
In Estill Voice Training there are thirteen vocal exercises or 'Figures for Voice' (named after the 'compulsory figures' that figure skaters use to demonstrate proficiency). Each exercise or 'figure' establishes control over a specific structure of the vocal mechanism, in isolation, by moving the structure through a number of positions. For example, the figure for velum (soft palate) control involves moving the velum through raised, partially lowered and lowered positions. The thirteen Figures for Voice are:
True Vocal Folds: Onset/Offset Control
False Vocal Folds Control
True Vocal Folds: Body-Cover Control
Thyroid Cartilage Control
Cricoid Cartilage Control
Larynx Control
Velum Control
Tongue Control
Aryepiglottic Sphincter Control
Jaw Control
Lips Control
Head and Neck Control
Torso Control
These Figures for Voice exercises have a focus on basic anatomy and vocal physiology, a knowledge of which helps encourage deductions on reducing constriction and healthy voice decisions. Janice Chapman, the operatic singer, voice teacher and researcher, writes "Estill figures lead to a much greater freedom and flexibility in the demanding work of the singer and actor." Figures for Voice are taught on the course 'Level One: Figures for Voice', which typically lasts three days. In addition to the thirteen Figures for Voice, Estill Voice Training also includes the 'Siren' exercise, where a sound is produced across the entire vocal range. Other figures are historically part of the model, including vocal fold mass, which is now part of true vocal fold body-cover control; vocal fold plane, which is now part of true vocal folds body-cover control and exercises for falsetto quality; and pharyngeal width, which is now part of false vocal folds control and head and neck control.
Figures for voice:
True Vocal Folds: Onset/Offset Control: In this figure there are three options for coordinating expiration and vocal fold closure: glottal, where the vocal folds are closed before expiration; smooth, where vocal fold closure is synchronised with expiration; and aspirate, where expiration precedes vocal fold closure. Learning to produce and apply different onsets marks the beginning of control over the vocal mechanism.
False Vocal Folds Control: Estill Voice Training identifies three possible positions of the false vocal folds: constricted, mid and retracted. This figure is helpful in the identification of glottal and ventricular constriction. Its concepts and options are valuable to voice therapy as well as singing. The silent laugh technique, developed into an exercise by Jo Estill, is widely cited as reducing false vocal fold constriction.
True Vocal Folds: Body-Cover Control: The 'body-cover theory' of vocal fold structure was introduced by Hirano in 1977. This figure demonstrates the controlled use of the vocal folds in four body-cover configurations: on the thick edge, on the thin edge, in a stiff mode, or in a slack mode. These body-cover configurations change or modify the vibratory modes of the true vocal folds and, within the dynamical system of the human voice, affect the intensity of the sound produced and contribute to what are commonly labeled as the different human vocal registers. This figure was formerly known as vocal fold mass.
Thyroid Cartilage Control: This figure demonstrates control of the position or tilt of the thyroid cartilage through engagement or disengagement of the cricothyroid muscle. The speaker or singer can tilt the thyroid cartilage by adopting the posture of crying or sobbing, or making a soft whimpering noise, like a small dog whining. In Estill Voice Training, it is proposed that the position of the thyroid cartilage influences not only pitch but also the quality and intensity of the sound produced.
Cricoid Cartilage Control: This figure demonstrates control of the position of the cricoid cartilage. In Estill Voice Training it is proposed that specific positioning of the cricoid cartilage is a typical part of the vocal set-up for shouting and other high-intensity voice productions employing higher subglottic pressure.
Larynx Control: This figure trains raising and lowering of the larynx, influencing resonance. This figure was formerly known as the larynx height figure.
Figures for voice:
Velum Control: This figure trains the velum (also known as the soft palate) and consists of exercises opening, partially closing and completely closing the velopharyngeal port to control the degree of nasality in the voice. Dinah Harris writes, "Estill has excellent exercises for learning palatal control."
Tongue Control: This figure demonstrates the influences of different tongue postures, such as compressed. As a practical example, Diane Sheets (Estill Voice Training Certified Course Instructor) worked on the interaction of tongue and larynx when dealing with the vocal problems of Marty Roe, lead vocalist of Diamond Rio. Control of the tongue can produce subtle resonance changes and give greater flexibility to the range.
Aryepiglottic Sphincter Control: This figure demonstrates the ability to control twang in the voice through conscious anteroposterior narrowing of the aryepiglottic sphincter in the upper epilarynx while avoiding constriction of the false vocal folds. Estill suggests that this laryngeal tube creates a separate resonator that is responsible for the extra brightness in phonation.
Jaw Control: The jaw figure demonstrates the subtle resonance changes in voice production that are associated with different positions or postures of the jaw.
Lips Control: This figure demonstrates various lip postures employed by speakers and singers and their subtle impact on vocal resonance through changing the length of the vocal tract.
Head and Neck Control: Head and neck anchoring involves bracing the skeletal structures of the head and neck, giving a stable external framework for the smaller muscles that control the vocal tract.
Torso Control: Torso anchoring stabilises the body and breath. Gillyanne Kayes writes, 'Techniques for anchoring the tone have been described over the centuries by singers and teachers under a variety of names: support, singing from the back, singing from the neck, appoggiare, rooting, grounding and connecting the voice. In the Estill training model, I believe these techniques have been correctly identified as postural anchoring.'
Voice qualities:
Estill Voice Training incorporates six 'voice qualities' as mechanisms for demonstrating control of voice production. The increased control developed through proficiency in the different Figures for Voice allows the singer or speaker to manipulate the vocal mechanism specifically to produce these arbitrary voice qualities, and variations on them. Essentially these voice qualities, such as 'Sob Quality' and 'Belt Quality', are constructed by moving the structures of the vocal mechanism into specific positions or combinations. For example, Sob Quality includes a low larynx position (the larynx figure) and thin vocal folds (the true vocal fold body-cover figure). The six voice qualities are:
Speech
Falsetto
Sob
Twang (Oral and Nasal variations)
Belting
Opera
Voice qualities are taught on the course 'Level Two: Figure Combinations for Six Voice Qualities', which typically lasts two days.
Voice qualities:
Speech: Speech quality is often termed modal speech by voice scientists or chest voice by singers. Speech quality includes thick vocal folds and a neutral larynx position.
Falsetto: In Estill Voice Training terminology, the term falsetto has a meaning distinct from falsetto as a male vocal register in Western classical terminology.
Sob: Sob quality is a soft and dark sound, associated with the sobbing cry of an adult who mourns. Sob quality is produced on a lowered larynx and thinned vocal folds. Sob quality releases glottal hyperadduction and medial compression, lowers the larynx and releases pharyngeal constriction. Mary Hammond says that young performers find low larynx and sob quality less familiar. Cry quality is a permutation of sob quality adopting a higher laryngeal position.
Twang: The key to twang quality is a narrowing of the epilarynx via a narrowing or constriction of the aryepiglottic sphincter. Twang quality has been used by speakers and singers to boost vocal resonance or 'squillo' and is referred to as the speaker's ring or singer's formant. The quality is excellent for teaching safe shouting and for cutting through background noise, increasing the clarity of the voice, and is taught to both singers and actors to enable them to be heard clearly in large auditoria without vocal strain. Twang quality may be nasalized or oral, as differentiated by an open or closed velopharyngeal port. Estill suggests setting the vocal tract initially by imitating a cat yowling, ducks quacking, and other exercises.
Opera: Opera quality is a complex set-up including a mix of speech quality and twang quality with a tilted thyroid cartilage and lowered larynx.
Belting: Belting or belt quality is a complex set-up combining speech quality, twang quality, a tilted cricoid cartilage and a raised larynx. Twang is an important component in belt quality. Gillyanne Kayes writes, 'Belting is not harmful if you are doing it right. Jo Estill has described it as "happy yelling".' Belt quality also uses clavicular breathing and has the longest closed phase, with the highest subglottic pressure and the greatest glottic resistance.
Certification:
Estill Voice International governs the Estill Voice Training Certification Programme. There are three forms of Estill Voice Training certification available to individuals: Estill Figure Proficiency (EFP) is awarded to individuals who can demonstrate the basic options for voice control taught in the Estill Voice Training™ Level One (Figures for Voice Control) and Level Two (Figure Combinations for Six Voice Qualities) courses with the appropriate Hand Signals.
Certification:
Estill Master Trainer (EMT) qualifies an individual to teach Estill Voice Training within their private studio, course practice sessions or classroom setting. The certification is a two-stage examination including written and voice control components, and observed teaching.
Estill Mentors and Course Instructors (EMCI) follows Estill Master Trainer, qualifying an individual to teach Estill Voice Training in public courses, seminars and conferences. The certification is a two-stage examination including written and oral components and observed presentations.
Influence, adoption and application:
Estill Voice Training has been adopted by voice professionals worldwide and a list of certified instructors is published by Estill Voice International. Joan Melton describes the Estill Voice Training terminology as a part of the language of singing teachers in Australia, with terms such as twang and anchoring in common use, although "the Estill language is heard somewhat less frequently in the UK and only occasionally in the United States." Freelance voice teacher and speech and language therapist Christina Shewell writes, "Estill Voice Training clarifies many of the complex vocal tract options that shape the style of a singer's voice, explaining and demonstrating different combinations of structural conditions, and many singing teachers use the system as part of their teaching." The following list gives some examples of the application of Estill Voice Training in a range of disciplines: Pop Singing: Maureen Scott is a Certified Master Teacher whose clients include Mika and The Enemy.
Influence, adoption and application:
Country Singing: Diane Sheets is a Certified Course Instructor whose clients have included Marty Roe of Nashville Country Band Diamond Rio.
Acting: Estill Voice Training has been integrated into the training of actors at Mountview Academy of Theatre Arts in London.
Influence, adoption and application:
Musical Theatre: Faculty teaching on Musical Theatre training courses reference their Estill Voice Training certification. Examples include Steven Chicurel, Certified Course Instructor with testing privileges and service distinction, who is an associate professor of theatre at the University of Central Florida, and Anne-Marie Speed, Certified Course Instructor with testing privileges and service distinction, who teaches spoken voice on the Musical Theatre course at the Royal Academy of Music in London.
Influence, adoption and application:
Educational Curriculum: Educational institutions have adopted Estill Voice Training terminology and exercises into their curricula. Examples include the Drama Centre at Flinders University in Adelaide, South Australia, where Estill-based vocal technique is taught; London College of Music, whose guidelines on the suggested development of vocal technique, as part of the music theatre syllabus, use Estill Voice Training terminology; Saint Mary's College of California, which incorporates Estill and EFP preparation as part of its undergraduate Music major; Motherwell College, Scotland, which includes Estill Voice Training in its BA Honours Musical Theatre and BA Honours Acting programmes; the prestigious Bird College in London; and the Voice Performance and Musical Theatre programmes at Mars Hill College, North Carolina, which include Estill Voice Training in their curriculum.
Influence, adoption and application:
Clinical Voice Therapy: Dinah Harris, contributor to The Voice Clinic Handbook, recommends learning Estill Voice Training as it provides many useful tools for those working in a voice clinic. Rattenbury, Carding and Finn present a study that used a range of Figures for Voice exercises as prognostic indicators and voice therapy treatment techniques.
Influence, adoption and application:
Community Choirs: Thomas Lloyd, Artistic Director of the Bucks County Choral Society, writes that he has "seen and heard results related to sound, dynamic range, consistency of support, and vocal color with [his] choirs, especially with [his] untrained singers." Soto-Morettini writes that, 'although the Estill method can be very complex, there are a number of simple things that students can learn quickly — and that these simple things can go a long way towards clearing up the confusion that attends some vocal training.'
Criticism:
Estill Voice Training has been criticised for not including 'breathing' and the related abdominal support within the system, and for some of the uses of anchoring for classical singing, although Shewell cites Jo Estill as suggesting that breath work is unnecessary if the Figures for Voice are well practised.
**Western Digital My Book**
Western Digital My Book:
My Book is a series of external hard drives produced by Western Digital. There are currently ten series of My Book drives: Essential Edition, Home Edition, Office Edition, Mirror Edition, Studio Edition, Premium Edition, Elite Edition, Pro Edition, AV DVR "Live Edition", and the World Edition.
Western Digital My Book:
My Book drives are designed to look like a standard black hardback book, with the exception of the Pro/Studio series, which are silver, and the World series, which are white. In addition to the book-like appearance of the case, My Book drives originally featured vent holes on the top which spelled out a message in Morse code.
Models:
Essential Edition: In addition to the book-like design, the My Book Essential Edition drives have an Intelligent Power Management feature that stops the drive platters after ten minutes of inactivity, rather than the usual expedient of slowing them down. The unit also turns on and off with the computer it is attached to.
Essential Edition My Book drives are almost entirely black, with the exception of a single blue light, used to indicate power and activity, or a circular green light that is located on the front of the drive. The older model has a white light.
"Essential Edition" drives used USB 2.0 for connectivity are available in capacities of 80 GB, 160 GB, 250 GB, 320 GB, 400 GB, 500 GB and 750 GB.
Premium Edition: Premium Edition drives are similar to the Essential Edition model but also include FireWire 400 ports, an integrated visual capacity gauge and Western Digital backup software.
Models:
Premium Edition My Book drives have the same black case as Essential Edition drives; however, the light surrounding the power button is blue. Also, inside the standard blue light is another blue ring light that contains eight individual segments which indicate the remaining space on the drive. This edition is available with storage capacities of 160 GB, 250 GB, 320 GB, 400 GB, and 750 GB.
Models:
Pro Edition: The Pro Edition My Books contain all of the features of the Premium Edition ones, but with added FireWire 800 connectivity for fast data transfer. In addition, the Pro Edition My Books replace the Western Digital backup software found on the Premium Editions with EMC Retrospect Express backup and recovery software.
Pro Edition My Book drives have the same basic case design as Premium Edition drives; however, the case is silver rather than black. In addition, it includes a circular blue capacity gauge LED divided into six segments (representing 17% of usage per segment) and an outer ring that represents drive activity.
The Pro Edition My Book is marketed as a RAID solution that can be used as a backup device.
Pro Edition II: This edition was available with storage capacities of 500 GB, 640 GB, 1 TB, 1.5 TB, and 2 TB.
Premium Edition II: This edition is black and was available with storage capacities of 500 GB, 640 GB, 1 TB, 1.5 TB, and 2 TB.
Models:
World Edition: The World Edition My Books function as network-attached storage (NAS), by way of an Ethernet interface. They also feature an extra USB host port to allow an additional USB drive to be daisy-chained. Data on first-generation ("Blue Rings") My Book World drives is accessed as CIFS/SMB shared folders. The second generation ("White Lights") expands the access choices to include NFS, FTP, an iTunes server, and a Twonky media server.
Models:
In addition, the World Edition uses WD Anywhere Access to gain remote access to the drive via the Internet.
It has the same basic case design as the Premium Edition drives, including the capacity gauge, except the color of the World Edition is white. It has the same Morse code ventilation as the other editions.
Models:
Network speed: Although Ethernet-capable My Book disks come with a Gigabit Ethernet interface, the actual network speed is significantly slower, especially for older "blue rings" models (200 MHz ARM CPU and 32 MiB RAM), where it varies between 3–6 MB/s, with an average of 4.5 MB/s. The newer "white lights" My Book World Edition 1 TB and 2 TB models, WDH1NC and WDH2NC (oxnas810, 380 MHz ARM CPU and 128 MiB RAM), have drive speeds comparable to USB, at about 10 MB/s write and 25 MB/s read. The "white lights" WDH1NC is jumbo-frames capable and can achieve ~36 MB/s read and ~18 MB/s write speeds over Gigabit Ethernet.
Models:
Internals: This drive runs BusyBox on Linux on an Oxford Semiconductor OXE800 ARM chip, which has the ARM926EJ-S core. In addition it uses a VIA Cicada Simpliphy VT6122 Gigabit Ethernet chipset and a Hynix 32 MiB DDR synchronous DRAM chip. The web server is mini_httpd, although older "blue rings" models use lighttpd. The drives of the World Edition are XFS- or ext3-formatted, which means that a drive can be mounted as a standard drive from within Linux if removed from the casing and installed in a normal PC.
Models:
With both sets of commands a utility such as Gparted can be used to determine which paths are relevant for a given setup.
Models:
Extending capabilities: The device can be 'unlocked' and accessed via an SSH terminal (newer versions of the WDH1NC10000 do not need to be "unlocked": MBWE SSH Access), meaning that the WD MioNet Java-based software can be disabled so the device can be run with an unrestricted Linux OS, at the cost of voiding the warranty. The unlocking makes it possible to install other software on the My Book. For example, the user can run a different web server or an FTP server (such as vsftpd) on it, use NFS for mounting shared directories natively from Unix, or install a BitTorrent client such as rTorrent.
Models:
Premium ES Edition: My Book Premium ES Edition drives are nearly identical to their Premium Edition counterparts, the only difference being that the ES line features a single eSATA connection instead of the dual FireWire 400 ports present on the Premium Edition, allowing computers with available eSATA ports to transfer data at speeds of up to 3 Gbit/s. This edition is available in 320 GB and 500 GB capacities.
Models:
Mirror Edition: This edition was available with storage capacities of 1 TB and 2 TB.
My Book for Mac: This edition was available with storage capacities of 2 TB, 3 TB, 4 TB, 6 TB and 8 TB.
Essential: This edition was available with storage capacities of 500 GB, 640 GB, 1 TB, 1.5 TB, and 2 TB.
Models:
Studio: The My Book Studio Edition comes with a quad interface: USB 2.0, FireWire 400, FireWire 800 and eSATA. It is marketed for use with Mac OS X. This edition is available with storage capacities of 1 TB, 1.5 TB and 2 TB. The current edition (as of November 2010) has two FireWire 800 ports and one USB 2.0 mini port. It comes pre-formatted as Mac OS X HFS+.
Models:
The My Book Studio Edition II contains two drives and is designed to be used as a RAID system for increased performance. This edition is available with storage capacities of 1 TB, 2 TB, 4 TB and 6 TB. The two drives can be replaced by the user.
Models:
Live: In 2011, Western Digital released the My Book Live Edition NAS. These drives range in storage capacity from 1 to 3 TB. My Book Live uses an Applied Micro APM82181 processor working at 800 MHz and has 256 MiB of RAM. A Broadcom BCM54610 Ethernet interface supports 10/100/1000 Mbit/s connectivity. Contrary to previous versions, the Live has no USB ports. Instead of the Linux kernel and BusyBox combination found in previous versions, the Live runs a full-featured Debian GNU/Linux.
Models:
Live Duo: My Book Live Duo was released in January 2012. It features two drives (totaling 4 or 6 TB, depending on the product version) that can be configured in a RAID array; in that case, all data is automatically mirrored and can be recovered if one of the drives fails (but effective drive space is halved). It sports a similar design to the previous My Book Live, but unlike that model this product has a top cover that allows for easy servicing and replacement of the drives. It also has one Gigabit Ethernet and one USB connection.
Models:
AV DVR Expander: The My Book AV DVR Expander is intended to increase the disk capacity of consumer DVRs or compatible camcorders. It can also be used connected to a computer, if necessary. The Expander is available with a storage capacity of 1 TB.
The DVR Expander was originally designed specifically for the TiVo Series 3 and onwards, and at that time the only connectivity was an eSATA port. Recent versions come with a USB 2.0 connection as well, and are compatible with DirecTV, Dish TV, and the Pace, Time Warner and Scientific Atlanta brands of DVR.
Models:
My Cloud: In 2013, the My Cloud NAS was released by Western Digital. My Cloud uses a Mindspeed Comcerto 2000 (M86261G-12) dual-core ARM Cortex-A9 communication processor running at 650 MHz. The Gigabit Ethernet port uses a Broadcom BCM54612E Gigabit Ethernet transceiver. Other components include 256 MiB of Samsung K4B2G1646E DDR3 RAM and 512 KB of Winbond 25X40CL flash. The drive is a WD Red 2 TB (WD20EFRX).
Models:
My Cloud relies on air convection for cooling, not a fan.
2021 data deletion:
On 24 June 2021, users reported that their My Book drives had apparently been wiped the day before in a factory reset, perhaps via malware.
Morse code:
The Morse code message written into the drive case is made up of a selection of the words "personal", "reliable", "innovative", "simple", and "design".
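For the curious, the vent-hole message is easy to reproduce programmatically. The short Python sketch below is an illustration, not Western Digital tooling; it simply encodes the five words listed above using the standard International Morse Code alphabet.

```python
# Illustrative sketch: encode the My Book vent-hole words in International Morse Code.
MORSE = {
    'a': '.-',   'b': '-...', 'c': '-.-.', 'd': '-..',  'e': '.',
    'f': '..-.', 'g': '--.',  'h': '....', 'i': '..',   'j': '.---',
    'k': '-.-',  'l': '.-..', 'm': '--',   'n': '-.',   'o': '---',
    'p': '.--.', 'q': '--.-', 'r': '.-.',  's': '...',  't': '-',
    'u': '..-',  'v': '...-', 'w': '.--',  'x': '-..-', 'y': '-.--',
    'z': '--..',
}

def to_morse(word):
    """Encode a word letter by letter, separating letters with spaces."""
    return ' '.join(MORSE[ch] for ch in word.lower())

for word in ["personal", "reliable", "innovative", "simple", "design"]:
    print(f"{word:>10}: {to_morse(word)}")
```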
**Atwater system**
Atwater system:
The Atwater system, named after Wilbur Olin Atwater, or derivatives of this system are used for the calculation of the available energy of foods. The system was developed largely from the experimental studies of Atwater and his colleagues in the later part of the 19th century and the early years of the 20th at Wesleyan University in Middletown, Connecticut. Its use has frequently been the cause of dispute, but few alternatives have been proposed. As with the calculation of protein from total nitrogen, the Atwater system is a convention and its limitations can be seen in its derivation.
Derivation:
Available energy (as used by Atwater) is equivalent to the modern usage of the term metabolisable energy (ME).
In most studies on humans, losses in secretions and gases are ignored. The gross energy (GE) of a food, as measured by bomb calorimetry, is equal to the sum of the heats of combustion of its components – protein (GEp), fat (GEf) and carbohydrate (GEcho) (by difference) – in the proximate system:
GE = GEp + GEf + GEcho.
Atwater considered the energy value of feces in the same way.
By measuring coefficients of availability, or in modern terminology apparent digestibility, Atwater derived a system for calculating faecal energy losses, giving the digestible energy as
Dp·GEp + Df·GEf + Dcho·GEcho,
where Dp, Df, and Dcho are respectively the digestibility coefficients of protein, fat and carbohydrate, calculated as (intake − faecal loss) / intake for the constituent in question.
Derivation:
Urinary losses were calculated from the energy-to-nitrogen ratio in urine. Experimentally this was 7.9 kcal/g (33 kJ/g) of urinary nitrogen, and thus his equation for metabolisable energy became
ME = Dp·GEp + Df·GEf + Dcho·GEcho − 7.9 Nu,
where Nu is the urinary nitrogen in grams.
Gross energy values: Atwater collected values from the literature and also measured the heat of combustion of proteins, fats and carbohydrates. These vary slightly depending on sources, and Atwater derived weighted values for the gross heat of combustion of the protein, fat and carbohydrate in the typical mixed diet of his time. It has been argued that these weighted values are invalid for individual foods and for diets whose composition in terms of foodstuffs is different from those eaten in the US in the early 20th century.
Derivation:
Apparent digestibility coefficients: Atwater measured a large number of digestibility coefficients for simple mixtures, and in substitution experiments derived values for individual foods. These he combined in a weighted fashion to derive values for mixed diets. When these were tested experimentally with mixed diets they did not give a good prediction, and Atwater adjusted the coefficients for mixed diets.
Urinary correction: The energy-to-nitrogen ratio in urine shows considerable variation, and the energy-to-organic-matter ratio is less variable, but the energy-to-nitrogen value provided Atwater with a workable approach, although this has caused some confusion and only applies for subjects in nitrogen balance.
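To make the derivation concrete, here is a minimal Python sketch of the resulting equation. All nutrient figures in the example are hypothetical; the function simply implements ME = Dp·GEp + Df·GEf + Dcho·GEcho − 7.9·Nu as derived above.

```python
# Minimal sketch of Atwater's metabolisable-energy equation.
# All numeric inputs below are hypothetical example values, not Atwater's data.

def metabolisable_energy(ge, digestibility, urinary_nitrogen_g):
    """ME = digestible energy summed over components minus urinary losses.

    ge                 -- gross energy per component, kcal (from bomb calorimetry)
    digestibility      -- apparent digestibility coefficient per component (0..1)
    urinary_nitrogen_g -- urinary nitrogen in grams (7.9 kcal lost per gram)
    """
    digestible = sum(ge[c] * digestibility[c] for c in ge)
    return digestible - 7.9 * urinary_nitrogen_g

ge = {"protein": 120.0, "fat": 180.0, "carbohydrate": 400.0}  # kcal, example
d  = {"protein": 0.92,  "fat": 0.95,  "carbohydrate": 0.97}   # example coefficients
print(metabolisable_energy(ge, d, urinary_nitrogen_g=2.0))    # result in kcal
```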
Modified system:
Based on the work of Atwater, it became common practice to calculate the energy content of foods using 4 kcal/g for carbohydrates and proteins and 9 kcal/g for lipids. The system was later improved by Annabel Merrill and Bernice Watt of the USDA, who derived a system whereby specific calorie conversion factors for different foods were proposed. This takes cognizance of the fact that, first, the gross energy values of the protein, fats and carbohydrates from different food sources are different, and second, that the apparent digestibility of the components of different foods is different.
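As a concrete illustration of the general factors, the short Python example below (with invented serving amounts) applies the 4/9/4 kcal/g convention.

```python
# The general Atwater factors: 4 kcal/g protein, 9 kcal/g fat, 4 kcal/g carbohydrate.
ATWATER_FACTORS = {"protein": 4, "fat": 9, "carbohydrate": 4}  # kcal per gram

def label_energy(grams):
    """Energy content as on a nutrition label, using the general factors."""
    return sum(ATWATER_FACTORS[c] * g for c, g in grams.items())

# Hypothetical food: 10 g protein, 5 g fat, 30 g carbohydrate per serving.
print(label_energy({"protein": 10, "fat": 5, "carbohydrate": 30}))  # 205 kcal
```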
Modified system:
This system relies on having measured heats of combustion of a wide range of isolated proteins, fats and carbohydrates. It also depends on data from digestibility studies, where individual foods have been substituted for basal diets in order to measure the apparent digestibility coefficients for those foods. This approach is based on the assumption that there are no interactions between foods in a mixture in the intestine, and from a practical viewpoint such studies with humans are difficult to control with the required accuracy.
Assumptions based on the use of carbohydrates by difference and the effects of dietary fibre:
The carbohydrate by difference approach presents several problems. First, it does not distinguish between sugars, starch and the unavailable carbohydrates (roughage, or "dietary fibre").
This affects, first, the gross energy assigned to carbohydrate – sucrose has a heat of combustion of 3.95 kcal/g (16.53 kJ/g) and starch 4.15 kcal/g (17.36 kJ/g).
Second, it does not provide for the fact that sugars and starch are virtually completely digested and absorbed, and thus provide metabolisable energy equivalent to their heat of combustion.
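Since "carbohydrate by difference" recurs throughout this discussion, a one-function sketch of the proximate-system convention may help (the parameter names are assumptions for illustration):

```python
def carbohydrate_by_difference(water_g, protein_g, fat_g, ash_g):
    """Carbohydrate per 100 g of food, estimated 'by difference' in the
    proximate system: everything not measured as water, protein, fat or
    ash, which therefore also lumps in any dietary fibre."""
    return 100.0 - (water_g + protein_g + fat_g + ash_g)

print(carbohydrate_by_difference(water_g=12.0, protein_g=10.0,
                                 fat_g=2.0, ash_g=1.5))  # 74.5
```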
Assumptions based on the use of carbohydrates by difference and the effects of dietary fibre:
The unavailable carbohydrates (dietary fibre) are degraded to a variable extent in the large bowel. The products of this microbial digestion are fatty acids, CO2 (carbon dioxide), methane and hydrogen. The fatty acids (acetate, butyrate and propionate) are absorbed in the large intestine and provide some metabolisable energy. The extent of degradation depends on the source of the dietary fibre (its composition and state of division), and the individual consuming the dietary fibre. There is insufficient data to give firm guidance on the energy available from this source.
Assumptions based on the use of carbohydrates by difference and the effects of dietary fibre:
Finally, dietary fibre affects faecal losses of nitrogen and fat. Whether the increased fat loss is due to an effect on small intestinal absorption is not clear. The increased faecal nitrogen losses on high-fibre diets are probably due to an increased bacterial nitrogen content of the faeces. Both these effects, however, lead to reductions in apparent digestibility, and the Atwater system therefore warrants small adjustments to the energy conversion factors for such diets.
Theoretical and practical considerations relating to the calculation of energy values:
Variations in heats of combustion of food constituents. Proteins: The experimental evidence for the magnitude of this variation is very limited, but as the heats of combustion of the individual amino acids are different, it is reasonable to expect variations between different proteins. An observed range from 5.48 kcal/g for conglutin (from blue lupin) to 5.92 kcal/g for hordein (from barley) has been reported, which compares with Atwater's range of 5.27 for gelatin to 5.95 for wheat gluten. It is difficult to calculate expected values for a protein from amino acid data, as some of the heats of combustion are not known accurately. Preliminary calculations on cow's milk suggest a value of around 5.5 kcal/g (23.0 kJ/g).
Theoretical and practical considerations relating to the calculation of energy values:
Fats: Analogously, the experimental evidence is limited, but since fatty acids differ in their heats of combustion, one should expect fats to vary in their heats of combustion as well. These differences are, however, relatively small – for example, breast milk fat has a calculated heat of combustion of 9.37 kcal/g (39.2 kJ/g) compared with that of cow milk fat of 9.19 kcal/g (38.5 kJ/g).
Theoretical and practical considerations relating to the calculation of energy values:
Carbohydrates: Monosaccharides have heats of combustion of around 3.75 kcal/g (15.7 kJ/g), disaccharides 3.95 kcal/g (16.5 kJ/g) and polysaccharides 4.15 to 4.20 kcal/g (17.4 to 17.6 kJ/g). The heat of hydrolysis is very small, and these values are essentially equivalent when calculated on a monosaccharide basis. Thus 100 g of sucrose gives on hydrolysis 105.6 g of monosaccharide, and 100 g of starch gives on hydrolysis 110 g of glucose.
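The mass gain on hydrolysis comes from the water that is incorporated; a quick check with standard molar masses (assumed here; differing rounding conventions account for the small deviations from the quoted figures) runs as follows:

```latex
% Sucrose (342.30 g/mol) + H2O -> glucose + fructose (2 x 180.16 g/mol)
\[
100\,\mathrm{g} \times \frac{2 \times 180.16}{342.30} \approx 105.3\,\mathrm{g\ of\ monosaccharide}
\]
% Starch: each anhydroglucose unit (162.14 g/mol) + H2O -> glucose (180.16 g/mol)
\[
100\,\mathrm{g} \times \frac{180.16}{162.14} \approx 111\,\mathrm{g\ of\ glucose}
\]
```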
Theoretical and practical considerations relating to the calculation of energy values:
Apparent digestibility coefficients: The human digestive tract is very efficient, and the faecal excretion of nitrogenous material and fats is a small proportion (usually less than 10%) of intake. Atwater recognised that faecal excretion was a complex mixture of unabsorbed intestinal secretions, bacterial material and metabolites, sloughed mucosal cells, mucus and, only to a small extent, unabsorbed dietary components. This might be one reason why he chose to use availability rather than digestibility. His view was that these faecal constituents were truly unavailable and that his apparent disregard of the nature of faecal excretion was justifiable in a practical context.
Theoretical and practical considerations relating to the calculation of energy values:
The ratio (intake − faecal excretion)/intake will, wherever faecal excretion is small, approximate to unity, and thus these coefficients have a low variance and the appearance of constants. This is spurious, since faecal excretion is variable even on a constant diet, and there is no evidence to suggest that faecal excretion is in fact related to intake in the way implied by these coefficients.
Theoretical and practical considerations relating to the calculation of energy values:
Practical considerations in calculations of energy value of foods and diets: The calculation of energy values must be regarded as an alternative to direct measurement, and is therefore likely to be associated with some inaccuracy when compared with direct assessment. These inaccuracies arise for a number of reasons.
Variations in food composition: Foods are biological mixtures and as such show considerable variation in composition, particularly in respect of water and fat content. This means that compositional values quoted for representative samples of foods in food composition tables do not necessarily apply to individual samples of those foods. In studies where great accuracy is required, samples of the food consumed must be analysed.
Theoretical and practical considerations relating to the calculation of energy values:
Measurements of food intake: In estimating energy intakes, measurements of food intake are made, and these are known to be subject to considerable uncertainty. Even in studies under very close supervision the errors in weighing individual food items are rarely less than ±5%. A certain degree of pragmatism must therefore be used when assessing procedures for calculating energy intakes, and many authors impute greater accuracy to quoted calculated energy intakes than is justifiable.
Theoretical and practical considerations relating to the calculation of energy values:
Individual variation: Variations between individuals are seen in all human studies, and these variations are not allowed for in most calculations. The theoretical and physiological objections to the assumptions inherent in the Atwater system are likely to result in errors much smaller than these practical matters. Conversion factors were derived from experimental studies with young infants, but these produced values for metabolisable energy intake that were not significantly different from those obtained by direct application of the modified Atwater factors.
**Time Protocol**
Time Protocol:
The Time Protocol is a network protocol in the Internet Protocol Suite defined in 1983 in RFC 868 by Jon Postel and K. Harrenstein. Its purpose is to provide a site-independent, machine-readable date and time.
Time Protocol:
The Time Protocol may be implemented over the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). A host connects to a server that supports the Time Protocol on port 37. The server then sends the time as a 32-bit unsigned integer in binary format and in network byte order, representing the number of seconds since 00:00 (midnight) on 1 January 1900 GMT, and closes the connection. Operation over UDP requires the client to send a datagram (its contents are ignored) to the server port, which replies with a single datagram carrying the timestamp, as there is no connection setup in UDP.
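A minimal TCP client sketch (in Python; the host name is a placeholder, since few public servers still run this service) illustrates the wire format:

```python
import socket
import struct
from datetime import datetime, timezone

# Seconds between the Time Protocol epoch (1900-01-01) and the Unix
# epoch (1970-01-01); note the 32-bit counter rolls over in 2036.
EPOCH_OFFSET = 2208988800

def query_time(host, port=37):
    """Query a Time Protocol (RFC 868) server over TCP; return UTC time."""
    with socket.create_connection((host, port), timeout=5) as sock:
        data = sock.recv(4)  # the server sends exactly 4 bytes, then closes
    (since_1900,) = struct.unpack("!I", data)  # big-endian unsigned 32-bit
    return datetime.fromtimestamp(since_1900 - EPOCH_OFFSET, timezone.utc)

print(query_time("time.example.org"))  # placeholder host
```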
Time Protocol:
The fixed 32-bit data format means that the timestamp rolls over approximately every 136 years, with the first such occurrence on 7 February 2036. Programs that use the Time Protocol must be carefully designed to use context-dependent information to distinguish these dates from those in 1900.
Many Unix-like operating systems used the Time Protocol to monitor or synchronize their clocks using the rdate utility, but this function was superseded by the Network Time Protocol (NTP) and the corresponding ntpdate utility. NTP is more sophisticated in various ways, among them that its resolution is finer than one second.
Inetd implementation:
On most UNIX-like operating systems a Time Protocol server is built into the inetd (or xinetd) daemon. The service is usually not enabled by default. It may be enabled by adding the following lines to the file /etc/inetd.conf and reloading the configuration.
time    stream  tcp     nowait  root    internal
time    dgram   udp     wait    root    internal
**Antiseptic**
Antiseptic:
An antiseptic (Greek: ἀντί, romanized: anti, lit. 'against' and σηπτικός, sēptikos, 'putrefactive') is an antimicrobial substance or compound that is applied to living tissue to reduce the possibility of sepsis, infection or putrefaction. Antiseptics are generally distinguished from antibiotics by the latter's ability to safely destroy bacteria within the body, and from disinfectants, which destroy microorganisms found on non-living objects. Antibacterials include antiseptics that have the proven ability to act against bacteria. Microbicides which destroy virus particles are called viricides or antivirals. Antifungals, also known as antimycotics, are pharmaceutical fungicides used to treat and prevent mycosis (fungal infection).
Surgery:
The widespread introduction of antiseptic surgical methods was initiated by the publishing of the paper Antiseptic Principle of the Practice of Surgery in 1867 by Joseph Lister, which was inspired by Louis Pasteur's germ theory of putrefaction. In this paper, Lister advocated the use of carbolic acid (phenol) as a method of ensuring that any germs present were killed. Some of this work was anticipated by the ancient Greek physicians Galen (c. 130–200) and Hippocrates (c. 400 BC), as well as by Sumerian clay tablets dating from 2150 BC that advocate the use of similar techniques.
Surgery:
It was also anticipated by Florence Nightingale, who contributed substantially to the report of the Royal Commission on the Health of the Army (1856–1857), based on her earlier work, and by Ignaz Semmelweis, who published his work The Cause, Concept and Prophylaxis of Childbed Fever in 1861, summarizing experiments and observations since 1847. The medieval surgeons Hugh of Lucca, Theoderic of Cervia, and his pupil Henri de Mondeville were opponents of Galen's opinion that pus was important to healing, which had led ancient and medieval surgeons to let pus remain in wounds. They advocated draining the wound and cleaning its edges with wine, dressing the wound after suturing if necessary, and leaving the dressing on for ten days, soaking it in warm wine all the while, before changing it. Their theories were bitterly opposed by the Galenist Guy de Chauliac and others trained in the classical tradition.
Surgery:
Other forerunners include Oliver Wendell Holmes Sr., who published The Contagiousness of Puerperal Fever in 1843.
Some common antiseptics:
Antiseptics can be subdivided into about eight classes of materials, which can be grouped according to their mechanism of action: small molecules that indiscriminately react with organic compounds and kill microorganisms (peroxides, iodine, phenols), and more complex molecules that disrupt the cell walls of bacteria.
Alcohols, including ethanol and 2-propanol/isopropanol, are sometimes referred to as surgical spirit. They are used to disinfect the skin before injections, among other uses.
Some common antiseptics:
Diguanides, including chlorhexidine gluconate, a bactericidal antiseptic which (with an alcoholic solvent) is the safest and most effective antiseptic for reducing the risk of infection after clean surgery, including tourniquet-controlled upper limb surgery. It is also used in mouthwashes to treat inflammation of the gums (gingivitis). Polyhexanide (polyhexamethylene biguanide, PHMB) is an antimicrobial compound suitable for clinical use in critically colonized or infected acute and chronic wounds. Its physicochemical action on the bacterial envelope prevents or impedes the development of resistant bacterial strains.
Some common antiseptics:
Iodine, especially in the form of povidone-iodine, is widely used because it is well tolerated; does not negatively affect wound healing; leaves a deposit of active iodine, thereby creating the so-called "remnant", or persistent, effect; and has a wide scope of antimicrobial activity. The traditional iodine antiseptic is an alcohol solution (called tincture of iodine) or Lugol's iodine solution. Some studies do not recommend disinfecting minor wounds with iodine because of concern that it may induce scar tissue formation and increase healing time. However, concentrations of 1% iodine or less have not been shown to increase healing time and are not otherwise distinguishable from treatment with saline. Iodine will kill all principal pathogens and, given enough time, even spores, which are considered to be the most difficult form of microorganism to inactivate with disinfectants and antiseptics.
Some common antiseptics:
Octenidine dihydrochloride, currently increasingly used in continental Europe, often as a chlorhexidine substitute.
Peroxides, such as hydrogen peroxide and benzoyl peroxide. Commonly, 3% solutions of hydrogen peroxide have been used in household first aid for scrapes and similar minor wounds. However, the strong oxidization causes scar formation and increases healing time.
Phenols such as phenol itself (as introduced by Lister) and triclosan, hexachlorophene, chlorocresol, and chloroxylenol. The latter is used for skin disinfection and cleaning surgical instruments. It is also used within a number of household disinfectants and wound cleaners.
Quaternary ammonium ("quat") salts such as benzalkonium chloride (combined with lidocaine under the trade name Bactine, among others), cetylpyridinium chloride, or cetrimide. These surfactants disrupt cell walls.
Quinolines such as hydroxyquinoline, dequalinium chloride, or chlorquinaldol.
**Photosynthesis**
Photosynthesis:
Photosynthesis (FOH-tə-SINTH-ə-sis) is a biological process used by many cellular organisms to convert light energy into chemical energy, which is stored in organic compounds that can later be metabolized through cellular respiration to fuel the organism's activities. The term usually refers to oxygenic photosynthesis, where oxygen is produced as a byproduct and some of the chemical energy produced is stored in carbohydrate molecules such as sugars, starch and cellulose, which are synthesized in an endergonic reaction of carbon dioxide with water. Most plants, algae and cyanobacteria perform photosynthesis; such organisms are called photoautotrophs. Photosynthesis is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere, and supplies most of the biological energy necessary for complex life on Earth. Some bacteria also perform anoxygenic photosynthesis, which uses bacteriochlorophyll and splits hydrogen sulfide as a reductant instead of water, producing sulfur instead of oxygen as a byproduct. Archaea such as Halobacterium also perform a type of non-carbon-fixing anoxygenic photosynthesis, in which the simpler photopigment retinal and its microbial rhodopsin derivatives are used to absorb green light and power proton pumps to directly synthesize adenosine triphosphate (ATP). Such archaeal photosynthesis might have been the earliest form of photosynthesis to evolve on Earth, going back as far as the Paleoarchean, preceding that of cyanobacteria (see Purple Earth hypothesis).
Photosynthesis:
Although photosynthesis is performed differently by different species, the process always begins when energy from light is absorbed by proteins called reaction centers that contain photosynthetic pigments or chromophores. In plants, these proteins are chlorophyll (a porphyrin derivative that absorbs the red and blue spectrums of light, thus reflecting a green color) held inside organelles called chloroplasts, which are most abundant in leaf cells, while in bacteria they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two further compounds that serve as short-term stores of energy, enabling its transfer to drive other reactions: these compounds are reduced nicotinamide adenine dinucleotide phosphate (NADPH) and adenosine triphosphate (ATP), the "energy currency" of cells.
Photosynthesis:
In plants, algae and cyanobacteria, sugars are synthesized by a subsequent sequence of light-independent reactions called the Calvin cycle. In the Calvin cycle, atmospheric carbon dioxide is incorporated into already existing organic carbon compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. In other bacteria, different mechanisms such as the reverse Krebs cycle are used to achieve the same end.
Photosynthesis:
The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed directly to the oxygenation of the Earth, which rendered the evolution of complex life possible. Today, the average rate of energy capture by photosynthesis globally is approximately 130 terawatts, which is about eight times the current power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion tons (91–104 petagrams, or billion metric tons) of carbon into biomass per year. That plants receive some energy from light – in addition to air, soil, and water – was first discovered in 1779 by Jan Ingenhousz.
Photosynthesis:
Photosynthesis is vital for climate processes, as it captures carbon dioxide from the air and then binds carbon in plants and further in soils and harvested products. Cereals alone are estimated to bind 3,825 Tg (teragrams) or 3.825 Pg (petagrams) of carbon dioxide every year, i.e. 3.825 billion metric tons.
Overview:
Most photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms use carbon dioxide as a source of carbon atoms to carry out photosynthesis; photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon. In plants, algae, and cyanobacteria, photosynthesis releases oxygen. This oxygenic photosynthesis is by far the most common type of photosynthesis used by living organisms. Some shade-loving plants (sciophytes) produce such low levels of oxygen during photosynthesis that they use all of it themselves instead of releasing it to the atmosphere. Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. There are also many varieties of anoxygenic photosynthesis, used mostly by bacteria, which consume carbon dioxide but do not release oxygen.

Carbon dioxide is converted into sugars in a process called carbon fixation; photosynthesis captures energy from sunlight to convert carbon dioxide into carbohydrates. Carbon fixation is an endothermic redox reaction. In general outline, photosynthesis is the opposite of cellular respiration: while photosynthesis is a process of reduction of carbon dioxide to carbohydrates, cellular respiration is the oxidation of carbohydrates or other nutrients to carbon dioxide. Nutrients used in cellular respiration include carbohydrates, amino acids and fatty acids. These nutrients are oxidized to produce carbon dioxide and water, and to release chemical energy to drive the organism's metabolism. Photosynthesis and cellular respiration are distinct processes, as they take place through different sequences of chemical reactions and in different cellular compartments.

The general equation for photosynthesis as first proposed by Cornelis van Niel is:

CO2 (carbon dioxide) + 2 H2A (electron donor) + photons (light energy) → [CH2O] (carbohydrate) + 2 A (oxidized electron donor) + H2O (water)

Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is:

CO2 (carbon dioxide) + 2 H2O (water) + photons (light energy) → [CH2O] (carbohydrate) + O2 (oxygen) + H2O (water)

This equation emphasizes that water is both a reactant in the light-dependent reaction and a product of the light-independent reaction, but canceling one water molecule from each side gives the net equation:

CO2 (carbon dioxide) + H2O (water) + photons (light energy) → [CH2O] (carbohydrate) + O2 (oxygen)

Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example, some microbes use sunlight to oxidize arsenite to arsenate. The equation for this reaction is:

CO2 (carbon dioxide) + AsO₃³⁻ (arsenite) + photons (light energy) → AsO₄³⁻ (arsenate) + CO (carbon monoxide, used to build other compounds in subsequent reactions)

Photosynthesis occurs in two stages. In the first stage, light-dependent reactions or light reactions capture the energy of light and use it to make the hydrogen carrier NADPH and the energy-storage molecule ATP. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide.
Overview:
Most organisms that use oxygenic photosynthesis use visible light for the light-dependent reactions, although at least three use shortwave infrared or, more specifically, far-red radiation. Some organisms employ even more radical variants of photosynthesis. Some archaea use a simpler method that employs a pigment similar to those used for vision in animals. The bacteriorhodopsin changes its configuration in response to sunlight, acting as a proton pump. This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen, and seems to have evolved separately from the more common types of photosynthesis.
Photosynthetic membranes and organelles:
In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called intracytoplasmic membranes. These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb. In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system.
Photosynthetic membranes and organelles:
Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors.
Photosynthetic membranes and organelles:
These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex. Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many Euphorbia and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to minimize heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place.
Light-dependent reactions:
In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is taken up by a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases oxygen.
Light-dependent reactions:
The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is: 2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → 2 NADPH + 2 H+ + 3 ATP + O2. Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with absorption peaks in violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above-ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms.
Light-dependent reactions:
Z scheme: In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic.
Light-dependent reactions:
In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram at right). The absorption of a photon by the antenna complex loosens an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That loosened electron is taken up by the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called Z-scheme shown in the diagram), a chemiosmotic potential is generated by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the Z-scheme. The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the coenzyme NADP with a H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends.
Light-dependent reactions:
The cyclic reaction is similar to that of the non-cyclic but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name cyclic reaction.
Light-dependent reactions:
Water photolysis: Linear electron transport through a photosystem will leave the reaction center of that photosystem oxidized. Elevating another electron will first require re-reduction of the reaction center. The excited electrons lost from the reaction center (P700) of photosystem I are replaced by transfer from plastocyanin, whose electrons come from electron transport through photosystem II. Photosystem II, as the first step of the Z-scheme, requires an external source of electrons to reduce its oxidized chlorophyll a reaction center. The source of electrons for photosynthesis in green plants and cyanobacteria is water. Two water molecules are oxidized by the energy of four successive charge-separation reactions of photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions. The electrons yielded are transferred to a redox-active tyrosine residue that is oxidized by the energy of P680+. This resets the ability of P680 to absorb another photon and release another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction (Kok's S-state diagrams). The hydrogen ions are released in the thylakoid lumen and therefore contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen and its energy for cellular respiration, including photosynthetic organisms.
Light-independent reactions:
Calvin cycle: In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin cycle, uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is: 3 CO2 + 9 ATP + 6 NADPH + 6 H+ → C3H6O3-phosphate + 9 ADP + 8 Pi + 6 NADP+ + 3 H2O. Carbon fixation produces the three-carbon sugar intermediate, which is then converted into the final carbohydrate products. The simple carbon sugars produced by photosynthesis are then used to form other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the carbon and energy from plants is passed through a food chain.
Light-independent reactions:
The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (5 out of 6 molecules) of the glyceraldehyde 3-phosphate produced are used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch and cellulose, as well as glucose and fructose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids.
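The "5 out of 6" figure follows from simple carbon bookkeeping over one turn of the cycle that fixes three molecules of CO2 (a sketch; G3P denotes glyceraldehyde 3-phosphate):

```latex
\[
3\,\mathrm{RuBP}\ (15\,\mathrm{C}) + 3\,\mathrm{CO_2}\ (3\,\mathrm{C})
\longrightarrow 6\,\mathrm{G3P}\ (18\,\mathrm{C})
\]
\[
5\,\mathrm{G3P}\ (15\,\mathrm{C}) \longrightarrow 3\,\mathrm{RuBP}\ (15\,\mathrm{C}),
\qquad 1\,\mathrm{G3P}\ (3\,\mathrm{C})\ \text{is the net product}
\]
```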
Light-independent reactions:
Carbon concentrating mechanisms. On land: In hot and dry conditions, plants close their stomata to prevent water loss. Under these conditions, CO2 will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase and a decrease in carbon fixation. Some plants have evolved mechanisms to increase the CO2 concentration in the leaves under these conditions.

Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where CO2 released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity to the three-carbon 3-phosphoglyceric acids. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases CO2 fixation and, thus, the photosynthetic capacity of the leaf. C4 plants can produce more sugar than C3 plants in conditions of high light and temperature. Many important crop plants are C4 plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use C3 carbon fixation, compared to 3% that use C4 carbon fixation; however, the evolution of C4 in over 60 plant lineages makes it a striking example of convergent evolution. C2 photosynthesis, which involves carbon concentration by selective breakdown of photorespiratory glycine, is both an evolutionary precursor to C4 and a useful CCM in its own right.

Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to C4 metabolism, which spatially separates the CO2 fixation to PEP from the Calvin cycle, CAM temporally separates these two processes. CAM plants have a different leaf anatomy from C3 plants, and fix the CO2 at night, when their stomata are open. CAM plants store the CO2 mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases CO2 inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. CAM is used by 16,000 species of plants.

Calcium oxalate accumulating plants, such as Amaranthus hybridus and Colobanthus quitensis, show a variation of photosynthesis where calcium oxalate crystals function as dynamic carbon pools, supplying carbon dioxide (CO2) to photosynthetic cells when stomata are partially or totally closed. This process was named alarm photosynthesis. Under stress conditions (e.g. water deficit), oxalate released from calcium oxalate crystals is converted to CO2 by an oxalate oxidase enzyme, and the produced CO2 can support the Calvin cycle reactions. Reactive hydrogen peroxide (H2O2), the byproduct of the oxalate oxidase reaction, can be neutralized by catalase.
Alarm photosynthesis represents a photosynthetic variant to be added to the well-known C4 and CAM pathways. However, alarm photosynthesis, in contrast to these pathways, operates as a biochemical pump that collects carbon from the organ interior (or from the soil) and not from the atmosphere.
Light-independent reactions:
In water: Cyanobacteria possess carboxysomes, which increase the concentration of CO2 around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome, releases CO2 from dissolved bicarbonate ions (HCO3−). Before the CO2 diffuses out, it is quickly sponged up by RuBisCO, which is concentrated within the carboxysomes. HCO3− ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO3− ions to accumulate within the cell, from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate CO2 around RuBisCO.
Order and kinetics:
The overall process of photosynthesis takes place in four stages: energy transfer in antenna chlorophyll (on a femtosecond-to-picosecond timescale), transfer of electrons in photochemical reactions (picoseconds to nanoseconds), the electron transport chain and ATP synthesis (microseconds to milliseconds), and carbon fixation and export of stable products (milliseconds to seconds).
Efficiency:
Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%. Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1–2%) re-emitted as chlorophyll fluorescence at longer (redder) wavelengths. This fact allows measurement of the light reaction of photosynthesis by using chlorophyll fluorometers. Plants' actual photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature and the proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices.
Efficiency:
Scientists are studying photosynthesis in hopes of developing plants with increased yield. The efficiency of both light and dark reactions can be measured, but the relationship between the two can be complex. For example, the ATP and NADPH energy molecules, created by the light reaction, can be used for carbon fixation or for photorespiration in C3 plants. Electrons may also flow to other electron sinks. For this reason, it is not uncommon for authors to differentiate between work done under non-photorespiratory conditions and under photorespiratory conditions.

Chlorophyll fluorescence of photosystem II can measure the light reaction, and infrared gas analyzers can measure the dark reaction. It is also possible to investigate both at the same time using an integrated chlorophyll fluorometer and gas exchange system, or by using two separate systems together. Infrared gas analyzers and some moisture sensors are sensitive enough to measure the photosynthetic assimilation of CO2, and of ΔH2O, using reliable methods. CO2 is commonly measured in μmol·m−2·s−1, parts per million or volume per million, and H2O is commonly measured in mmol·m−2·s−1 or in mbar. By measuring CO2 assimilation, ΔH2O, leaf temperature, barometric pressure, leaf area, and photosynthetically active radiation (PAR), it becomes possible to estimate "A" or carbon assimilation, "E" or transpiration, "gs" or stomatal conductance, and Ci or intracellular CO2. However, it is more common to use chlorophyll fluorescence for plant stress measurement, where appropriate, because the most commonly used parameters FV/FM and Y(II) or F/FM' can be measured in a few seconds, allowing the investigation of larger plant populations.

Gas exchange systems that offer control of CO2 levels, above and below ambient, allow the common practice of measuring A/Ci curves, at different CO2 levels, to characterize a plant's photosynthetic response. Integrated chlorophyll fluorometer and gas exchange systems allow a more precise measure of photosynthetic response and mechanisms. While standard gas exchange photosynthesis systems can measure Ci, or substomatal CO2 levels, the addition of integrated chlorophyll fluorescence measurements allows a more precise measurement of CC to replace Ci. The estimation of CO2 at the site of carboxylation in the chloroplast, or CC, becomes possible with the measurement of mesophyll conductance or gm using an integrated system.

Photosynthesis measurement systems are not designed to directly measure the amount of light absorbed by the leaf, but analysis of chlorophyll fluorescence, P700- and P515-absorbance and gas exchange measurements reveals detailed information about, for example, the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments, even the wavelength dependency of the photosynthetic efficiency can be analyzed.

A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an alga, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure named a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form accessible to the cell's metabolism.
The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time.
Efficiency:
Because this quantum walking takes place at temperatures far higher than those at which quantum phenomena usually occur, it is only possible over very short distances. Obstacles in the form of destructive interference cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks.
Evolution:
Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, and to have used molecules other than water as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as electron donors. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing at that time. Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old. More recent studies also suggest that photosynthesis may have begun about 3.4 billion years ago. The main source of oxygen in the Earth's atmosphere derives from oxygenic photosynthesis, and its appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around 2 billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic, using water as an electron donor, which is oxidized to molecular oxygen in the photosynthetic reaction center.
Evolution:
Symbiosis and the origin of chloroplasts: Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges and sea anemones. It is presumed that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies (see Kleptoplasty). This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins that they need to survive. An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosomes, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells, and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR Hypothesis proposes that this co-location of genes with their gene products is required for redox regulation of gene expression, and accounts for the persistence of DNA in bioenergetic organelles.
Evolution:
Photosynthetic eukaryotic lineages (symbiotic and kleptoplastic organisms excluded) include: the glaucophytes and the red and green algae—clade Archaeplastida (uni- and multicellular); the cryptophytes—clade Cryptista (unicellular); the haptophytes—clade Haptista (unicellular); the dinoflagellates and chromerids in the superphylum Myzozoa, and Pseudoblepharisma in the phylum Ciliophora—clade Alveolata (unicellular); the ochrophytes—clade Stramenopila (uni- and multicellular); the chlorarachniophytes and 3 species of Paulinella in the phylum Cercozoa—clade Rhizaria (unicellular); and the euglenids—clade Excavata (unicellular). Except for the euglenids, which are found within the Excavata, all of these belong to the Diaphoretickes. Archaeplastida and the photosynthetic Paulinella got their plastids, which are surrounded by two membranes, through primary endosymbiosis in two separate events, by engulfing a cyanobacterium. The plastids in all the other groups have either a red or green algal origin, and are referred to as the "red lineages" and the "green lineages". The only known exception is the ciliate Pseudoblepharisma tenue, which in addition to its plastids that originated from green algae also has a purple sulfur bacterium as symbiont. In dinoflagellates and euglenids the plastids are surrounded by three membranes, and in the remaining lineages by four. A nucleomorph, the remnant of the original algal nucleus located between the inner and outer membranes of the plastid, is present in the cryptophytes (from a red alga) and chlorarachniophytes (from a green alga).
Evolution:
Some dinoflagellates that lost their photosynthetic ability later regained it again through new endosymbiotic events with different algae.
While able to perform photosynthesis, many of these eukaryotic groups are mixotrophs and practice heterotrophy to various degrees.
Evolution:
Cyanobacteria and the evolution of photosynthesis The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria (formerly called blue-green algae), which are the only prokaryotes performing oxygenic photosynthesis. The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of cyanobacteria. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined cyanobacteria as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–66 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae.
Experimental history:
Discovery: Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century.
Experimental history:
Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil used by a plant and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate – much of the gained mass comes from carbon dioxide as well as water. However, this pointed to the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself.
Experimental history:
Joseph Priestley, a chemist and minister, discovered that when he isolated a volume of air under an inverted jar and burned a candle in it (which gave off CO2), the candle would burn out very quickly, well before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that the air that had been "injured" by the candle and the mouse could be restored by a plant. In 1779, Jan Ingenhousz repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours. In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which photosynthesis is used to produce food (such as glucose) was outlined.
Experimental history:
Refinements: Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria, he was the first to demonstrate that photosynthesis is a light-dependent redox reaction, in which hydrogen reduces (donates its atoms as electrons and protons to) carbon dioxide.
Experimental history:
Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With red light alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing wavelengths of up to 600 nm, the other up to 700 nm. The former is known as PSII, the latter is PSI. PSI contains only chlorophyll "a"; PSII contains primarily chlorophyll "a" with most of the available chlorophyll "b", among other pigments. These include phycobilins, which are the red and blue pigments of red and blue algae, respectively, and fucoxanthin for brown algae and diatoms. The process is most productive when the absorption of quanta is equal in both PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII systems, which in turn powers the photochemistry. Robert Hill thought that a complex of reactions consisted of an intermediate to cytochrome b6 (now a plastoquinone), and that another was from cytochrome f to a step in the carbohydrate-generating mechanisms. These are linked by plastoquinone, which does require energy to reduce cytochrome f. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water were performed by Hill in 1937 and 1939. He showed that isolated chloroplasts give off oxygen in the presence of unnatural reducing agents like iron oxalate, ferricyanide or benzoquinone after exposure to light. In the Hill reaction, 2 H2O + 2 A + (light, chloroplasts) → 2 AH2 + O2, where A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved. Samuel Ruben and Martin Kamen used radioactive isotopes to determine that the oxygen liberated in photosynthesis came from the water.
Experimental history:
Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is known as the Calvin cycle, but many scientists refer to it as the Calvin-Benson, Benson-Calvin, or even Calvin-Benson-Bassham (or CBB) Cycle.
Nobel Prize–winning scientist Rudolph A. Marcus was later able to discover the function and significance of the electron transport chain.
Otto Heinrich Warburg and Dean Burk discovered the I-quantum photosynthesis reaction that splits CO2, activated by respiration. In 1950, the first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler, using intact Chlorella cells and interpreting his findings as light-dependent ATP formation.
Experimental history:
In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of 32P. Louis N. M. Duysens and Jan Amesz discovered that chlorophyll "a" will absorb light of one wavelength and oxidize cytochrome f, while chlorophyll "a" (and other pigments) will absorb light of another wavelength but will reduce this same oxidized cytochrome, establishing that the two light reactions operate in series.
Experimental history:
Development of the concept: In 1893, Charles Reid Barnes proposed two terms, photosyntax and photosynthesis, for the biological process of synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light. The term photosynthesis is derived from the Greek phōs (φῶς, gleam) and sýnthesis (σύνθεσις, arranging together), while the other word that he designated was photosyntax, from sýntaxis (σύνταξις, configuration). Over time, the term photosynthesis came into common usage. The later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term.
Experimental history:
C3 : C4 photosynthesis research: In the late 1940s at the University of California, Berkeley, the details of photosynthetic carbon metabolism were sorted out by the chemists Melvin Calvin, Andrew Benson, James Bassham and a score of students and researchers utilizing the carbon-14 isotope and paper chromatography techniques. The pathway of CO2 fixation by the alga Chlorella in a fraction of a second in light resulted in a three-carbon molecule called phosphoglyceric acid (PGA). For that original and ground-breaking work, a Nobel Prize in Chemistry was awarded to Melvin Calvin in 1961. In parallel, plant physiologists studied leaf gas exchanges using the new method of infrared gas analysis and a leaf chamber, where the net photosynthetic rates ranged from 10 to 13 μmol CO2·m−2·s−1, with the conclusion that all terrestrial plants have the same photosynthetic capacities, light-saturated at less than 50% of sunlight.

Later, in 1958–1963 at Cornell University, field-grown maize was reported to have much greater leaf photosynthetic rates of 40 μmol CO2·m−2·s−1 and to not be saturated at near full sunlight. This higher rate in maize was almost double those observed in other species such as wheat and soybean, indicating that large differences in photosynthesis exist among higher plants. At the University of Arizona, detailed gas exchange research on more than 15 species of monocots and dicots uncovered for the first time that differences in leaf anatomy are crucial factors in differentiating photosynthetic capacities among species. In tropical grasses, including maize, sorghum, sugarcane and Bermuda grass, and in the dicot amaranthus, leaf photosynthetic rates were around 38–40 μmol CO2·m−2·s−1, and the leaves have two types of green cells, i.e. an outer layer of mesophyll cells surrounding tightly packed chlorophyllous vascular bundle sheath cells. This type of anatomy was termed Kranz anatomy in the 19th century by the botanist Gottlieb Haberlandt while studying the leaf anatomy of sugarcane. Plant species with the greatest photosynthetic rates and Kranz anatomy showed no apparent photorespiration, very low CO2 compensation points, high optimum temperatures, high stomatal resistances and lower mesophyll resistances for gas diffusion, and rates that never saturated at full sunlight. The research at Arizona was designated a Citation Classic in 1986.

These species were later termed C4 plants, as the first stable compound of CO2 fixation in light has four carbons, as malate and aspartate. Other species that lack Kranz anatomy were termed C3 type, such as cotton and sunflower, as the first stable carbon compound is the three-carbon PGA. At 1000 ppm CO2 in the measuring air, both the C3 and C4 plants had similar leaf photosynthetic rates of around 60 μmol CO2·m−2·s−1, indicating the suppression of photorespiration in C3 plants.
Factors:
There are four main factors influencing photosynthesis and several corollary factors. The four main factors are light irradiance and wavelength, water absorption, carbon dioxide concentration, and temperature. Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Factors:
Light intensity (irradiance), wavelength and temperature The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life. The radiation climate within plant communities is extremely variable, in both time and space.
In the early 20th century, Frederick Blackman and Gabrielle Matthaei investigated the effects of light intensity (irradiance) and temperature on the rate of carbon assimilation.
At constant temperature, the rate of carbon assimilation varies with irradiance, increasing as the irradiance increases, but reaching a plateau at higher irradiance.
Factors:
At low irradiance, increasing the temperature has little influence on the rate of carbon assimilation. At constant high irradiance, the rate of carbon assimilation increases as the temperature is increased. These two experiments illustrate several important points: First, it is known that, in general, photochemical reactions are not affected by temperature. However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation. These are the light-dependent, temperature-independent 'photochemical' stage and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which reside several meters underwater, cannot receive the correct wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, cyanobacteria have a light-harvesting complex called the phycobilisome. This complex is made up of a series of proteins with different pigments which surround the reaction center.
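Blackman's two-stage behaviour described above can be sketched with a minimal limiting-factor model in code. This is purely illustrative: the function and all constants below are hypothetical, chosen only to reproduce the qualitative shape of the experiments (a light-limited linear regime and a temperature-sensitive plateau), not fitted to any real data.

```python
# Illustrative limiting-factor model of carbon assimilation.
# All constants are hypothetical; only the qualitative behaviour matters.

def assimilation_rate(irradiance: float, temp_c: float) -> float:
    """Assimilation rate (arbitrary units), capped by the scarcer factor."""
    light_limited = 0.05 * irradiance  # photochemical stage: ~linear in light, temperature-independent
    q10 = 2.0                          # assumed Q10 for the enzymatic (temperature-dependent) stage
    temp_limited = 10.0 * q10 ** ((temp_c - 25.0) / 10.0)
    return min(light_limited, temp_limited)

# Low irradiance: the rate is light-limited, so temperature barely matters.
print(assimilation_rate(50, 15), assimilation_rate(50, 30))     # 2.5 vs 2.5
# High irradiance: the rate plateaus and now responds to temperature.
print(assimilation_rate(1000, 15), assimilation_rate(1000, 30)) # 5.0 vs ~14.1
```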
Factors:
Carbon dioxide levels and photorespiration As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy, but does not produce sugars.
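Since CO2 and O2 compete for the same RuBisCO active site, the balance between carboxylation and oxygenation can be sketched with textbook competitive-substrate kinetics, in which the velocity ratio equals a specificity factor times the CO2/O2 concentration ratio. The sketch below is a simplification under that assumption; the specificity factor (~80, a common textbook figure for higher plants) and the concentrations are illustrative, not measured values.

```python
def carboxylation_fraction(co2_um: float, o2_um: float, s_rel: float = 80.0) -> float:
    """Fraction of RuBisCO turnover that fixes CO2 rather than O2.

    Assumes competitive substrates: v_c / v_o = s_rel * [CO2] / [O2],
    so the carboxylating fraction is ratio / (1 + ratio).
    """
    ratio = s_rel * co2_um / o2_um
    return ratio / (1.0 + ratio)

# Roughly air-equilibrated stroma (~8 uM CO2, ~250 uM O2): mostly carboxylation.
print(f"{carboxylation_fraction(8, 250):.2f}")  # ~0.72
# When CO2 drops, oxygenation (and hence photorespiration) rises.
print(f"{carboxylation_fraction(2, 250):.2f}")  # ~0.39
```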
Factors:
RuBisCO oxygenase activity is disadvantageous to plants for several reasons: One product of oxygenase activity is phosphoglycolate (2-carbon) instead of 3-phosphoglycerate (3-carbon). Phosphoglycolate cannot be metabolized by the Calvin-Benson cycle and represents carbon lost from the cycle. A high oxygenase activity, therefore, drains the sugars that are required to recycle ribulose 1,5-bisphosphate and for the continuation of the Calvin-Benson cycle.
Factors:
Phosphoglycolate is quickly metabolized to glycolate that is toxic to a plant at a high concentration; it inhibits photosynthesis.
Factors:
Salvaging glycolate is an energetically expensive process that uses the glycolate pathway, and only 75% of the carbon is returned to the Calvin-Benson cycle as 3-phosphoglycerate: two two-carbon glycolate molecules carry four carbons, of which three return as one molecule of 3-phosphoglycerate and one is lost as CO2. The reactions also produce ammonia (NH3), which is able to diffuse out of the plant, leading to a loss of nitrogen. A highly simplified summary is:

2 glycolate + ATP → 3-phosphoglycerate + carbon dioxide + ADP + NH3

The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Shear rate**
Shear rate:
In physics, shear rate is the rate at which a progressive shearing deformation is applied to some material.
Simple shear:
The shear rate for a fluid flowing between two parallel plates, one moving at a constant speed and the other one stationary (Couette flow), is defined by $\dot\gamma = \frac{v}{h}$, where: $\dot\gamma$ is the shear rate, measured in reciprocal seconds; $v$ is the velocity of the moving plate, measured in meters per second; $h$ is the distance between the two parallel plates, measured in meters. Or, in tensor form: $\dot\gamma_{ij} = \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}$.
Simple shear:
For the simple shear case, it is just a gradient of velocity in a flowing material. The SI unit of measurement for shear rate is s−1, expressed as "reciprocal seconds" or "inverse seconds". However, when modelling fluids in 3D, it is common to consider a scalar value for the shear rate by calculating the second invariant of the strain-rate tensor: $\dot\gamma = \sqrt{2\,\varepsilon : \varepsilon}$. The shear rate at the inner wall of a Newtonian fluid flowing within a pipe is $\dot\gamma = \frac{8v}{d}$, where: $\dot\gamma$ is the shear rate, measured in reciprocal seconds; $v$ is the linear fluid velocity; $d$ is the inside diameter of the pipe. The linear fluid velocity $v$ is related to the volumetric flow rate $Q$ by $v = \frac{Q}{A}$, where $A$ is the cross-sectional area of the pipe, which for an inside pipe radius of $r$ is given by $A = \pi r^2$, thus producing $v = \frac{Q}{\pi r^2}$.
Simple shear:
Substituting the above into the earlier equation for the shear rate of a Newtonian fluid flowing within a pipe, and noting (in the denominator) that $d = 2r$: $\dot\gamma = \frac{8v}{d} = \frac{8\,(Q/\pi r^2)}{2r}$, which simplifies to the following equivalent form for wall shear rate in terms of volumetric flow rate $Q$ and inner pipe radius $r$: $\dot\gamma = \frac{4Q}{\pi r^3}$.
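As a quick numerical cross-check of the two equivalent forms above (a minimal sketch assuming a Newtonian fluid and SI units):

```python
import math

def wall_shear_rate(q: float, r: float) -> float:
    """Inner-wall shear rate (1/s) of pipe flow: gamma_dot = 4Q / (pi r^3)."""
    return 4.0 * q / (math.pi * r ** 3)

# Example: 1 L/s (1e-3 m^3/s) through a pipe of 2 cm inner radius.
q, r = 1e-3, 0.02
gamma_dot = wall_shear_rate(q, r)

# Cross-check against gamma_dot = 8v/d with v = Q / (pi r^2) and d = 2r.
v = q / (math.pi * r ** 2)
assert math.isclose(gamma_dot, 8 * v / (2 * r))

print(f"{gamma_dot:.1f} 1/s")  # ~159.2 1/s
```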
For a Newtonian fluid, wall shear stress ($\tau_w$) can be related to shear rate by $\tau_w = \mu\,\dot\gamma$, where $\mu$ is the dynamic viscosity of the fluid. For non-Newtonian fluids, there are different constitutive laws depending on the fluid, which relate the stress tensor to the shear rate tensor. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Calkin algebra**
Calkin algebra:
In functional analysis, the Calkin algebra, named after John Williams Calkin, is the quotient of B(H), the ring of bounded linear operators on a separable infinite-dimensional Hilbert space H, by the ideal K(H) of compact operators. Here the addition in B(H) is addition of operators and the multiplication in B(H) is composition of operators; it is easy to verify that these operations make B(H) into a ring. When scalar multiplication is also included, B(H) becomes in fact an algebra over the same field over which H is a Hilbert space.
Properties:
Since K(H) is a maximal norm-closed ideal in B(H), the Calkin algebra is simple. In fact, K(H) is the only nontrivial, proper closed ideal in B(H). As a quotient of a C*-algebra by a two-sided ideal, the Calkin algebra is a C*-algebra itself, and there is a short exact sequence $0 \to K(H) \to B(H) \to B(H)/K(H) \to 0$ which induces a six-term cyclic exact sequence in K-theory. Those operators in B(H) which are mapped to an invertible element of the Calkin algebra are called Fredholm operators, and their index can be described both using K-theory and directly. One can conclude, for instance, that the collection of unitary operators in the Calkin algebra consists of homotopy classes indexed by the integers Z. This is in contrast to B(H), where the unitary operators are path connected. As a C*-algebra, the Calkin algebra is not isomorphic to an algebra of operators on a separable Hilbert space. The Gelfand–Naimark–Segal construction implies that the Calkin algebra is isomorphic to an algebra of operators on a nonseparable Hilbert space, but while for many other C*-algebras there are explicit descriptions of such Hilbert spaces, the Calkin algebra does not have an explicit representation. The existence of an outer automorphism of the Calkin algebra was shown to be independent of ZFC by work of Phillips and Weaver, and of Farah.
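To make the K-theoretic remark concrete, here is the standard computation, sketched in LaTeX, writing $Q(H)$ for the Calkin algebra and using the well-known groups $K_0(K(H)) \cong \mathbb{Z}$, $K_1(K(H)) = 0$ and $K_0(B(H)) = K_1(B(H)) = 0$:

```latex
\[
\begin{array}{ccccc}
K_0(K(H)) \cong \mathbb{Z} & \longrightarrow & K_0(B(H)) = 0 & \longrightarrow & K_0(Q(H)) \\
\uparrow \partial & & & & \downarrow \\
K_1(Q(H)) & \longleftarrow & K_1(B(H)) = 0 & \longleftarrow & K_1(K(H)) = 0
\end{array}
\]
% Exactness gives K_0(Q(H)) = 0 and makes the index map
% \partial : K_1(Q(H)) -> K_0(K(H)) an isomorphism, so K_1(Q(H)) = Z:
% the homotopy class of a unitary in the Calkin algebra is detected by
% the Fredholm index of any of its lifts to B(H).
```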
Generalizations:
One can define a Calkin algebra for any infinite-dimensional complex Hilbert space, not just separable ones. An analogous construction can be made by replacing H with a Banach space; the resulting quotient is also called a Calkin algebra. The Calkin algebra is the corona algebra of the algebra of compact operators on a Hilbert space. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**JAM Message Base Format**
JAM Message Base Format:
The JAM Message Base Format was one of the most popular file formats of message bases on DOS-based BBSes in the 1990s. JAM stands for "Joaquim-Andrew-Mats" after the original authors of the API: Joaquim Homrighausen, Andrew Milner, Mats Birch, and Mats Wallin. Joaquim was the author of FrontDoor, a DOS-based FidoNet-compatible mailer. Andrew was the author of RemoteAccess, a popular DOS-based Bulletin Board System. JAM was originally released in 1993 in C; however, the most popular implementation was Mark May's "MK Source for Msg Access", written in Pascal, which also saw its initial release in 1993.
BBS software:
EleBBS
Ezycom
LoraBBS
MBSE
Mystic BBS
Nexus BBS
RemoteAccess
ProBoard
TAG (BBS)
TCRA32
Telegard
Tornado BBS
Mail import/export software:
AllFix - File Tosser (can read control messages from and post messages into a JAM messagebase)
Altair - FTN tosser
Crashmail II - A portable FidoNet tosser for JAM messagebases
FastEcho - FTN tosser
FMail - FTN tosser
GEcho - FTN tosser
HPT (Fidonet) - FTN tosser
IMail - FTN tosser
Mystic BBS - BBS software with built in JAM import/export
Partoss (Parma tosser) - FTN tosser
Regina-Tosser/2
TosScan - FTN tosser
WaterGate
xMail 1.00 - FTN tosser
Mail reading/editing software:
FrontDoor FM - Sysop's local access reader/editor from FrontDoor package
FrontDoor APX - Integrated reader/editor from FrontDoor APX package
GoldED - Sysop's local access reader/editor
Hector/DOS
RAVIP
ReadMsg - BBS door that replaces builtin message base option
TheReader v4.50 - BBS door that replaces builtin message base option
TimED - Sysop's local access reader/editor
WebJammer
Offline QWK/Bluewave software:
Bluewave
Jc-QWK
OffLine Message System (OLMS2000)
Mail posting tools:
(this software posts ASCII text files to JAM bases as messages)
ChargePost
JPost
MPost
PostIt - posts text files to local, netmail, and echomail areas
RemoteAccess Automated Message System (RAMS) - posts welcome, thanks for the upload, and similar automated messages to users
WriteJAM
Statistics tools:
(this software gathers statistical information)
JAMStat - statistics bulletin generator
MyMail
QRatio
ReadDetect
Traffic v1.10
Maintenance tools:
Automatic Maintenance Pro
CVTMSG10
Ftrack and RNtrack - netmail tracker (netmail manager)
Itrack - netmail tracker (netmail manager)
MK Message Utilities - convert between JAM and other message base formats
MNTrack - netmail tracker (netmail manager)
NetMgr 1.00 - netmail manager
The NetMail Importer (NetImp)
Y2Ktool - Fido Year 2000 Tools Rel. 6
Mail tools and utility software:
(this software fills some other utilitarian need not covered in another category listing; some of this software is listed here because it hasn't been categorized)
AMC Fidonet Awk Utility
FMACopy
MailBox 1.05
MessageBase Reporter
MSGRA
MSGRead 2.20
OM and OMlite
RACD
SHUT UP AND RUN THE MAIL
VPJAM
XSH
Other JAM capable software:
JamNNTPd - JAM-based NNTP server, uses the JAM message format
Message Base Spy - message base research, troubleshooting and development tool | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cambridge English Teaching Framework**
Cambridge English Teaching Framework:
The Cambridge English Teaching Framework is a professional development framework, designed by Cambridge English Language Assessment, which is used by English language teachers to self-assess and plan their own development.
Cambridge English Teaching Framework:
The framework describes four stages of a teacher's development (Foundation, Developing, Proficient and Expert) across five categories of teacher knowledge and skills: Learning and the Learner; Teaching, Learning and Assessment; Language Ability; Language Knowledge and Awareness; and Professional Development and Values. Each category describes the key competencies for effective teaching at each stage of a teacher's development. The five categories are then divided into a number of components so that teachers can identify specific needs. English language teachers use the framework to self-assess where they are in their career, decide where they want to go next, think about the knowledge and skills they would like to develop and identify the courses, qualifications and resources which will help them to progress.
History:
The Cambridge English Teaching Framework was designed to encapsulate the key knowledge and skills needed for effective teaching at different levels and in different contexts, and to show how Cambridge English Teaching Courses, Qualifications and professional development resources map to this core syllabus of competencies.
History:
The framework was developed by experts at Cambridge English Language Assessment and validated by primary and secondary schools, private language schools and higher education providers around the world. During this validation stage, a group of teacher trainers were asked to match the framework competency statements to the different stages, in order to check whether the statements in the framework were placed at the correct stages. In addition, teachers around the world were asked to complete a questionnaire version of the framework. The teachers and their managers were then asked their opinion as to whether the framework had provided a fair assessment. The first version of the framework was launched in April 2014 at the International Association of Teachers of English as a Foreign Language (IATEFL) conference. This first version had four categories, with Language Ability originally covered in the ‘Language Knowledge and Awareness’ category. However, ongoing research showed that English language ability needed to be given greater prominence, so a Language Ability category was added in the second version of the framework, launched in September 2014.
Format:
The Cambridge English Teaching Framework is a profiling grid, rather than a performance assessment tool. It is designed to show stages of a teacher's development at different points in time, rather than provide a profile of ‘a good teacher’. This approach recognises that teachers’ development is not only defined by their years of experience, but that most teachers’ development will be ‘jagged’. At any one time, teachers will be at different stages across each of the categories of teaching knowledge and skills. On the horizontal axis of the profiling grid are four stages of teaching competence (Foundation, Developing, Proficient and Expert) and on the vertical axis of the grid are five categories of teaching knowledge and skills:
1. Learning and the Learner - This category looks at a teacher's understanding of key language learning theories and concepts, their awareness of different learning styles, and their ability to apply this understanding to plan and facilitate language learning.
2. Teaching, Learning and Assessment - This category looks at a teacher's ability to plan and manage language learning, make effective use of learning resources, understand teaching language systems and skills, and assess learning.
3. Language Ability - This category looks at a teacher's own language ability, their understanding of the language points taught at different levels of the Common European Framework of Reference for Languages (CEFR), and their ability to use language accurately and appropriately when interacting with learners and other teachers.
4. Language Knowledge and Awareness - This category looks at a teacher's understanding of key terms and concepts used to describe language, their use of strategies to check and develop their language awareness, and their ability to apply such knowledge practically in order to facilitate language learning.
5. Professional Development and Values - This category looks at understanding and practice in the areas of teacher learning, classroom observation, professional development and critical reflection.
Each category describes the key competencies for effective teaching at each stage of a teacher's development, as shown in the summary framework below. This summary version is for illustrative purposes only. Teachers should refer to the full competency statements or the Cambridge English Teacher Development Tracker when assessing their own development.
Support:
A free online tool is available for teachers to establish their current competency stage and identify their continuing professional development needs. The Cambridge English Teacher Development Tracker guides teachers through the framework categories with simple questions and a range of possible answers. Teachers can add other people (e.g. their managers or trainers) as reviewers. Reviewers can use the Tracker to compare the competency profiles of a number of teachers at the same time (e.g. to understand the skills profile of their whole team).
Teaching courses, qualifications and resources:
The Cambridge English Teaching Framework provides teachers with an overarching view of which Cambridge English Teaching Courses, Qualifications and resources correspond to each stage of a teacher's development.
Most of the teaching qualifications and courses above straddle more than one stage of teaching competence on the framework. For example, some CELTA candidates start the course with some previous classroom experience and may already be at the Developing stage, whereas other candidates start the course with no previous teaching experience and will therefore be at the Foundation stage.
Candidates ending a course and obtaining a qualification may also straddle different stages. For example, TKT modules are awarded a band from 1 to 4. Candidates who achieve a band 4 result are more likely to be at the Developing stage than those who achieve a band 1 result.
Teaching courses, qualifications and resources:
In addition to these teaching qualifications, Cambridge University Press English Language Teaching (ELT) was consulted during the development of the framework, and their teaching methodology books and materials are being mapped to the framework to aid teachers in their development. The content on the online professional membership, Cambridge English Teacher, is also mapped to the stages and categories of the framework, so that teachers who know where they are on the teaching framework can easily find relevant courses and resources for each framework stage and category.
Usage:
English language teachers use the framework to self-assess and plan their own development. In addition, Directors of Studies and Heads of Departments may choose to use the framework as the basis for professional development discussions with their staff and to set professional development goals for them. The framework is also being used by teacher training organisations, such as the Norwich Institute for Language Education (NILE), to develop and align their professional development courses for teachers. Once teachers have identified where they are on the framework, they can access recommendations for development activities and free development resources via the Cambridge English website. There are recommendations for every stage of each category, providing teachers with suggested reading, videos and actions to incorporate into their teaching practice, along with professional development courses and qualifications.
Research and development methodology:
The first stage in the development of the framework was a review of existing CPD frameworks in the field, including the following frameworks used in language education:
BALEAP Competency Framework for Teachers of English for Academic Purposes, UK (2008)
British Council CPD Framework for Teachers of English, UK (2011)
CAELA Framework for Professional Development, Center for Applied Linguistics, USA (2010)
National Board for Professional Teaching Standards (NBPTS), USA (2010) 2nd ed.
Research and development methodology:
The European Association for Quality Services (EAQUALS) Profiling Grid (2013)
and the following frameworks used in general education:
Australian Professional Standards for Teachers (APST), Australian Institute for Teaching and Schools Leadership, Australia (2011)
Competency Framework for Teachers (CFT), Departments of Education and Skills, Western Australia (2004)
Framework for Teaching, Association for Supervision and Curriculum Development, USA (2008)
Professional Standards for Teachers (PST), Department for Education, UK (2013).
The development of the Cambridge English Teaching Framework was also informed by Cambridge English Language Assessment's experience of developing teaching qualifications such as CELTA, Delta, ICELT and TKT. Data from teacher assessments, carried out as part of those qualifications, provided information about classroom practice and the processes that teachers use when planning and reflecting on their teaching, at different stages of their careers and in different contexts around the world. This evidence was used in designing the teaching framework, alongside an expert review of the CELTA, Delta and ICELT syllabuses and teaching education literature. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Neumann series**
Neumann series:
A Neumann series is a mathematical series of the form $\sum_{k=0}^{\infty} T^k$, where $T$ is an operator and $T^k := T^{k-1} \circ T$ is its $k$-times repeated application. This generalizes the geometric series.
The series is named after the mathematician Carl Neumann, who used it in 1877 in the context of potential theory. The Neumann series is used in functional analysis. It forms the basis of the Liouville-Neumann series, which is used to solve Fredholm integral equations. It is also important when studying the spectrum of bounded operators.
Properties:
Suppose that $T$ is a bounded linear operator on the normed vector space $X$. If the Neumann series converges in the operator norm, then $\mathrm{Id} - T$ is invertible and its inverse is the series: $(\mathrm{Id} - T)^{-1} = \sum_{k=0}^{\infty} T^k$, where $\mathrm{Id}$ is the identity operator in $X$. To see why, consider the partial sums $S_n := \sum_{k=0}^{n} T^k$. Then we have $\lim_{n\to\infty} S_n(\mathrm{Id} - T) = \lim_{n\to\infty} (\mathrm{Id} - T)S_n = \lim_{n\to\infty} (\mathrm{Id} - T^{n+1}) = \mathrm{Id}$.
Properties:
This result on operators is analogous to geometric series in $\mathbb{R}$, in which we find that: $(1-x)\cdot(1 + x + x^2 + \cdots + x^{n-1} + x^n) = 1 - x^{n+1}$, and $1 + x + x^2 + \cdots = \frac{1}{1-x}$.
One case in which convergence is guaranteed is when $X$ is a Banach space and $\|T\| < 1$ in the operator norm, or when $\sum_n \|T^n\|$ is convergent. However, there are also results which give weaker conditions under which the series converges.
Example:
Let $C \in \mathbb{R}^{3\times 3}$ be given by: $C = \begin{pmatrix} 0 & \tfrac{1}{2} & \tfrac{1}{4} \\ \tfrac{5}{7} & 0 & \tfrac{1}{7} \\ \tfrac{3}{10} & \tfrac{3}{5} & 0 \end{pmatrix}.$
We need to show that $C$ is smaller than unity in some norm. Therefore, we calculate the maximum absolute row sum: $\|C\|_\infty = \max_i \sum_j |c_{ij}| = \max\left(\tfrac{3}{4}, \tfrac{6}{7}, \tfrac{9}{10}\right) = \tfrac{9}{10} < 1.$
Thus, we know from the statement above that $(I - C)^{-1}$ exists.
Approximate matrix inversion:
A truncated Neumann series can be used for approximate matrix inversion. To approximate the inverse of an invertible matrix $A$, we can consider the linear operator $T(x) = (I - A)x$, where $I$ is the identity matrix. If the norm condition on $T$ is satisfied, then truncating the series at $n$, we get: $A^{-1} \approx \sum_{i=0}^{n} (I - A)^i.$
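A minimal numerical sketch of this approximation (assuming NumPy, and a matrix for which $\|I - A\| < 1$ in some induced norm):

```python
import numpy as np

def neumann_inverse(A: np.ndarray, n_terms: int) -> np.ndarray:
    """Approximate A^{-1} by the truncated Neumann series
    sum_{i=0}^{n} (I - A)^i, valid when ||I - A|| < 1 in some norm."""
    I = np.eye(A.shape[0])
    T = I - A
    total = np.zeros_like(A, dtype=float)
    power = I.copy()
    for _ in range(n_terms + 1):
        total += power        # add (I - A)^i
        power = power @ T     # next power of (I - A)
    return total

# A diagonally dominant example, so that ||I - A||_inf = 0.4 < 1.
A = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.3],
              [0.2, 0.1, 1.0]])
err = np.max(np.abs(neumann_inverse(A, 25) - np.linalg.inv(A)))
print(err)  # on the order of 1e-10: the series has nearly converged
```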
The set of invertible operators is open:
A corollary is that the set of invertible operators between two Banach spaces $B$ and $B'$ is open in the topology induced by the operator norm. Indeed, let $S : B \to B'$ be an invertible operator and let $T : B \to B'$ be another operator. If $\|S - T\| < \|S^{-1}\|^{-1}$, then $T$ is also invertible. Since $\|\mathrm{Id} - S^{-1}T\| < 1$, the Neumann series $\sum_k (\mathrm{Id} - S^{-1}T)^k$ is convergent. Therefore, we have $T^{-1}S = (\mathrm{Id} - (\mathrm{Id} - S^{-1}T))^{-1} = \sum_{k=0}^{\infty} (\mathrm{Id} - S^{-1}T)^k.$ Taking norms, we get $\|T^{-1}S\| \le \frac{1}{1 - \|\mathrm{Id} - S^{-1}T\|}.$ The norm of $T^{-1}$ can then be bounded by $\|T^{-1}\| \le \frac{\|S^{-1}\|}{1 - q}$, where $q = \|S - T\|\,\|S^{-1}\|$.
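Spelling out the omitted step, the hypothesis $\|S - T\| < \|S^{-1}\|^{-1}$ controls the series through submultiplicativity of the operator norm:

```latex
\[
\|\mathrm{Id} - S^{-1}T\| = \|S^{-1}(S - T)\|
  \le \|S^{-1}\|\,\|S - T\| = q < 1,
\]
% hence the Neumann series for T^{-1}S converges, and
\[
\|T^{-1}\| \le \|T^{-1}S\|\,\|S^{-1}\|
  \le \frac{\|S^{-1}\|}{1 - \|\mathrm{Id} - S^{-1}T\|}
  \le \frac{\|S^{-1}\|}{1 - q}.
\]
```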
Applications:
The Neumann series has been used for linear data detection in massive multiuser multiple-input multiple-output (MIMO) wireless systems. Using a truncated Neumann series avoids computation of an explicit matrix inverse, which reduces the complexity of linear data detection from cubic to quadratic. Another application is the theory of propagation graphs, which takes advantage of the Neumann series to derive closed-form expressions for the transfer function. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Inshes Formation**
Inshes Formation:
The Inshes Formation is a geologic formation in Scotland. It preserves fossils dating back to the Devonian period. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Comparison of lightweight web browsers**
Comparison of lightweight web browsers:
A lightweight web browser is a web browser that sacrifices some of the features of a mainstream web browser in order to reduce the consumption of system resources, and especially to minimize the memory footprint. The tables below compare notable lightweight web browsers. Several of them use a common layout engine, but each has a unique combination of features and a potential niche. The minimal user interface in surf, for example, does not have tabs, whereas xombrero can be driven with vi-like keyboard commands. Four of the browsers compared—Lynx, w3m, Links, and ELinks—are designed for text mode, and can function in a terminal emulator. Eww is limited to working within Emacs. Links 2 has both a text-based user interface and a graphical user interface. w3m is, in addition to being a web browser, also a terminal pager.
Features:
Test scores reflect the version of the browser engine in use. Generally, a lower score indicates an older version of the browser engine.
| kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metabolic myopathy**
Metabolic myopathy:
Metabolic myopathies are myopathies that result from defects in biochemical metabolism that primarily affect muscle. They are generally genetic defects that interfere with muscle's ability to create energy, causing a low ATP reservoir within the muscle cell.
Metabolic myopathy:
At the cellular level, metabolic myopathies lack some kind of enzyme or transport protein that prevents the chemical reactions necessary to create adenosine triphosphate (ATP). ATP is often referred to as the "molecular unit of currency" of intracellular energy transfer. The lack of ATP prevents the muscle cells from being able to function properly. Some people with a metabolic myopathy never develop symptoms due to the body's ability to produce enough ATP through alternative pathways (e.g. the majority of those with AMP-deaminase deficiency are asymptomatic).
Metabolic myopathy:
H2O + ATP → H+ + ADP + Pi + energy → muscle contraction

ATP is needed for muscle contraction by two processes: Firstly, ATP is needed for transport proteins to actively transport calcium ions into the sarcoplasmic reticulum (SR) of the muscle cell between muscle contractions. Afterwards, when a nerve signal is received, calcium channels in the SR open briefly and calcium rushes into the cytosol by selective diffusion (which does not use ATP) in what is called a "calcium spark." The diffusion of calcium ions into the cytosol causes the myosin strands of the myofibril to become exposed, and the myosin strands pull the actin microfilaments together. The muscle begins to contract.
Metabolic myopathy:
Secondly, ATP is needed to allow the myosin to release and pull again, so that the muscle can contract further in what is known as the sliding filament model. ATP is consumed at a high rate by contracting muscles. The need for ATP in muscle cells is illustrated by the phenomenon of rigor mortis, the muscle rigidity that occurs in dead bodies for a short time after death. In these muscles, all the ATP has been converted to ADP, and in the absence of further ATP being generated, the calcium transport proteins stop pumping calcium ions into the sarcoplasmic reticulum and the calcium ions gradually leak out. This causes the myosin proteins to grab the actin and pull once but, without a further supply of ATP, they cannot release and pull again. The muscles therefore remain rigid in the position at death until the binding of myosin to actin begins to break down and they become loose again.
Symptoms:
In the event that more ATP is needed from the affected pathway than it can supply, symptoms develop. People with a metabolic myopathy often experience symptoms such as: exercise intolerance; muscle fatigue; pain and cramping during and/or after exercise; heavy breathing, shortness of breath (dyspnea), or rapid breathing (tachypnea); inappropriate rapid heart rate in response to exercise (tachycardia); exaggerated cardiorespiratory (breath and heart rate combined) response to exercise (dyspnea/tachypnea and tachycardia); exercise-induced myogenic hyperuricemia (exercise-induced accelerated breakdown of purine nucleotides in muscle via the adenylate kinase reaction and purine nucleotide cycle); transient muscle contracture or pseudomyotonia (like a very bad cramp that can last for hours, which is myogenic and EMG-silent); progressive muscle weakness; a possible pseudoathletic appearance (hypertrophy or pseudohypertrophy), especially of the calves; myoglobinuria and considerable breakdown of muscle tissue (rhabdomyolysis). The degree of symptoms varies greatly from person to person and depends on the severity of the enzymatic or transport protein defect. In extreme cases it can lead to rhabdomyolysis. The symptoms experienced also depend on which metabolic pathway is impaired, as different metabolic pathways produce ATP at different time periods during activity and rest, as well as on the type of activity (anaerobic or aerobic) and its intensity (level of ATP consumption).
Symptoms:
A majority of patients with metabolic myopathies have dynamic rather than static findings, typically experiencing exercise intolerance, muscle pain, and cramps with exercise rather than fixed muscle weakness. However, a minority of metabolic myopathies have fixed muscular weakness rather than exercise intolerance, imitating an inflammatory myopathy or limb girdle muscular dystrophy. It is uncommon that both static and dynamic signs predominate.
Types:
Metabolic myopathies are generally caused by an inherited genetic mutation, an inborn error of metabolism. (In livestock, an acquired environmental GSD is caused by intoxication with the alkaloid castanospermine.) Metabolic myopathies cause the underproduction of adenosine triphosphate (ATP) within the muscle cell. The genetic mutation typically has an autosomal recessive inheritance pattern, making it fairly rare to inherit; even more rarely, it can be caused by a random de novo genetic mutation or be autosomal dominant. Metabolic myopathies are categorized by the metabolic pathway to which the deficient enzyme or transport protein belongs. The main categories of metabolic myopathies are listed below: Muscle glycogen storage diseases (Muscle GSDs) and other inborn errors of carbohydrate metabolism that affect muscle—defect in sugar (carbohydrate) metabolism. The deficiency occurs in the cytosol of the muscle cell.
Types:
Fatty acid metabolism disorder (fatty acid oxidation disorder, FAOD)—defect in fat (lipid) metabolism, anywhere along the pathway, starting from entering the muscle cell and ending at converting fatty acids into acetyl-CoA within the mitochondrion. The deficiency occurs in the cell membrane, cytosol, mitochondrial membrane, or within the mitochondrion of the muscle cell.
Nucleotide metabolism disorder—defect in purine nucleotide cycle enzyme (such as AMP deaminase deficiency). Purine nucleotide metabolism is a part of protein catabolism, and the purine nucleotide cycle occurs within the cytosol of the muscle cell.
Mitochondrial myopathy—defect in mitochondrial enzymes or transport proteins for oxidative phosphorylation (including citric acid cycle and electron transport chain), excluding those for fatty acid oxidation. Occurs in the mitochondrial membrane or within the mitochondrion of the muscle cell.
Diagnosis:
The symptoms of a metabolic myopathy can be easily confused with the symptoms of another disease. As genetic sequencing research progresses, a non-invasive neuromuscular panel DNA test can help make a diagnosis. If the DNA test is inconclusive (negative or VUS), then a muscle biopsy is necessary for an accurate diagnosis.
Diagnosis:
A blood test for creatine kinase (CK) can be done under normal circumstances to test for signs of tissue breakdown, or with an added cardio portion that can indicate if muscle breakdown is occurring. An electromyography (EMG) test is sometimes taken in order to rule out other disorders if the cause of fatigue is unknown. An exercise stress test can be used to determine an inappropriate rapid heart rate (sinus tachycardia) response to exercise, which is seen in GSD-V, other glycogenoses, and mitochondrial myopathies. A 12 Minute Walk Test (12MWT) can also be used to determine "second wind", which is also seen in McArdle disease (GSD-V). A cardiopulmonary exercise test can measure both heart rate and breathing, to evaluate the oxygen cost (∆V'O2/∆Work-Rate) during incremental exercise. In both glycogenoses and mitochondrial myopathies, patients displayed an increased oxygen cost during exercise compared to control subjects, and can therefore perform less work for a given VO2 consumption during submaximal daily-life exercises. In fatty acid oxidation disorders (FAOD), while at rest, some patients exhibit cardiac arrhythmia (commonly various forms of tachycardia, but more rarely, conduction disorders or acute bradycardia), while others have a normal heart rhythm. Some GSDs and a mitochondrial myopathy are known to have a pseudoathletic appearance. McArdle disease (GSD-V) and late-onset Pompe disease (GSD-II) are known to have hypertrophy, particularly of the calf muscles. Cori/Forbes disease (GSD-III) is known to have hypertrophy of the sternocleidomastoid, trapezius and quadriceps muscles. Muscular dystrophy, limb-girdle, type 1H (which, as of 2017, was excluded from LGMD after muscle biopsy showed signs of a mitochondrial myopathy, but which has not yet been assigned new nomenclature) is also known to have hypertrophy of the calf muscles. Differentiating between different types of metabolic myopathies can be difficult due to the similar symptoms of each type, such as myoglobinuria and exercise intolerance.
It has to be determined whether the patient has fixed (static) or exercise-induced (dynamic) manifestations and, if exercise-related, what kind of exercise, before extensive exercise-related lab testing is done to determine the underlying cause. Adequate knowledge is required of the body's bioenergetic systems, including: which circumstances constitute anaerobic exercise (blood flow restricted by contracted muscles, insufficient oxygen and blood-borne fuels, particularly isometric exercise, as well as sudden increases in intensity) versus aerobic exercise (blood flow unrestricted); anaerobic metabolism (the phosphagen system and anaerobic glycolysis - ATP produced without oxygen, regardless of whether blood flow is adequate; quick to produce ATP, which is useful in high-intensity activity and at the beginning of any activity) versus aerobic metabolism (oxidative phosphorylation - ATP produced with oxygen, adequate blood flow required; slow to produce ATP but produces for longer and at high yield); the different sources of ATP (phosphagen system, carbohydrate metabolism, lipid metabolism [including ketosis], protein metabolism [including the purine nucleotide cycle], oxidative phosphorylation); how long each source takes to start producing ATP; how long each source continues to produce ATP; how long each source takes to replenish; how much ATP each source can generate; and which fuel source is primarily used given the intensity of the activity.

For example, leisurely-paced walking and fast-paced walking on level ground (no incline) are both aerobic, but fast-paced walking relies more on muscle glycogen because of the higher intensity (which would cause exercise intolerance symptoms in those with muscle glycogenoses who had not yet achieved "second wind"). When walking at a leisurely pace on level ground (no incline) over loose gravel or sand, through long grass, snow or mud, or into a headwind, the added resistance (requiring more effort) also makes the activity more reliant on muscle glycogen. These and other surfaces, such as ice, can also cause a person to tense their muscles (which is anaerobic, requiring muscle glycogen) to protect themselves from slipping or falling.

Those with muscle glycogenoses can maintain a healthy life of exercise by learning activity adaptations, utilizing the bioenergetic systems that are available to them. Depending on the type of activity and whether they are in second wind, they slow their pace or rest briefly when need be, to make sure not to empty their "ATP reservoir."
Treatment:
Metabolic myopathies have varying levels of symptoms, being most severe when developed during infancy. Those who do not develop a form of metabolic myopathy until their young adult or adult life tend to have more treatable symptoms that can be helped with a change in diet and exercise. For metabolic myopathies described as adult-onset, it might be more accurate to say, not that they did not develop in infancy (they are inborn errors of metabolism, present from birth), but that they did not display symptoms severe enough to warrant the attention of medical professionals until the adult years (severe symptoms such as rhabdomyolysis, fixed muscle weakness due to years of repetitive injury, or the deconditioning of muscles from a more sedentary adult lifestyle exacerbating symptoms).
Treatment:
Due to the rare nature of these diseases, it is very common to be misdiagnosed, sometimes multiple times. Once a correct diagnosis has been made in adult years, it often becomes clear in retrospect that symptoms were present since childhood but were brushed off as growing pains or laziness, or the patient was told they just needed to exercise more. It is especially difficult to get a diagnosis when symptoms are dynamic (exercise-induced), such as in muscle glycogenoses. Sitting in a doctor's office (at rest) or performing movements that last only a few seconds (within the time limit of the phosphagen system), the patient would not display any noticeable abnormalities (such as muscle fatigue, cramping, or breathlessness).
Treatment:
A brief or only mildly elevated heart rate (heart rate taken while sitting down after recently walking across the room or getting up on the examination table) might be assumed to be due to anxiety or illness rather than exercise-induced inappropriate rapid heart rate due to an ATP shortage in the muscle cells. In the absence of severe symptoms (such as hepatomegaly, cardiomyopathy, hypoglycemia, lactic acidosis, myoglobinuria, rhabdomyolysis, acute compartment syndrome or renal failure), it is understandable that a disease would not be noticed by medical professionals for years, when at rest the patient appears completely normal.
Treatment:
Depending on what enzyme is affected, a high-protein or low-fat diet may be recommended along with mild exercise. It is important for people with metabolic myopathies to consult with their doctors for a treatment plan in order to prevent acute muscle breakdown while exercising, which leads to the release of muscle proteins into the bloodstream that can cause kidney damage. A ketogenic diet has a remarkable effect on CNS symptoms in PDH deficiency and has also been tried in complex I deficiency. A ketogenic diet has also been shown to be beneficial for McArdle disease (GSD-V), as ketones readily convert to acetyl-CoA for oxidative phosphorylation, whereas free fatty acids take a few minutes to convert into acetyl-CoA. As of 2022, another study on a ketogenic diet and McArdle disease (GSD-V) is underway. For McArdle disease (GSD-V), regular aerobic exercise utilizing "second wind" to enable the muscles to become aerobically conditioned, as well as anaerobic exercise that follows the activity adaptations so as not to cause muscle injury, helps to improve exercise intolerance symptoms and maintain overall health. Studies have shown that regular low-moderate aerobic exercise increases peak power output, increases peak oxygen uptake (VO2peak), lowers heart rate, and lowers serum CK in individuals with McArdle disease. Regardless of whether the patient experiences symptoms of muscle pain, muscle fatigue, or cramping, the phenomenon of second wind having been achieved is demonstrable by the sign of an increased heart rate dropping while maintaining the same speed on the treadmill. Inactive patients experienced second wind, demonstrated through relief of typical symptoms and the sign of an increased heart rate dropping, while performing low-moderate aerobic exercise (walking or brisk walking). Conversely, patients who were regularly active did not experience the typical symptoms during low-moderate aerobic exercise (walking or brisk walking), but still demonstrated second wind by the sign of an increased heart rate dropping. For the regularly active patients, it took more strenuous exercise (very brisk walking/jogging or bicycling) for them to experience both the typical symptoms and relief thereof, along with the sign of an increased heart rate dropping, demonstrating second wind. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HiFive Unleashed**
HiFive Unleashed:
The HiFive Unleashed, or HFU, is a single-board computer development board created by SiFive with the intention of increasing exposure to and adoption of the open-source RISC-V architecture. The HFU is capable of running the Debian Linux distribution and Quake II. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Q and R Holes**
Q and R Holes:
The Q and R Holes are a series of concentric sockets which currently represent the earliest known evidence for a stone structure on the site of Stonehenge.
Q and R Holes:
Beneath the turf and just inside the later Sarsen Circle is a double arc of buried stoneholes, the only surviving evidence of the first stone structure (possibly a double stone circle) erected within the centre of Stonehenge (Figs.1 & 2), and currently regarded as instigating the period known as Stonehenge Phase 3i. This phase may have begun as early as 2600 BC, although recent radiocarbon dates from samples retrieved from one of the sockets in 2008, during excavations by Darvill and Wainwright, suggest a date of around 2400 to 2300 BC. They made a partial excavation of Q Hole 13, where 'associations with Beaker pottery' were noted. Although first encountered by William Hawley in the 1920s, it was Richard Atkinson who formally identified and named these irregular settings in 1954: "In choosing this designation, I had in mind John Aubrey’s frequent use, as a marginal note…of the phrase 'quaere quot' – 'inquire how many' – which seemed appropriate to the occasion". Their place at the beginning of the stone monument phase has been recognized from their stratigraphic relationships: in places they were cut through by both the settings of the later and still partly surviving Bluestone Circle, and also by a stonehole dug for one of the uprights of the Sarsen Circle.
Description:
The diameter of the outer (Q) circuit is c. 26.2 m and that of the inner (R) is 22.5 m, with an average spacing between the paired stone settings of 1.5 m. These trench-like intrusions are roughly 2 m long and 1 m wide, set radially and slightly enlarged at each end to provide paired stone sockets to a depth of around 0.6 m, the intervening strip generally re-filled with chalk rubble. Atkinson described them as being ‘dumb-bell’ shaped, although not all were of this form. The bases of some sockets bore "the impressions…of heavy stones", some with "minute chips of dolerite [i.e. bluestone] embedded". While this does not imply that only bluestones were used in the Q and R structure, he found no evidence for sarsens. His accounts make it clear that he believed the sockets to have exclusively held bluestones, "presumably the same stones that are still at Stonehenge".
Interpretation:
Atkinson estimated that, if the Q and R Holes originally formed a complete circle, 38 pairs would have been present, although recent computer modelling shows that there is room for 40. The Q and R Holes not only represent the foundation cuts for the first central stone construction; they also included several additional stone settings on the northeast. This modified group faces the midsummer sunrise, with a possible reciprocal stone aligned on the midwinter sunset. This is the first evidence for any unambiguous alignment at Stonehenge (the solstice axis). The analysis of the spacing between the Q and R array and that of the modified (inset) portal group (Fig.3) implies a shift from an angular splay of 9 degrees (i.e. 40 settings, since 360°/40 = 9°) to 12 degrees (360°/30 = 12°), the same as that of the later 30-stone Sarsen Circle.
Interpretation:
How long the bluestones remained in the Q and R settings before they were removed (if indeed this early structure was ever completed) is not known. However, the dates suggested from the 2008 excavation imply the Q & R arrays were perhaps no earlier than 2400 BC, presenting a challenge to the recently accepted Late Neolithic date for the construction of the iconic sarsen monument. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Raiden IV**
Raiden IV:
Raiden IV (雷電IV, Raiden Fō) is a 2007 vertical scrolling shooting video game developed by MOSS. It was first released in the arcades in Japan. A home conversion was produced for Xbox 360 in 2008. An updated arcade version was later released for Taito's NESiCAxLive digital distribution platform. Two more versions featuring new content were released: Raiden IV: OverKill for PlayStation 3 and Windows, and Raiden IV x MIKADO remix for Nintendo Switch, PlayStation 4, PlayStation 5, Windows, Xbox One, and Xbox Series X/S.
Gameplay:
The gameplay of Raiden IV is identical to the previous games. In each stage of increasing difficulty, players maneuver their fighter craft, engaging various enemies and avoiding their attacks. The Flash Shot mechanic, first introduced in Raiden III, returns in this game. Collectible items include weapon upgrade icons, bombs to cancel enemy attacks and deal damage to enemies over a large area, and score items such as medals and fairies.
Gameplay:
Plot The Crystals have returned once again after numerous defeats at the hands of humanity. The VCD immediately launches a new model of the Raiden fighter, the Fighting Thunder ME-02 Kai, to stop the Crystals from taking over the Earth.
Development:
Location tests The first location test for Raiden IV was held at Akihabara Hey on July 22–23, 2006, on an Egret II system. This version had three difficulty levels and forced a different weapon for each player. The second location test was held again at Hey and at Taito Game World in Shinjuku on October 14–16. The third location test was held at High-Tech Sega in Shibuya and Taito Game World in Shinjuku on December 27. The version of the game used in this location test allowed players to select a weapon. The fourth and final location test was held at Shinjuku Gesen Mikado on February 20, 2007.
Development:
Releases Moss launched the arcade version of Raiden IV on June 7, 2007, alongside the official arcade website. In 2008, an Xbox 360 port followed, which includes new stages, Xbox Live support, monitor rotation options, and downloadable content. The port was set to be released on September 11 by Moss, but was pushed back to October 2 as the game needed more polish and bug fixes. A version designed for the NESiCAxLive arcade download system was unveiled on February 22, 2011, at AOU2011. New features include perfect mode, which incorporates the seven-stage game from the Xbox 360 version of Raiden IV, and background music from the Ultimate of Raiden soundtrack. It has the Fairy character available for use.
Development:
A PlayStation 3 version was released in early 2014 as Raiden IV: OverKill. This version was the first official European release of the game, and adds two new stages, three different fighters (Fighting Thunder ME-02 Kai, Fighting Thunder Mk-II, Fairy), a new OverKill Mode, and a Replay&Gallery Mode. It was then ported to Windows and released by H2 Interactive worldwide on September 3, 2015.
Development:
A version titled Raiden IV x MIKADO remix was released for Nintendo Switch on April 22, 2021 in Japan, on May 6, 2021 in North America, and October 22, 2021 in Europe. It features remixed background music by various artists produced by Game Center Mikado. It was later ported to PlayStation 4 and PlayStation 5 in Japan, and was released worldwide by NIS America for PlayStation 4, PlayStation 5, Xbox One and Xbox Series X/S in early 2023.
Soundtrack:
Raiden IV -Ultimate of Raiden- is a video game soundtrack CD by INH. It includes arcade, Xbox 360, and remixed versions of game music tracks from older and current Raiden games, with a total of 27 tracks. The OST was included with the Xbox 360 version of the game for a limited time. INH also offered a special PDF-file DVD to those who pre-ordered from their site. The disc, named Raiden IV Secret File, contains player ship specifications, enemy combat data, strategies for the game and concept art. This Secret File is also available from American distributor UFO Interactive Games via a code printed on the American version of the CD.
Reception:
Raiden IV has received mixed or average review scores upon its U.S. release, with both IGN and the Official Xbox Magazine scoring it a 6 out of 10. IGN's Eric Brudvig writes: "Though at first glance you might think there are 14 levels in Raiden IV ... there are in fact only seven with the second half of the game merely repeating the first.... UFO Interactive Games went ahead and added insult to injury with its use of downloadable content. After dishing out $40 for the game, you'll find that only one of the three ships on the main menu can be used. The other two must be purchased through Xbox Live". Backlash over the pay-to-play ships has created controversy at several gaming forums, leading gamers to wonder whether the extra content is worth the price to obtain them. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acronis Secure Zone**
Acronis Secure Zone:
Acronis Secure Zone is a hard disk partition type created and used by Acronis True Image as a backup storage target.
Overview:
Backup applications typically use network storage for storing backup archives, but this can be problematic when such resources are not available. Acronis designed a solution to this problem by carving off part of the local disk as a proprietary partition, which they refer to as the Acronis Secure Zone. Since this partition is accessible only to True Image and Backup & Recovery, it functions as a backup target safe from malware, stray user files, and other sources of interference or corruption. Acronis True Image can manage only one Acronis Secure Zone per computer but can restore data off others (e.g., when a portable hard drive is connected).
Technical Details:
Although the Acronis Secure Zone has its own partition type, it is actually just a rebadged FAT32 partition labeled ACRONIS SZ, with its "partition type" code set to 0xBC. Knowing these requirements, one can manually create and/or manage an existing Acronis Secure Zone using any partition manager. Since the Acronis Secure Zone is just a modified FAT32 partition type, it is possible to gain direct access to this partition by changing its partition type code to 0x0B (FAT32 LBA).
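Because the zone is identified purely by its MBR partition-type byte, it can be spotted programmatically. The helper below is a hypothetical sketch (not an Acronis tool), assuming a classic MBR disk image; the offsets follow the standard MBR layout, with the partition table at byte 446 and the type byte at offset 4 of each 16-byte entry:

```python
MBR_TABLE_OFFSET = 446   # standard MBR partition table location
ENTRY_SIZE = 16          # four primary partition entries, 16 bytes each
TYPE_OFFSET = 4          # partition-type byte within an entry
ACRONIS_SZ = 0xBC        # Acronis Secure Zone (0xBB is the OEM variant)

def find_secure_zones(image_path: str) -> list[int]:
    """Return the 0-based primary-partition slots typed as Acronis Secure Zone."""
    with open(image_path, "rb") as f:
        mbr = f.read(512)  # the MBR occupies the first sector
    slots = []
    for i in range(4):
        entry = mbr[MBR_TABLE_OFFSET + i * ENTRY_SIZE : MBR_TABLE_OFFSET + (i + 1) * ENTRY_SIZE]
        if len(entry) == ENTRY_SIZE and entry[TYPE_OFFSET] == ACRONIS_SZ:
            slots.append(i)
    return slots

# Usage against a hypothetical raw disk image:
# print(find_secure_zones("disk.img"))
```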
Technical Details:
Acronis True Image is designed to self-manage the backup archives stored to the Acronis Secure Zone. As such, all backup files are stored with autogenerated names in the root folder. If there is not enough free space for the next backup file, Acronis True Image will delete the oldest image set (base+incremental/differential files) in order to create space for the new files.
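The self-management policy described above amounts to a delete-oldest retention loop. The sketch below is purely illustrative of that policy, not Acronis's actual implementation; it assumes POSIX (for os.statvfs) and a hypothetical oldest-first list of image sets:

```python
import os

def make_room(backup_dir: str, needed_bytes: int, image_sets: list[list[str]]) -> None:
    """Delete whole image sets (base + incremental/differential files),
    oldest first, until at least needed_bytes are free."""
    def free_bytes() -> int:
        st = os.statvfs(backup_dir)
        return st.f_bavail * st.f_frsize
    while image_sets and free_bytes() < needed_bytes:
        for name in image_sets.pop(0):  # drop the oldest set as a unit
            os.remove(os.path.join(backup_dir, name))
```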
Original Equipment Manufacturer Secure Zone:
OEM versions of True Image are designed to use a special "Original Equipment Manufacturer secure zone", which is technically the same as a regular Acronis Secure Zone, but uses a partition type of 0xBB, and typically contains only a single image file with the "factory default" operating system and application configuration set forth by the manufacturer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metalloprotease inhibitor**
Metalloprotease inhibitor:
Metalloprotease inhibitors are cellular inhibitors of the matrix metalloproteinases (MMPs). MMPs belong to a family of zinc-dependent neutral endopeptidases. These enzymes have the ability to break down connective tissue. The expression of MMPs is increased in various pathological conditions, ranging from inflammatory conditions and metabolic bone disease to cancer invasion, metastasis and angiogenesis.
Metalloprotease inhibitor:
Examples of diseases are periodontitis, hepatitis, glomerulonephritis, atherosclerosis, emphysema, asthma, autoimmune disorders of skin and dermal photoaging, rheumatoid arthritis, osteoarthritis, multiple sclerosis, Alzheimer's disease, chronic ulcerations, uterine involution, corneal epithelial defects, bone resorption and tumor progression and metastasis. Due to the role of MMPs in pathological conditions, inhibitors of MMPs may have therapeutic potential. Several other proteins have similar inhibitory effects, though none as effective (netrins, procollagen C-terminal proteinase enhancer (PCPE), reversion-inducing cysteine-rich protein with Kazal motifs (RECK), and tissue factor pathway inhibitor (TFPI-2)). They might have other biological activities which have yet to be fully characterised. MMP inhibitors can broadly be subdivided into non-synthetic (e.g. endogenous) or synthetic. Several potent MMP inhibitors have been identified, including hydroxamates, thiols, carbamoylphosphonates, hydroxyureas, hydrazines, β-lactams, squaric acids and nitrogenous ligands. There are three classes of commonly used inhibitors for metalloproteinases.
Metalloprotease inhibitor:
In vitro, EDTA, 1,10-phenanthroline and other chelating compounds lower the concentration of metal to the point where the metal is removed from the enzyme active site.
Classical lock and key inhibitors such as phosphoramidon and bestatin bind tightly by approximating the transition state of the hydrolysis of the peptide, preventing it from acting on other substrates.
Protein inhibitors such as α2-macroglobulin are known to work with metalloproteinases.
History:
The first generation of MMP inhibitors were based on the structure of the collagen molecule. This group of inhibitors contain a hydroxamate (-CONHOH) group that binds the zinc atom in the active site of the MMP enzyme. The first MMP inhibitors that were tested in patients were Ilomastat and Batimastat, hydroxamate-based MMP inhibitors. However, neither compound showed good oral bioavailability. Thus far, Periostat (whose active ingredient is doxycycline hyclate) is the only MMP inhibitor that has been approved by the U.S. Food and Drug Administration (FDA). It is used for the treatment of periodontitis. Other MMP inhibitors have exhibited serious side effects during preclinical trials. These side effects are caused by insufficient selectivity. Most MMP inhibitors are unable to target specific MMPs connected to specific pathological conditions. Instead, they inhibit multiple MMPs, some of which have protective functions or are not related to pathology. MMPs have been regarded as promising targets for cancer therapy. Preclinical studies investigating the efficacy of MMP suppression in tumor models were encouraging. Following these results, clinical studies were conducted but turned out to be disappointing. Recent studies have shown that MMPs may even have paradoxical roles in tumor progression. MMPs seem to have tumor-promoting effects as well as tumor-suppressive effects, depending on the context.
Mechanism of action:
Most MMP inhibitors are chelating agents. The inhibitor binds to the zinc at the active center of the enzyme, thereby blocking its activity. Other inhibitor mechanisms are possible. α2-Macroglobulin (α2M) is a protease inhibitor which inhibits activated MMPs; α2M and the MMP form a complex which is able to inactivate the MMP. MMPs are associated with the cell surface or bound to the extracellular matrix, which prevents them from diffusing away and keeps the MMP under the control of the cell. One mechanism to inhibit MMP activity is therefore to dislodge the enzymes from their receptors. Gold salts bind to a heavy metal site distinct from the zinc-containing active center, which inhibits MMP activity. MMP activity can also be decreased by compounds binding to the cleavage site on the substrate, e.g. catechin. Two molecular features of most MMP inhibitors are responsible for their affinity: one is a chelating moiety that interacts with the zinc ion, and the other is a hydrophobic extension from the catalytic site that projects into the S1' pocket (the P1' group) of the metalloproteinase. The structural difference between MMPs is mainly on the S1' side, and by modifying the P1' group, inhibitor selectivity can be developed.
Drug development:
Various potential MMP inhibitors will be explained in the following sections, including information on their development, structure-activity relationship and pharmacokinetics.
Drug development:
Pioneering hydroxamate structures The first generation of MMP inhibitors were based on the structure of the collagen molecule. In the design of these inhibitors, the basic protein backbone of collagen is maintained but the amide bond is replaced with a zinc-binding group. This group of inhibitors contains a hydroxamate (-CONHOH) group that binds the zinc atom in the active site of the MMP enzyme; therefore this group is called "hydroxamate-based MMP inhibitors". An example can be seen in Marimastat, a first-generation inhibitor, which has a backbone and sidechain format similar to collagen.
Drug development:
Ilomastat and batimastat were the first two MMP inhibitors to be tested in patients. These are both hydroxamate-based MMP inhibitors and have similar overall structures.
Drug development:
The hydroxamate-based MMP inhibitors displayed excellent anticancer activity in tumor cells, but the clinical performance of these compounds was disappointing. A factor contributing to this disappointment was that they are broad-spectrum inhibitors of many MMP subtypes and can in many cases also inhibit members of the ADAMs protease family. When they were tested in patients they induced dose-limiting muscular and skeletal pain in a number of patients. Only when the structures of the MMP inhibitors could be adjusted to impart selectivity and abolish toxicity would they achieve clinical impact in cancer chemotherapy.
Drug development:
New generation hydroxamate-based inhibitors The pioneering hydroxamate-based inhibitors were followed by a set of 'new generation' molecules with features including a substituted aryl, a sulfonamide and a hydroxamate zinc-binding group.
Drug development:
In MMI-270 there is also an amino acid sidechain-type substituent on the carbon that is α to the hydroxamate, along with a sidechain on the sulfonamide (which was later shown to be unnecessary). The N-arylsulfonyl-α-aminoacid hydroxamate of MMI-270 mimics the marimastat succinate motif. Cipemastat, which was developed as an MMP-1, -3 and -9 collagenase inhibitor for the treatment of rheumatoid arthritis and osteoarthritis, also has the marimastat succinate motif. Its clinical trial was terminated prematurely.
Drug development:
MMI-166 has an N-arylsulfonyl-α-aminocarboxylate zinc-binding group, different from the hydroxamate zinc-binding group seen in MMI-270 and Cipemastat. It also has a triaryl substitution that the other structures didn't have. ABT-770 and Prinomastat also have an aryl substitution. In ABT-770 the two phenyl rings are directly connected, but in Prinomastat the two phenyl rings are connected by an oxygen atom, forming a diphenylether. These three permutations direct the SAR away from MMP-1 and toward the "deep pocket" MMPs such as the gelatinases. ABT-770 shows anticancer activity in animal models, but it is easily metabolised to an amine metabolite that causes phospholipidosis. MMI-166 has shown anticancer activity in numerous animal models, but no data are available on its clinical performance. Prinomastat, on the other hand, is one of the best-studied MMP inhibitors. It showed excellent preclinical animal anticancer efficacy, but a recurring limitation of these hydroxamates (Prinomastat in particular) is drug metabolism, including loss of the hydroxamate zinc-binding group.
Drug development:
These inhibitors were followed by the next group of hydroxamate-based inhibitors, which focus on the suppression of metabolism, minimization of MMP-1 inhibitory activity and the control of subtype selectivity through structure-based design. The tetrahydropyran in RS-130830 introduces a steric block that suppresses metabolism, fixing the problem that the previous generation of inhibitors showed. The outcome of its clinical evaluation has not yet been disclosed. 239796-97-5 has improved ADME and MMP-1 selectivity properties and has shown excellent oral efficacy in an animal model of osteoarthritis. However, the therapeutic objective for these inhibitors is not cancer, as it has been for most MMP inhibitors.
Drug development:
New generation thiol-based inhibitors Rebimastat is a broad-spectrum MMP inhibitor with a thiol zinc-binding group. It has oral bioavailability and is a non-peptide collagen mimetic. Rebimastat has some selectivity, as it does not inhibit all MMP activities: the metalloproteinases that release TNF-alpha, TNF-II, L-selectin, IL-1-RII and IL-6, for example, are not inhibited by Rebimastat. In phase I clinical trials there was no sign of dose-dependent joint toxicity, and disease stabilization was observed. Arthralgia was noted in phase II early breast cancer trials, which was connected to MMP inhibitor toxicity. Rebimastat was used in a Paclitaxel/Carboplatin treatment in phase III. The result of the trial was a higher incidence of adverse reactions, without survival benefit.
Drug development:
Clinical trials for Tanomastat, an α-((phenylthio)methyl)carboxylate, showed similar results. It showed good disease stability and tolerance in phase I solid tumor trials and good tolerance in advanced cancer in combination with Etoposide. However, its efficacy was not proven to be adequate. Tanomastat showed significant hepatotoxicity in a cancer therapy combined with Cisplatin and Etoposide, although in a treatment with Doxorubicin it showed good tolerance, and lowered toxicity with 5-fluorouracil and Leucovorin. Many compounds with thiol zinc-binding groups have good water solubility and are air-stable in plasma, and this class will continue to be used in MMP inhibitor design.
Drug development:
Pyrimidine-based inhibitors Ro 28-2653 is highly selective for MMP-2, MMP-9 and membrane type 1 (MT1)-MMP. It is an antitumor and antiangiogenic agent with oral bioavailability. Inhibition of TACE and MMP-1 is linked to the musculoskeletal side effects seen with hydroxamate metalloproteinase inhibitors, but this compound spares both enzymes. It has been shown to diminish tumor growth in nasal cancer in rats as well as in prostate cancer cell cultures. The compound has only a moderate effect on murine adipose tissue and causes no joint alterations. Based on this, it is concluded that this class of inhibitors is less likely to trigger musculoskeletal adverse effects. In the active site, the pyrimidinetrione chelates the zinc, while the phenyl and piperidinyl sections occupy the S1' and S2' binding pockets of MMP-8.
Drug development:
Compound 556052-30-3 is similar to Ro 28-2653 but incorporates a 4-((2-methylquinolin-4-yl)methoxy)phenyl sidechain that makes it TACE-selective. Compound 848773-43-3, a 5-(spiropyrrolidin-5-yl)pyrimidinetrione, is a potent MMP-2, MMP-9 and MMP-13 inhibitor that spares MMP-1 and TACE. Substituting a 1,3,4-oxadiazol-2-yl heteroaryl at C-4' of the diphenylether segment to achieve MMP-13 selectivity over MT1-MMP gave compound 420121-84-2. The compound has an IC50 (half maximal inhibitory concentration) of 1 nM for MMP-13.
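As a concrete reading of that IC50 figure, the sketch below evaluates the generic one-site relationship v/v0 = 1/(1 + [I]/IC50). This is a standard way to interpret any IC50 value, not something specific to this compound class, and the concentrations used in main are purely illustrative.

```cpp
#include <cstdio>

// Residual enzyme activity at inhibitor concentration I for a simple
// one-site model, where IC50 is the half-maximal inhibitory
// concentration: v/v0 = 1 / (1 + I/IC50). I and IC50 share units.
double residualActivity(double inhibitorConc, double ic50) {
    return 1.0 / (1.0 + inhibitorConc / ic50);
}

int main() {
    // With the 1 nM IC50 quoted above, 1 nM of inhibitor leaves 50%
    // of MMP-13 activity and 10 nM leaves roughly 9%.
    std::printf("%.3f\n", residualActivity(1.0, 1.0));   // 0.500
    std::printf("%.3f\n", residualActivity(10.0, 1.0));  // 0.091
    return 0;
}
```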
Drug development:
Structurally similar 125I-radiolabeled pyrimidinetriones have been made for use in atherosclerosis with elevated MMP-9 and in cancers with elevated MMP-2 and MMP-9. This class of MMP inhibitors is easy to synthesize and potent enough for clinical evaluation. Compound 544678-85 is the latest pyrimidine-based inhibitor; the compound is a pyrimidine-4,6-dicarboxamide that is very potent and MMP-13 selective. The compound has a specificity loop that fits within the S1' pocket, and its 3-methyl-4-fluoro group is proximal enough to the zinc to perturb the zinc-bound water. These compounds have good oral bioavailability and properties that make them good candidates for subtype-selective inhibition in MMP-13-based diseases and for future development. Pyrimidine dicarboxamides are highly selective MMP-13 inhibitors. Within the S1' pocket of MMP-13 is an S1' side pocket that is unique to this matrix metalloproteinase. Pyrimidine dicarboxamides bind to this side pocket, which increases their selectivity. MMP-13 cleaves fibrillar collagen at neutral pH, and higher mRNA levels of MMP-13 are detected in breast carcinoma and osteoarthritic joints.
Drug development:
The pyrimidine dicarboxamide inhibitor example in the image does not interact with the catalytic zinc ion but rather binds to the S1' side pocket. One pyridyl arm sits at the entrance of the S1' pocket while the other pyridyl arm passes through the S1' pocket into the side pocket.
Drug development:
Hydroxypyrone-based inhibitors Potent and selective MMP-3 inhibitors have been developed using a hydroxypyrone as the zinc-binding group. By attaching an aryl backbone to the 2-position of the pyrone ring, more selectivity was gained. On the hydroxypyrone ring, three positions are available for attaching backbones: positions 2, 5 and 6. Hydroxypyrone-based MMP inhibitors are structurally analogous to the pyrimidinetriones. A recent inhibitor is the 3-hydroxypyran-4-one compound designated 868368-30-3. It is MMP-3 selective, and its O,O-bidentate chelation of zinc is the structural element proposed to be responsible for MMP recognition.
Drug development:
Phosphorus-based inhibitors Investigations of MMP inhibitors with phosphorus-based zinc-binding groups have focused on α-biphenylsulfonylamino phosphonates. These inhibitors bind through two phosphonate oxygen atoms. Phosphonate inhibitors have been developed that exhibit selectivity for MMP-8 over other MMPs; selective MMP-8 inhibitors could be useful in the treatment of acute liver disease and multiple sclerosis. Phosphinic MMP inhibitors have been reported to target MMP-11 and MMP-13. MMP-13 plays a role in cartilage degradation in osteoarthritis. These phosphinate MMP inhibitors contain phenyl segments that are thought to be responsible for the selectivity toward MMP-13. The phosphinic group of these inhibitors (R1R2P(O)OH) binds as a zinc ligand, and the R1 and R2 substituents affect the inhibition potency. Phosphinate inhibitors have also been developed that show high selectivity for MMP-11; derivatives based on phenyl rings showed the best selectivity. MMP-11 could be a useful target in breast cancer tumorigenesis.
Drug development:
Phosphorus-based inhibitors with carbamoyl phosphonate zinc-binding groups do not bind through two oxygens of the phosphonate. Instead, carbamoyl phosphonate zinc-binding groups bind Zn2+ through one oxygen of the phosphonate and the oxygen of the carbonyl alpha to the phosphonate. This binding forms a five-membered chelate ring that resembles the binding of hydroxamic acid. The amide bond of the carbamoyl phosphonate provides a hydrogen-bond donor for protein interactions, and the amide group's electron-donating ability provides strong Zn2+ chelation.
Drug development:
The carbamoyl phosphonate zinc-binding groups carry a net negative charge that hinders cell penetration and restricts these inhibitors to the extracellular space. This exclusion from cells contributes to their low toxicity. Inhibitors with a carbamoyl phosphonate zinc-binding group are selective for MMP-2. MMP-2 could be a useful target against tumor invasion and angiogenesis. A carbamoyl phosphonate inhibitor has been developed that affects MMP-2 and MMP-9 while sparing other MMPs. This compound showed inhibitory activity on cell invasion and tumor colonization. In in vivo studies, this inhibitor showed efficacy with oral dosing and with administration into the abdominal cavity (intraperitoneal). It shows slow absorption, rapid elimination and low oral bioavailability; the prolonged absorption phase contributes to sustained efficacy. Inhibitors with carbamoyl phosphonate zinc-binding groups are water-soluble at physiological pH.
Drug development:
Tetracycline-based inhibitors Tetracyclines are antibiotics that also exhibit MMP inhibitory activity. They chelate the Zn2+ ion, thereby inhibiting MMP activity. It is believed that tetracyclines also affect MMP expression and proteolytic activity. Doxycycline is a semi-synthetic tetracycline that has been studied for dental and medical applications. Its effects on diseases like periodontitis and cancer have been investigated. Doxycycline is nearly completely absorbed, with a bioavailability of about 95% on average and a 20% reduction with co-administration of food. Its volume of distribution is 50–80 L (0.7 L/kg). Protein binding is 82–93%. It is excreted in urine and in feces. Doxycycline is available in oral and intravenous forms. Doxycycline exhibited inhibitory activity on MMP-2 and MMP-9. The expression and activity of MMP-2 and MMP-9 are often elevated in human cancer, and the increased expression and activity correlate with advanced tumor stage, increased metastasis and poorer prognosis. Chemically modified tetracyclines (CMT) have been developed to explore their inhibitory potential. Most studies of tetracyclines and CMTs showed that they can inhibit MMP activity.
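As a rough illustration of how the quoted pharmacokinetic figures combine, the sketch below applies the standard one-compartment approximation Cp ≈ F·Dose/Vd using the Vd and bioavailability given above; the 2 mg/L target concentration is a hypothetical number chosen for the example, not a clinical figure.

```cpp
#include <cstdio>

// One-compartment, single-dose approximation (illustrative only):
// peak plasma concentration Cp ~ F * Dose / Vd, so the dose needed
// to reach a target Cp is Dose = Cp * Vd / F.
double doseForTargetConc(double cpTarget_mg_per_L, double vd_L, double f) {
    return cpTarget_mg_per_L * vd_L / f;
}

int main() {
    // Doxycycline figures quoted above: Vd ~ 50 L, F ~ 0.95.
    // The 2 mg/L target is a hypothetical value for the example.
    std::printf("%.0f mg\n", doseForTargetConc(2.0, 50.0, 0.95));  // ~105 mg
    return 0;
}
```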
Drug development:
One CMT, called COL-3 or metastat, has been demonstrated to be a potent MMP inhibitor. COL-3 features a tetracycline scaffold that is unsubstituted at positions C4–C9. Advantages of CMTs over conventional tetracyclines are that chronic use does not result in gastrointestinal toxicity and that higher plasma levels can be achieved for an extended time span, reducing administration frequency. The pharmacokinetics of COL-3 have been studied in rats. COL-3 is absorbed slowly from the gastrointestinal tract. About 3% is excreted through the urinary tract, while 55–66% is excreted in feces. The drug is highly lipophilic and able to cross the blood–brain barrier at higher doses. COL-3 accumulates at higher concentrations in heart tissue and the testes.
Drug development:
In clinical trials plasma protein binding has been shown to be high (~94.5%). Most COL-3 binds to serum albumin.
Drug development:
Endogenous inhibitors MMP activity is regulated at various levels, for example by endogenous inhibitors like α2-macroglobulin and the tissue inhibitors of metalloproteinases (TIMPs). α2-Macroglobulin regulates a broad spectrum of proteases, while TIMPs are more specific endogenous MMP inhibitors.
Drug development:
α2-Macroglobulin is an abundant plasma protein that acts in tissue fluids. The plasma glycoprotein consists of four subunits. α2-Macroglobulin does not inhibit the activation of MMPs or the MMPs themselves directly; instead, it entraps proteinases like MMPs and forms a complex with them. The complex is endocytosed and cleared by a low-density lipoprotein-receptor-related protein. In humans, four different TIMPs have been found. They are secreted proteins of low molecular weight. TIMPs bind noncovalently to the active site of MMPs. Changes in TIMP levels are considered to play a role in pathological conditions associated with unbalanced MMP activities. TIMPs consist of 184–194 amino acids and are subdivided into two domains, N-terminal and C-terminal. The N-terminal regions of the four TIMPs share a common structure; they all contain twelve cysteine residues that form six disulfide bonds. These bonds are critical for the conformation of the N-terminal domain and its MMP-inhibitory activity. The C-terminal domains of the TIMPs differ from each other. The N-terminal domain is capable of inhibiting MMPs: the TIMP molecule fits into the active site of an MMP and contacts the catalytic cleft much as a substrate does. TIMPs inhibit all MMPs, except that TIMP-1 does not inhibit MT1-MMP. There are some differences in the inhibitory preferences of TIMPs: TIMP-1, for example, preferentially inhibits MMP-9, while TIMP-2 and TIMP-4 are more potent inhibitors of MMP-2 than of MMP-9. TIMPs could potentially be useful against illnesses like cardiovascular disease and cancer. The application of TIMPs as therapeutic agents through gene therapy or direct protein application is still at an early stage of development.
Drug development:
It is preferable to inhibit the specific MMPs that play a role in pathological conditions. Since TIMPs inhibit multiple MMPs, it is desirable to develop engineered TIMPs with altered specificity.
Current status:
The primary goal of MMP inhibitor design is selectivity. Targeting specific MMPs is expected to improve efficacy and prevent side effects like musculoskeletal syndrome (MSS). 3D structures of MMP inhibitors provide insight into the structural basis of selectivity. High-throughput screening can also increase the chances of discovering inhibitors with high selectivity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Retro screening**
Retro screening:
Retro (or reverse) screening (RS) is a relatively new approach to determine the specificity and selectivity of a therapeutic drug molecule against a target protein or another macromolecule. It proceeds in the opposite direction to the so-called virtual screening (VS). In VS, the goal is to use a protein target to identify a high-affinity ligand from a search library typically containing hundreds of thousands of small molecules. In contrast, RS employs a known drug molecule to screen a protein library containing hundreds of thousands of individual structures (obtained from both experimental and modeling techniques). Accordingly, the extent to which this drug cross-reacts with the human proteome provides a measure of its efficacy and the potential long-term side-effects. RS is expected to play a key role in providing an additional layer of quality control in drug discovery. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
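In outline, a reverse screen is a screen-then-rank loop over protein structures. The C++ sketch below shows that structure under loud assumptions: bindingScore is a stand-in for a real docking/scoring engine, and the protein identifiers in main are invented.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Hit {
    std::string proteinId;
    double score;  // predicted binding affinity; lower = stronger here
};

// Placeholder scorer: a real pipeline would call a docking/scoring
// engine on each protein structure. This dummy keeps the sketch
// self-contained and deterministic.
double bindingScore(const std::string& /*drug*/, const std::string& proteinId) {
    return static_cast<double>(std::hash<std::string>{}(proteinId) % 100);
}

// Reverse screen: one known drug against a library of protein structures.
std::vector<Hit> retroScreen(const std::string& drug,
                             const std::vector<std::string>& proteinLibrary) {
    std::vector<Hit> hits;
    hits.reserve(proteinLibrary.size());
    for (const auto& p : proteinLibrary)
        hits.push_back({p, bindingScore(drug, p)});
    // Rank by predicted affinity: strong hits outside the intended
    // target flag potential cross-reactivity and side effects.
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.score < b.score; });
    return hits;
}

int main() {
    for (const auto& h : retroScreen("drug-X", {"1abc", "2xyz", "3pqr"}))
        std::printf("%s  %.1f\n", h.proteinId.c_str(), h.score);
    return 0;
}
```

Ranking the whole library by predicted affinity is what surfaces unexpected off-target proteins, which is the cross-reactivity signal described above.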
**Public interface**
Public interface:
In computer science, a public interface is the logical point at which independent software entities interact. The entities may interact with each other within a single computer, across a network, or across a variety of other topologies.
It is important that public interfaces be stable and designed to support future changes, enhancements, and deprecation so that the interaction can continue.
Design:
Guidance A project must provide additional documents that describe plans and procedures that can be used to evaluate the project’s compliance.
architecture design document.
coding standards document.
software release plan document.
document with a plan for deprecating obsolete interfaces.
The programmer must create fully insulated classes and insulate the public interfaces from compile-time dependencies.
Best practices Present complete and coherent sets of concepts to the user.
Design interfaces to be statically typed.
Minimize the interface’s dependencies on other interfaces.
Express interfaces in terms of application-level types.
Use assertions only to aid development and integration.
Example C++ interface Use protocol classes to define public interfaces.
The characteristics of a protocol class are: It neither contains nor inherits from classes that contain member data, non-virtual functions, or private (or protected) members of any kind.
It has a non-inline virtual destructor defined with an empty implementation.
All member functions other than the destructor, including inherited functions, are declared pure virtual and left undefined.
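A minimal sketch of a class meeting those three criteria follows; the Shape and Circle names are illustrative, not from the source.

```cpp
// A protocol class obeying the rules above: no data, a non-inline
// virtual destructor with an empty implementation, and every other
// member function pure virtual and left undefined.
class Shape {
public:
    virtual ~Shape();                  // non-inline virtual destructor
    virtual double area() const = 0;   // pure virtual
    virtual void scale(double f) = 0;  // pure virtual
};

Shape::~Shape() {}                     // empty out-of-line body

// A concrete implementer lives behind the protocol.
class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265358979 * r_ * r_; }
    void scale(double f) override { r_ *= f; }
private:
    double r_;
};
```

Clients that hold Shape pointers never see Circle, so Circle's internals can change without recompiling client code.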
Benefits The benefits of using protocol classes include:
Insulating applications from the external client.
Insulating changes that are internal to the interface.
Insulating changes to the public interface from changes to the implementation of the interface.
Insulation has costs, but these tend to be outweighed by the gains in interoperability and reusability.
Costs (illustrated in the sketch below):
Going through the implementation pointer.
Addition of one level of indirection per access.
Addition of the size of the implementation pointer per object to memory requirements.
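These costs stem from routing every call through an opaque implementation pointer, as in the common pointer-to-implementation ("pimpl") idiom; a minimal sketch with illustrative names (Widget and Impl are not from the source):

```cpp
// widget.h -- the public interface exposes only an opaque pointer.
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                    // defined where Impl is complete
    int value() const;            // every call goes through pImpl
private:
    struct Impl;                  // definition hidden in widget.cpp
    std::unique_ptr<Impl> pImpl;  // +1 pointer of memory per object
};

// widget.cpp -- internals can change without recompiling clients.
struct Widget::Impl {
    int value = 42;
};

Widget::Widget() : pImpl(new Impl) {}
Widget::~Widget() = default;
int Widget::value() const { return pImpl->value; }  // one indirection per access
```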
Other information:
Various methodologies, such as refactoring, support the determination of interfaces. Refactoring generally applies to the entire software implementation, but is especially helpful in properly fleshing out interfaces. There are other approaches defined through the pattern community. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**10-Deacetylbaccatin**
10-Deacetylbaccatin:
10-Deacetylbaccatins are a series of closely related natural organic compounds isolated from the yew tree (genus Taxus). 10-Deacetylbaccatin III is a precursor to the anti-cancer drug docetaxel (Taxotere). 10-Deacetylbaccatin III 10-O-acetyltransferase converts 10-deacetylbaccatin III to baccatin III: acetyl-CoA + 10-deacetylbaccatin III ⇌ CoA + baccatin III | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Peripheral vision horizon display**
Peripheral vision horizon display:
The peripheral vision horizon display, also called PVHD or the Malcolm Horizon (after inventor Dr. Richard Malcolm), is an aircraft cockpit instrument which assists pilots in maintaining proper attitude.
Peripheral vision horizon display:
The PVHD was developed in the mid-1970s and manufactured in the early 1980s as a cockpit instrument that helps the pilot remain aware of the aircraft attitude at all times. The development of the PVHD was driven by a high incidence of military aircraft accidents due to "attitude awareness issues." The PVHD was noted to have a subliminal effect on the pilot because in actual use the display was set so dim that it could barely be seen.
Peripheral vision horizon display:
The PVHD was well received by pilots who tested it in helicopters as well as fixed-wing aircraft. It was flown in F-4s and A-10s, as well as helicopters. Initial production in 1983, however, was for the SR-71 Blackbird as an aid when refueling in the air. The initial concept demonstration was done in Canadian military laboratories and later development was undertaken by Varian Canada in Georgetown, Ontario. In 1981, Varian sold the project to Garrett Manufacturing in Rexdale, Toronto, Ontario.
Function:
In the simplest variant, the PVHD projects a dim line of light across the full width of the cockpit instrument panel. This line is projected over the top of all instruments. As the aircraft pitches and rolls, the line appears to stay parallel to the horizon outside of the aircraft. There is a small blip in the center of the line to indicate which way is up.
Function:
In actual use, the pilot initially sets the brightness of the line so that it just disappears when looking at it with their central vision. When the line does move due to an aircraft attitude change, the peripheral vision, being more sensitive to movement, picks up the movement and the brain subconsciously registers the information, and makes use of it.
Function:
In all variants, the aircraft gyro system provides pitch and roll information for the processor, which drives the projection system to keep the line parallel to the earth horizon. The subliminal effect on the pilot's peripheral vision aids them in retaining attitude awareness and quickly correcting the onset of the aircraft deviating from the desired attitude.
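A toy model of that computation is sketched below; the exact geometry (line pivoting opposite to roll, shifting vertically with pitch) and the display gain k are assumptions made for illustration, not the actual PVHD processing.

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Endpoints of the projected horizon line on a panel of half-width w.
// Assumed model: the line pivots opposite to aircraft roll and shifts
// vertically in proportion to pitch with display gain k. Angles in
// radians; coordinates in panel units centered on the panel origin.
void horizonLine(double roll, double pitch, double w, double k,
                 Point& left, Point& right) {
    double cy = -k * pitch;           // line drops as the nose pitches up
    double dy = w * std::tan(-roll);  // endpoint offset produced by roll
    left  = {-w, cy - dy};
    right = {+w, cy + dy};
}

int main() {
    Point l, r;
    horizonLine(0.10, 0.05, 1.0, 1.0, l, r);  // slight right roll, nose up
    std::printf("left (%.2f, %.2f)  right (%.2f, %.2f)\n", l.x, l.y, r.x, r.y);
    return 0;
}
```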
Benefits:
The PVHD helps when the real world horizon is blocked by weather or darkness, and the cockpit workload is so high that full attention cannot be given to the standard attitude instrument. The situation can be made worse by inertial effects of the aircraft fooling the pilot's organs of balance. These inertial effects can cause somato-gravic or somato-gyral illusions. In short, the pilot gets the wrong understanding of the aircraft attitude, often with a fatal outcome.
Variants:
Several variants were built. The concept demonstration was done with conventional optics that projected a white line from a xenon arc lamp. The projector was driven by an analog computer and the lamp (line) was moved by servo motors.
Variants:
A later production version used a microprocessor to sample and process the pitch/roll gyro information and a HeNe laser as a light source for the projector. The projector consisted of X and Y axis galvanometers to scan the line across the cockpit at more than 30 times per second in the form of a vector scanned display. This type of projection technology is now commonly used in laser light shows.
Variants:
Lockheed SR-71 The Lockheed SR-71 "Blackbird" reconnaissance aircraft was fitted with a PVHD system. The system also included a heading indication, using varying light intensities along different segments of the horizon line.
Fairchild Republic YA-10B During the development of the single-seat night-attack version of the A-10 Warthog aircraft a PVHD system similar to that of the Lockheed SR-71 was incorporated. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Jaguar (software)**
Jaguar (software):
Jaguar is a computer software package used for ab initio quantum chemistry calculations for both gas and solution phases. It is commercial software marketed by the company Schrödinger. The program was originated in research groups of Richard Friesner and William Goddard and was initially called PS-GVB (referring to the so-called pseudospectral generalized valence bond method that the program featured).
Jaguar is a component of two other Schrödinger products: Maestro, which provides the graphical user interface to Jaguar, and a QM/MM program QSite, which uses Jaguar as its quantum-chemical engine. The current version is Jaguar 10.4 (2020).
Features:
A distinctive feature of Jaguar is its use of the pseudospectral approximation. This approximation can be applied to computationally expensive integral operations present in most quantum chemical calculations. As a result, calculations are faster with little loss in accuracy.
The current version includes the following functionality:
Hartree–Fock (RHF, UHF, ROHF) and density functional theory (LDA, gradient-corrected, dispersion-corrected, and hybrid functionals)
local second-order Møller–Plesset perturbation theory (LMP2)
generalized valence bond perfect-pairing (GVB-PP) and GVB-LMP2 calculations
prediction of excited states using configuration interaction (CIS) and time-dependent density functional theory (TDDFT)
geometry optimization and transition state search
solvation calculations based on the Poisson–Boltzmann equation
prediction of infrared (IR), nuclear magnetic resonance (NMR), ultraviolet (UV), and vibrational circular dichroism (VCD) spectra
pKa prediction
generation of various molecular surfaces (electrostatic potential, electron density, molecular orbitals etc.)
prediction of various molecular properties (multipole moments, polarizabilities, vibrational frequencies etc.) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Ferro-actinolite**
Ferro-actinolite:
Ferro-actinolite is the ferrous iron-rich endmember of the actinolite-tremolite continuous solid solution series of the double chain calcareous amphibole group of inosilicate minerals. All the series members belong to the monoclinic crystal system.
The following formula comparison indicates the position of individual well-known members within the series: tremolite: ☐Ca2(Mg5.0-4.5Fe2+0.0-0.5)Si8O22(OH)2 actinolite: ☐Ca2(Mg4.5-2.5Fe2+0.5-2.5)Si8O22(OH)2 ferro-actinolite: ☐Ca2(Mg2.5-0.0Fe2+2.5-5.0)Si8O22(OH)2Some other substitute cations that may replace either Ca, Mg, or Fe include potassium (K), aluminium (Al), manganese (Mn), titanium (Ti), and chromium (Cr). A fluorine (F) anion may partially replace the hydroxyl (OH).
Physical properties:
Ferro-actinolite prisms are much darker in color than actinolite due to their higher iron content affecting opacity, but may be dark green in thin slices or around the edges. Its crystals are brittle, with a hardness of 5–6 on the Mohs scale, and have a white streak. Ferro-actinolite is pleochroic and has a higher refractive index and surface relief than actinolite. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Outsiders (comics)**
Outsiders (comics):
The Outsiders are a superhero team appearing in American comic books published by DC Comics. As their name suggests, the team consists of superheroes who do not fit the norms of the "mainstream" superhero community, i.e. the Justice League. The Outsiders have had a number of different incarnations. They were founded by Batman, whose ties to the League had become strained at the time, and introduced the now-classic line-up of Batman, Black Lightning, Metamorpho, Geo-Force, Katana, Halo and Looker. A later incarnation of the Outsiders from the early 2000s comics was led by Nightwing and Arsenal following the dissolution of the Teen Titans superhero group, and depicted the team as a pro-active group hunting for supercriminals. For the team's third incarnation, Batman reforms the team as a special strike team featuring classic members Katana and Metamorpho alongside new recruits such as Catwoman and Black Lightning's daughter Thunder. After the Batman R.I.P. storyline, Alfred Pennyworth acts on Batman's instructions to reassemble the team once more, recruiting new members and more of the team's original lineup. Another version of the team with a familiar line-up briefly featured in Batman Incorporated in 2011 as the black-ops section of Batman's organization. Following DC's 2011 reboot, a new version of the Outsiders is introduced in the pages of Green Arrow as a secret society represented by seven weapon-themed clans. Members in this incarnation include Katana, Onyx, and several new characters. The original Outsiders are returned to continuity in 2017, following DC Rebirth, once again as a secret team founded by Batman; Batman revives the team with a new line-up in 2018. Black Lightning leads another incarnation in 2022.
Outsiders (comics):
A version of the team appears in the live action series Black Lightning, fully formed starting in the third season led by Black Lightning.
Fictional history:
Batman and the Outsiders / The Adventures of the Outsiders (1983–1987) The Outsiders first appeared in a special insert in the final issue (#200) of The Brave and the Bold in 1983. The team was given its own comic, Batman and the Outsiders, which debuted in August 1983. It was created and originally written by Mike W. Barr and illustrated by Jim Aparo (later illustrated by Alan Davis).
Fictional history:
After Batman left the group in issue #32, the title was changed to The Adventures of the Outsiders, continuing until its cancellation with issue #46. Issue #38 featured the last original story in the series, as issues #39-46 were reprints of stories from the companion series The Outsiders (1985).
The cast of the Outsiders was notable for having mostly new characters (Geo-Force, Katana, Halo and Looker). The other members were two characters who refused membership in the Justice League (Black Lightning and Metamorpho) and former Leaguer Batman.
Fictional history:
Markovia and Baron Bedlam The Outsiders formed in the fictional East European country of Markovia, which was ravaged by war at the time. Batman had attempted to enlist the Justice League of America's aid, but was told they had been ordered to stay out of the conflict. Because he disagreed with the order, Batman resigned to strike out on his own. He and Black Lightning traveled to Markovia to free captive Lucius Fox from Baron Bedlam (who killed the country's ruler, King Viktor). One of the king's sons became Geo-Force after gaining powers from Markovia's top scientist (Dr. Helga Jace) to stop Bedlam. Metamorpho was searching for Dr. Jace so the doctor could help him with his powers. Katana arrived in Markovia to kill General Karnz (Bedlam's military commander) as vengeance for her family's death. Batman found a young, amnesiac girl in the woods exhibiting light-based powers, whom he named Halo; she was an Aurakle that possessed the body of Violet Harper after Harper was killed by Syonide. These heroes banded together to defeat Baron Bedlam and decided to stay together as a team, later fighting such villains as Agent Orange, the Fearsome Five and the Cryonic Man.
Fictional history:
The Masters of Disaster and the Force of July Recurring foes include the Masters of Disaster (New Wave, Shakedown, Windfall, Heatstroke, and Coldsnap), who at one point were almost able to kill Black Lightning. Windfall eventually became disenchanted with her team and joined the Outsiders. Another recurring opponent was the Force of July, a group of patriotic metahumans who also regularly came into contact with the Suicide Squad. During this time, Geo-Force's half-sister Terra died as a traitor against the Teen Titans. Batman revealed his real identity as Bruce Wayne to the team (although they already knew it). Eventually, Halo's origins were revealed. Emily Briggs (who later became the superheroine Looker and joined the team) was introduced. Denise Howard (the love interest of Geo-Force) appeared for the second time.
Fictional history:
Without Batman Baron Bedlam later returned to life. With the assistance of the Bad Samaritan, the Masters of Disaster and Soviet forces, he again tried to seize control of Markovia. Batman withheld this information, angering the rest of the team; this eventually led to Batman disbanding the team and returning to the Justice League of America. Nevertheless, the team traveled to Markovia, discovering many Markovian military casualties. They were defeated by the Masters and learned that Bedlam had cloned Adolf Hitler; however, the Hitler clone committed suicide in horror of the atrocities perpetrated by the original. The Outsiders became unofficial agents of Markovia in order to receive Markovian funding. They moved to Los Angeles; Geo-Force left his girlfriend Denise behind and Looker separated from her husband.
Fictional history:
Outsiders (1985–1988) This series again featured the original group, and was printed in the Baxter paper format used on such titles as The New Teen Titans (vol. 2) and the Legion of Super-Heroes (vol. 3). It lasted for 28 issues, in addition to Annuals and special issues. The series originally ran alongside the title The Adventures of the Outsiders, chronicling events a year after that series. In the end, the first few issues of this series were reprinted in The Adventures of the Outsiders before that title was cancelled.
Fictional history:
Story The team moves into a new headquarters in Los Angeles and once again becomes involved in an adventure with the Force of July (ending in Moscow). Villains such as the Duke of Oil and the Soviet super-team the People's Heroes are introduced during this time. The team's adventures take them all over the globe, most notably when the Outsiders' plane is shot down and the team is marooned on a deserted island for three weeks. Tensions rise as Geo-Force tries to resign his leadership and he and Looker succumb to temptation. Eventually, the team is rescued. More trouble arises when a detective is hired to look into Looker (now working as a model known as Lia Briggs) and her private life, and learns of her actual identity as Emily. The detective tries to blackmail her, but she hypnotizes him and forces him to leave. However, he is killed shortly afterward and Looker is arrested as a suspect. The Outsiders, fortunately, clear her name.
Fictional history:
Reunion with Batman The Outsiders are reunited with Batman when they band together to fight Eclipso. After the adventure, Batman gives them access to a batcave in Los Angeles. The team is also infiltrated by a clone of Windfall. Meanwhile, Looker and Geo-Force feel guilty about their affair and eventually end it. Metamorpho faces his own personal problems with his wife, Sapphire Stagg-Mason. The clone of Windfall is ultimately killed; the Masters of Disaster are defeated, as the real Windfall joins the Outsiders. The team also meets the other Los Angeles-based team, Infinity, Inc.
Fictional history:
Millennium The team is next involved with the crossover event Millennium, wherein it is revealed that Dr. Jace is an operative of the villainous Manhunters and kidnaps the team. The team (now joined by the Atomic Knight) free themselves, but Dr. Jace blows up herself and Metamorpho. Looker is called to return to Abyssia (the origin of her powers), where she must also face the Manhunters. During the adventure, she is drained of much of her power and returns to her normal form. Halo is hit in the crossfire when saving Katana's life, and slips into a coma as Katana vows to look after her. The team is disbanded by Geo-Force as Looker returns to her husband, and Batman rejoins the Justice League.
Fictional history:
Outsiders (vol. 2) (1993–1995) This revival of the title in 1993 lasted 25 issues and was written by Mike W. Barr, with most issues penciled by Paul Pelletier.
Fictional history:
Story Declared a traitor in his native Markovia, Geo-Force is forced to seek the help of old (and new) Outsiders to battle the vampire-lord who controls his country. This is later coupled with the framing of the Outsiders for the slaughter of a Markovian village, forcing them into hiding. This fugitive status motivates the Atomic Knight to go after them, hoping to bring in his former allies without too much trouble. He is eventually convinced of their innocence and joins them.
Fictional history:
The new members who join the team in Markovia are the magician Sebastian Faust, the warsuit-wearing engineer and industrialist the Technocrat and Wylde (Charlie Wylde), a friend of the Technocrat who has been turned into a mountain bear by Faust's uncontrollable magic.
Fictional history:
During the initial confrontation with the vampires, Looker is (apparently) killed. Hiding out in Gotham City, the Outsiders experience another loss as the Technocrat's wife Marissa and Halo are killed during a fight with Batman (actually the man standing in for Bruce Wayne, Jean-Paul Valley). However, Halo's spirit survives in the reanimated body of Marissa. For some time afterward, the Technocrat has trouble accepting that his wife (whose body is still walking around) is dead. Eventually it is discovered that Looker is not dead, but undead. The Outsiders find her, and free her from the vampire's control.
Fictional history:
Split in two After the defeat of the vampires, two teams (one composed of Geo-Force, Katana, and the Technocrat; the other composed of the Eradicator, Looker, Wylde, Halo and Sebastian Faust) claim the name of the Outsiders; both teams are considered fugitives for some time, thanks to questionable tactics by their new members. During this time, the teams learn that Halo's (original) body has been brought back to life by the terrorist organization Kobra. In control of her body is Violet Harper, the evil woman whose body Halo originally inhabited. She now has abilities similar to Halo's, calls herself Spectra and joins Strike Force Kobra with Dervish and Windfall. Both Kobra and Violet Harper are defeated, and Windfall rejoins the Outsiders.
Fictional history:
The two teams unite to confront Felix Faust, the father of Outsiders member Sebastian Faust. During the confrontation, the bear-like Wylde betrays the team when Felix promises to restore his humanity. The team defeats Felix Faust and Wylde, who eventually becomes an actual bear (without the ability to speak) and is kept in a zoo. The title ends with the clearing of the Outsiders' names and the marriage of Geo-Force and Denise Howard.
Fictional history:
In the interim, the Halo entity is restored to Violet Harper's body, returning her to normal off-panel, and a new team of Outsiders is formed, seen as active during the Day of Judgement crossover event. Members of this new team include Geo-Force, Halo, Katana, and Terra II, who left the team in the 1999 Titans Secret Files series after a round of genetic tests failed to decipher her genetically altered DNA and determine who she was prior to being turned into a genetic doppelgänger of the original Terra.
Fictional history:
Outsiders (vol. 3) (2003–2007) Outsiders (vol. 3) is almost completely unrelated to the previous series. It was launched in 2003 with new members, some of whom had been part of the Titans. The series was cancelled with issue #50 and relaunched as Batman and the Outsiders (vol. 2), featuring a mix of current and new members.
Fictional history:
Formation The new team is put together in the wake of the Titans/Young Justice: Graduation Day crossover, which dissolves both groups. Arsenal accepts a sponsorship offer from the Optitron Corporation and uses the money to buy an enormous bomb shelter which had belonged to a multimillionaire, renovating it as group headquarters. He recruits a group of young heroes, the last of whom is his friend Nightwing (who joins reluctantly). Nightwing decides that, instead of functioning in a reactive capacity like most other superhero teams, this group should act as hunters, tracking down supervillains before they can cause problems.
Fictional history:
Infinite Crisis Former Outsiders the Technocrat and Looker are near the Breach when he explodes in the Battle of Metropolis. The fate of the Technocrat remains unclear, while Looker soon appears in an issue of the World War III limited series. Roy Harper is saved by Superman from Doomsday, and Captain Marvel Jr. was sent to Earth-S when it was reformed. When New Earth came into existence, he went with other heroes who could fly to fight Superboy-Prime. In the Infinite Crisis hardcover, Freddy joined alongside the other Titans to take down the members of the Secret Society of Super Villains who tried to kill Robin.
Fictional history:
One Year Later After Infinite Crisis, the Outsiders are "officially" no more. Because of the Freedom of Power Treaty, the Outsiders have been operating covertly outside of the United States. Most of the members were presumed dead until a botched mission forced them to reveal their presence. Following the revelation of their existence, they are recruited by Checkmate to pursue missions which Checkmate cannot support publicly. Checkmate's assignment as part of the "CheckOut" crossover story arc involves dispatching the Outsiders to Oolong Island in China, the scene of World War III the previous year. The mission goes disastrously wrong when Chang Tzu captures Owen Mercer and Checkmate's Black Queen, until both sides are bailed out by Batman. In the aftermath, Nightwing decides to give Batman control of the team once more.
Fictional history:
Batman and the Outsiders (vol. 2) / Outsiders (vol. 4) (2007–2011) In November 2007, writer Chuck Dixon and artist Julian Lopez relaunched Outsiders (vol. 3) as Batman and the Outsiders (vol. 2), with the Dark Knight taking control of the team in the aftermath of the "CheckOut" crossover with Checkmate.
Outsiders: Five of a Kind In the weeks leading up to the new series' debut, Batman holds tryouts to determine who will be on the team in a series of one-shots called Five of a Kind. Each issue featured a different creative team (including Outsiders creator Mike W. Barr) and an epilogue written by Tony Bedard.
Fictional history:
Batman angers several members, who feel he has no right to remove people already on the team. Captain Boomerang leaves the team for Amanda Waller's Suicide Squad, and Nightwing decides to take no part in the Outsiders' questionable activities. Katana is chosen as the team's first official member, joined later by the Martian Manhunter, Metamorpho and Grace. Thunder is kicked off the team; the second Aquaman is rejected because Batman feels he does not match up to his predecessor, Orin. Batman then tells the other members: "Whether you like it or not, you're here to save the world. And you're going to be hated for it". After the team's first official mission (in Outsiders #50), Catwoman overheard the other recruits talking about the team being "down by law" and said: "Batman can't possibly start up his own crew of super-crooks without me in it!"
Batman and the Outsiders (vol. 2) The team from Outsiders #50 was featured in the first two issues of Batman and the Outsiders (vol. 2). Afterward, Catwoman and the Martian Manhunter left the team and Batgirl, Geo-Force and the Green Arrow joined; Thunder consistently appeared in the series as well. In issue #5, Ralph "the Elongated Man" and Sue Dibny make a guest appearance. They are now "ghost detectives", and seem able to possess people in a method similar to that of Deadman. Dr. Francine Langstrom (wife of Dr. Kirk Langstrom, a.k.a. the Man-Bat) serves as the team's technical advisor, and her assistant Salah Miandad operates the "blank" OMAC drone known as ReMAC. In issue #9, Batman calls on former team member Looker to assist in an interrogation.
Fictional history:
The first main storyline of the title involves Batman sending the team to investigate the mysterious Mr. Jardine, who is organizing a convoluted plot to populate a subterranean lake on the Moon with alien lifeforms. While trying to stop Jardine's unauthorized space-shot in South America, Metamorpho is blasted into space and is forced to escape from the International Space Station (where seemingly-brainwashed astronauts from around the world are building a giant weapon). Seeking a shuttle to hijack, the rest of the team infiltrates a Chinese space facility (only to be captured by members of the Great Ten). The timely intervention of Batgirl and ReMAC saves the team from execution. Metamorpho steals a shuttle back to Earth, escapes from the European Space Agency and rejoins the team.
Fictional history:
During the Batman R.I.P. events, an assembly of the Outsiders (including Thunder) receives a message from the missing Batman. It asks them to feed a secret code into the cybernetic mind of ReMAC, allowing it to track the Caped Crusader and the Black Glove organization and help him in his fight. As they comply (against Batgirl's advice), the code reveals itself as a cybernetic booby-trap coming from Simon Hurt (the mastermind behind Batman's downfall) and ReMAC explodes. Several Outsiders are wounded, and Thunder suffers brain injuries severe enough to knock her into a seemingly-irreversible coma. However, her in-costume appearance in the Final Crisis: Submit story contradicts this; the events of that Final Crisis storyline occur after the events in Batman R.I.P., suggesting a continuity error. When Black Lightning rejoins the team after the events of Batman R.I.P. and Final Crisis, he is shown visiting Thunder (who is still hospitalized in a coma).
Fictional history:
Outsiders (vol. 4) As a result of Batman R.I.P. and Final Crisis (where Batman apparently died), the series was renamed Outsiders (vol. 4) and featured a new team roster. The change occurred when a new creative team took over, with Peter Tomasi writing and Lee Garbett on art duty. Tomasi began with Batman and the Outsiders Special (vol. 2) #1 and the retitled series began with issue #15. One night, after going to visit the graves of Thomas and Martha Wayne, Alfred awakens in Wayne Manor to a giant door opening in his room. He walks through it, where he sees a pod with a chair inside. He takes a seat, as a hologram of Batman activates. Batman explains that, because he has not entered a special code into the Bat-Computer (or any of its subsidiaries) for a certain length of time, this recording is playing (meaning he is probably dead). He tells Alfred of a very important mission the latter must undertake on his behalf (since Batman is unable to do so), but gives him a choice to accept or decline. Alfred promptly accepts; Batman explains what Alfred has meant to him throughout his life, saying to him what he did not have a chance to say at his death: "Goodbye, Dad." With this, Batman charges Alfred to assemble a new team of Outsiders. Alfred travels around the planet, recruiting Roy Raymond Jr., Black Lightning, Geo-Force (leader), Halo, Katana, the Creeper and Metamorpho. As a member of the team, each must become a true "outsider," living away from their families and the public eye for months at a time. Each member fills a role once filled by Batman, making this team a composite. This story arc ended with issue #25, and the series ended after 40 issues.
Fictional history:
Post–Final Crisis Dan DiDio and Phillip Tan began a new run of Outsiders in January 2010, in which Geo-Force appears to be acting more irrationally since his battle with Deathstroke. Without consulting the rest of the team (or Alfred), Geo-Force enters into a non-aggression pact with New Krypton (offering Markovia as a haven for all Kryptonians). The Eradicator is New Krypton's representative.
Fictional history:
Batman Inc. (2011–2013) In the 2011 Batman Inc. series by Grant Morrison, Batman assembles a new team of Outsiders which acts as a black-ops wing of Batman Incorporated. The team consists of Metamorpho, Katana, Looker, Halo and Freight Train, and is led by the Red Robin. This incarnation of the team proved short-lived, as all of its members (except the Red Robin) were caught in an explosion caused by Lord Death Man in the 2011 Batman Incorporated: Leviathan Strikes one-shot issue. The survivors were revealed in issue #1 of the second volume (2012): Metamorpho had kept everyone alive via his powers.
Fictional history:
In Green Arrow (vol. 5) (2013–2016) Beginning with Jeff Lemire's run of Green Arrow (vol. 5) in DC's The New 52 continuity, a new version of the 'Outsiders' was introduced. This is explained as being an ancient secret society dedicated to the elimination of corruption, but which itself has grown corrupt. Its membership is formed from the leaders of various clans centred around totemic weapons: the Mask, the Fist, the Arrow, the Axe, the Spear, the Shield, the Sword. A literal Green Arrow was the totemic weapon of the 'Arrow Clan', but this was destroyed by the Green Arrow as part of his symbolic rejection of the group. The Soultaker sword owned by Katana is the Sword Totem, making her the leader of the Sword Clan. The weapon totems supposedly grant immortality and enlightenment on the wielder, but the Green Arrow doubts such claims.
Fictional history:
The leader of the Arrow Clan was once Robert Queen, the Green Arrow's father. With his apparent death, it passed to Komodo (Simon Lacroix), an evil archer. It would later be passed to Shado, Robert Queen's former lover and another master archer. Katana heads the Sword Clan. An unkillable shapeshifter named Magus heads the Mask Clan. A physically intimidating man known as the Butcher leads the Axe Clan. Golgotha, leader of the Spear Clan, for a time led the Outsiders overall. Onyx leads the Fist Clan. The Shield Clan is led by Kodiak, who in addition to his mastery of the shield, wears a terrifying skull mask.
Fictional history:
DC Rebirth The original Outsiders are reintroduced in Dark Days: The Forge #1 (2017), a prelude to DC's Dark Nights: Metal crossover, in an expository scene which explains that Batman formed the Outsiders (Black Lightning, Metamorpho, Geo-Force, Katana, and Halo) to investigate a mystery concerning the DC Universe which connects the strangeness of the Multiverse to the amazing properties of metals—like Nth metal, the Court of Owls' resurrection metal, Aquaman's trident, and Doctor Fate's helmet—to metahumans and to mystical lands like Nanda Parbat, Skartaris, Atlantis, and Themyscira, and much more. He assembled the team to operate outside the knowledge of the government, the Justice League, or the Batman family. In the Watchmen sequel Doomsday Clock, Geo-Force took advantage of the metahuman arms race in light of "the Superman Theory" and assembled Markovia's version of the Outsiders. The group consists of Baroness Bedlam, the Eradicator, Knightfall, Terra, and Wylde. The Detective Comics story arc On the Outside (July 2018) had Batman and Black Lightning come together to defeat a villain known as Karma. In the aftermath of the battle, Batman told Black Lightning that he wanted him to lead a new team of Outsiders consisting of himself, Cassandra Cain, Duke Thomas, and Katana, who had fought as their allies in the fight against Karma. An ongoing comic book featuring this team, titled Batman and the Outsiders (vol. 3), was set to release in December 2018; the series was abruptly cancelled before finally releasing in May the following year.
Fictional history:
Later, Black Lightning assembles a new "modular" iteration of the team with himself, Duke, Katana, and Metamorpho, plus a "rotating fifth chair" for other superheroes like Robin, Green Arrow, or Mister Miracle. In the set-up to the new series in Batman: Urban Legends, Batman formally asks to join the team as the fifth chair to help Duke track down the location of his mother.
Enemies:
The following are enemies of the Outsiders:
Bad Samaritan - A master technician.
Baron Bedlam - A Markovian baron.
Doctor Moon - A mad scientist.
Duke of Oil - A cyborg who can control nearby nuclear devices.
The Force of July - A group of patriotic metahumans that was established by the A.S.A.
Major Victory - William Vickers is the team leader. He has enhanced strength, flight and energy blasts due to his government-designed power suit. Killed by Eclipso.
Abraham Lincoln Carlyle - Government liaison. Suffered a heart attack during a Suicide Squad attack.
B. Eric Blairman - Government liaison for the A.S.A. who had the Psycho Pirate's Medusa Mask.
Lady Liberty - Projects energy blasts from her torch and flight. Killed in an explosion aboard Kobra's satellite during the climax of the Janus Directive.
The Mayflower - Ability to control and grow plant life. Garotted by Ravan of the Suicide Squad.
The Silent Majority - Power of self duplication. All of his duplicates were killed in battle aboard Kobra's satellite when he attempts to destroy a device that would kill billions.
The Sparkler - Powers consisting of flight and the ability to project light as beams or even fireworks. Later slain by Doctor Light, who had severe mental problems concerning young superheroes.
Ishmael - A former experiment of the Ark Project that became a member of the League of Assassins.
Kobra - The leader of the Kobra organization.
The Masters of Disaster - A group of elemental metahumans.
The New Olympians - The New Olympians are Maxie Zeus' group of mercenaries selected to represent Greek and Roman gods to disrupt the 1984 Olympics.
Antaeus I - Member of the New Olympians. He had powers similar to the actual Antaeus, drawing his strength from the ground. Antaeus was defeated in combat by Geo-Force.
Argus - Member of the New Olympians. He can telepathically see events unfold from great distances. Argus is also a poor fighter since he was easily defeated by Batman. His abilities make him similar to the actual Argus Panoptes.
Diana - Member of the New Olympians. She is a superb archer and swordswoman who also commands fierce dogs. Diana was defeated by Katana in a sword fight. Her talents make her similar to the actual Artemis.
Nox - Member of the New Olympians. She controls a mysterious dark force that enables her to fly and can manipulate it to take on different shapes. Nox was defeated in a gymnastics match against Halo. Her abilities make her similar to the actual Nyx.
Enemies:
Proteus - A shape-shifting member of the New Olympians. Besides shape-shifting, he can also elongate his limbs. Proteus first used his shape-shifting powers to make himself look handsome (since he disliked his previous appearance) and even grow bird-like wings. He and Vulcanus were defeated in a deadly soccer match against Black Lightning and Metamorpho. His abilities are similar to the actual Proteus.
Enemies:
Vulcanus - Member of the New Olympians. He wields a powerful hammer and can hurl high-temperature fireballs. Vulcanus and Proteus were defeated in a deadly soccer match against Black Lightning and Metamorpho. His abilities are similar to the actual Hephaestus.
The Nuclear Family - A group of androids whose appearances are modeled after their deceased creator and his deceased family.
Strike Force Kobra - A group of villains whose powers are similar to some of Batman's enemies. They were created by Kobra.
Syonide - A female assassin.
Tobias Whale - An African American albino crime lord.
Velocity - A clone of the Flash that was created by the Brotherhood of Evil and sold to a Malian dictator named Ratu Bennin. While he possesses the Flash's speed, he does not possess his memories. Velocity was defeated by the Outsiders and taken into custody by Checkmate's White King, Alan Scott.
Collected editions:
Batman and the Outsiders (vol. 1) Outsiders (vol. 3) Batman and the Outsiders (vol. 2) / Outsiders (vol. 4) Batman and the Outsiders (vol. 3)
Other versions:
In the JLA: The Nail miniseries, the Outsiders were formed by Black Canary to give Oliver Queen his own team to focus on after he was paralyzed and lost an arm in a disastrous battle with Amazo; he quickly dismissed them, however, feeling like a "sidekick." The team consists of Black Canary, Black Lightning, Geo-Force, Katana, Metamorpho, and Shade, the Changing Man.
Other versions:
In the Batman: Earth One series of graphic novels, the Outsiders appear in volume 3 as an alliance of Gotham crimefighters brought together by Batman. The team consists of Batman, Robin, Batgirl, the Cat, Killer Croc, and Ragman, with Alfred Pennyworth and Lucas Fox supporting them in a subway version of the Batcave.
In other media:
The Outsiders appear in Batman: The Brave and the Bold, initially consisting of teenage versions of Black Lightning, Katana, and Metamorpho. Introduced in the episode "Enter the Outsiders!", the crime lord Slug brainwashes the trio into serving him. With Batman and Wildcat's help, the Outsiders break free of Slug's control, defeat him, and begin training under Wildcat's tutelage. Later, in "Requiem for a Scarlet Speedster!", Geo-Force and Halo have joined the Outsiders and work with Batman to stop Kobra and his cultists.
In other media:
A team loosely based on the Outsiders appears in Beware the Batman, consisting of Batman, Katana, Metamorpho, Oracle, and Man-Bat. According to producer Glen Murakami, the planned second season would have added Cyborg and Red Arrow to the team while Oracle becomes Robin and Katana becomes Nightwing.
Two variations of the Outsiders appear in Young Justice: Outsiders.
In other media:
The first version is a loose, unnamed group of outcasts and exiles formed from the aftermath of a mission to shut down a metahuman trafficking ring. Consisting of Halo, Geo-Force, Forager, and Cyborg, they are brought together by Nightwing, Black Lightning, Superboy, and Tigress, who train them to become members of the Team, and covertly work for the Justice League. Halo and Forager later join the Team, while Geo-Force goes on to join the Outsiders (see below).
In other media:
In the episode "First Impression", Beast Boy forms the Outsiders with Geo-Force, Wonder Girl, Blue Beetle, Kid Flash, and Static to serve as a public version of the Team and operate independently of the Justice League while secretly answering to them. In "Early Warning", El Dorado joins the team to inspire metahuman teenagers and children at the Meta-Human Youth Center to gain confidence in their abilities. In "Into the Breach", Cyborg joins them to help them find Halo after she is kidnapped by Granny Goodness. In "Nevermore", Markovian ambassador and Light member Zviad Baazovi secretly manipulates Geo-Force into killing his uncle, Baron Frederick DeLamb, and overthrowing his brother Gregor Markov as king of Markovia. As a result, the Outsiders oust him from the group and are joined by his sister Terra, Superboy, and Forager. The Outsiders later merge with the Justice League, Team, and Batman Inc. to form one group under Black Lightning's leadership. As of Young Justice: Phantoms, Robin, Windfall, Stargirl, Looker, and Livewire have joined the Outsiders while Cyborg transferred to the League.
In other media:
The Outsiders appear in the television series Black Lightning in two forms.
In the season one episode "LaWanda: The Book of Burial", Grace Choi carries an Outsiders comic, which Anissa Pierce notices while conducting research in a bookstore.
In the season three episode "The Book of Markovia: Chapter Four: Grab the Strap", Black Lightning, Anissa, Choi, Lightning, Brandon / Geo, Painkiller, TC, and A.S.A. agents Gardner Grayle and Erica Moran form a team loosely based on the Outsiders to battle Markovian forces.
**Queuing delay**
Queuing delay:
In telecommunication and computer engineering, the queuing delay or queueing delay is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address.
Router processing:
This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission) the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet so averages and statistics are usually generated when measuring and evaluating queuing delay. As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ) where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula can be used when no packets are dropped from the queue. The maximum queuing delay is proportional to buffer size. The longer the line of packets waiting to be transmitted, the longer the average waiting time is. The router queue of packets waiting to be sent also introduces a potential cause of packet loss. Since the router has a finite amount of buffer memory to hold the queue, a router which receives packets at too high a rate may experience a full queue. In this case, the router has no other option than to simply discard excess packets.
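The 1/(μ-λ) relationship is easy to check numerically. The following minimal sketch (not part of the original article; the traffic numbers and function names are illustrative) computes the analytic M/M/1 mean time through the queue and compares it with a small discrete-event simulation of Poisson arrivals and exponential service:

```python
import random

def mm1_average_delay(mu: float, lam: float) -> float:
    """Analytic M/M/1 mean time in system, W = 1/(mu - lam).

    Valid only for a stable queue (lam < mu) where no packets are dropped."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    return 1.0 / (mu - lam)

def simulate_mm1(mu: float, lam: float, n_packets: int = 200_000, seed: int = 1) -> float:
    """Crude single-server FIFO simulation; returns observed mean time in system."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current packet
    server_free = 0.0    # time at which the server next becomes idle
    total = 0.0
    for _ in range(n_packets):
        arrival += rng.expovariate(lam)             # Poisson arrival process
        start = max(arrival, server_free)           # queue if the server is busy
        server_free = start + rng.expovariate(mu)   # exponential transmission time
        total += server_free - arrival              # queueing + transmission delay
    return total / n_packets

if __name__ == "__main__":
    mu, lam = 1000.0, 800.0  # packets per second (illustrative)
    print("analytic :", mm1_average_delay(mu, lam))  # 1/(1000-800) = 0.005 s
    print("simulated:", simulate_mm1(mu, lam))       # should land near 0.005 s
```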
Router processing:
When the transmission protocol uses the dropped-packets symptom of filled buffers to regulate its transmit rate, as the Internet's TCP does, bandwidth is fairly shared at near theoretical capacity with minimal network congestion delays. Absent this feedback mechanism, the delays become both unpredictable and rise sharply (a symptom also seen as freeways approach capacity: metered onramps are the most effective solution there, just as TCP's self-regulation is the most effective solution when the traffic is packets instead of cars). This result is both hard to model mathematically and quite counterintuitive to people who lack experience with mathematics or real networks. Failing to drop packets, choosing instead to buffer an ever-increasing number of them, produces bufferbloat.
Notation:
In Kendall's notation, the M/M/1/K queuing model, where K is the size of the buffer, may be used to analyze the queuing delay in a specific system. Unlike the 1/(μ-λ) formula above, the M/M/1/K model covers the case where packets are dropped from a full queue. The M/M/1/K queuing model is the most basic and important queuing model for network analysis.
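For the finite buffer itself, the standard birth-death results for the M/M/1/K model (textbook formulas, not derived in this article) give both the probability that an arriving packet is dropped and the mean delay of the packets that are accepted. A hedged sketch, with K counted as the total number of places in the system:

```python
def mm1k_metrics(mu: float, lam: float, K: int):
    """Steady-state M/M/1/K drop probability and mean delay.

    Uses p_n = (1 - rho) * rho**n / (1 - rho**(K + 1)) for rho != 1,
    and Little's law applied to the accepted (non-dropped) traffic."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        p = [1.0 / (K + 1)] * (K + 1)          # distribution is uniform at rho == 1
    else:
        norm = 1.0 - rho ** (K + 1)
        p = [(1.0 - rho) * rho ** n / norm for n in range(K + 1)]
    p_drop = p[K]                              # arrivals finding the system full are lost
    mean_in_system = sum(n * pn for n, pn in enumerate(p))
    lam_eff = lam * (1.0 - p_drop)             # rate of packets actually admitted
    return p_drop, mean_in_system / lam_eff    # Little's law: W = L / lambda_eff

if __name__ == "__main__":
    # Illustrative numbers: a router sustaining 1000 packets/s, offered 950 packets/s,
    # with room for 20 packets in total.
    drop, delay = mm1k_metrics(mu=1000.0, lam=950.0, K=20)
    print(f"drop probability ~ {drop:.4f}, mean delay ~ {delay * 1000:.2f} ms")
```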
**Substances poisonous to dogs**
Substances poisonous to dogs:
Some substances are poisonous to dogs through ingestion, contact, or inhalation. The poisons' effects can vary from mild illness to death. The poisonous substances most commonly consumed by pet dogs are human foods (including chocolate and grapes), medication not suitable for animals, household products, and plants.
Exposure and signs of poisoning:
Signs of poisoning in dogs differ based on the cause and the type of exposure. If a dog has consumed poison, the owner should immediately contact a vet. The three basic types of exposure a dog owner should be aware of are ingestion, contact, and inhalation.
Exposure and signs of poisoning:
Ingestion Poisons are usually ingested when a curious dog eats or chews on something that is poisonous or contains some level of toxin. Unknowing dog owners also sometimes give their pets medications that the dog cannot break down and metabolize, and that thus become poisonous to it. Keeping toxic substances out of a dog's reach will avoid such scenarios. Medications and food intended for human consumption should not be given to any dog. Ingesting poisons may cause drooling, vomiting, diarrhea, irritability, tremors, twitching, seizures, rapid heartbeat, coma, or death. The speed of onset of these symptoms may vary with the poison and the quantity consumed.
Exposure and signs of poisoning:
Contact Poisoning in dogs by contact happens when a dog gets the substance on its skin or coat. These substances can cause skin irritation and burns, and can also be ingested when the dog attempts to clean them off its skin or coat by licking. Some of the substances can be removed from the skin or coat, though the dog may still need veterinary attention.
Exposure and signs of poisoning:
Inhalation Inhaled toxins have found their way into a dog's respiratory system and often cause difficulty breathing. If a dog owner is aware of inhalation of poisonous substances, professional medical intervention is recommended, as these toxins can make their way to other organs within the body.
Substances:
Human food Many human foods cause serious problems when ingested in large amounts. In 2011, the consumption of toxic foods was the number one cause of poisoning in dogs. In 2017, the ASPCA Animal Poison Control Center received 199,000 poisoning cases, almost one-fifth of which were the result of ingesting human foods.
Substances:
Avocado Avocados are known for having high amounts of persin, a chemical toxic to many animals, including dogs. Persin is found in the leaves, bark, pulp, and skin of the avocado, making it harder for dogs to ingest too much. However, high amounts of persin can cause an upset stomach in dogs, and eating large amounts of persin over a longer period of time has been known to cause heart failure in dogs. Large amounts of avocado flesh at once can cause vomiting and an upset stomach, and its high-fat content can cause pancreatitis in dogs.
Substances:
Chocolate Chocolate is dangerous for dogs because they are unable to break down theobromine and caffeine, and chocolate contains both. Darker chocolate and baking chocolate contain a higher amount of theobromine, thus they are more dangerous than milk chocolate or white chocolate. Small amounts of chocolate may cause vomiting or diarrhea, but larger amounts may begin to affect the heart and brain. Large amounts of chocolate cause the dog to suffer irregular heart rhythms or heart failure. Chocolate-style dog treats can be made with carob, which is similar to chocolate but not toxic to dogs.
Substances:
Grapes/raisins/currants These include any fruit of the Vitis species. It is unclear which substance within these fruits is toxic to dogs. There are several theories: a mycotoxin, salicylate, tartaric acid, or potassium bitartrate; the latter compounds are naturally found in grapes and can decrease blood flow to the kidneys. There is no reliable dose information to show how much is too much, and one dog may tolerate grapes or raisins better than another.
Substances:
Macadamia nuts Macadamia nuts have been included in the top foods to avoid feeding dogs. Like grapes or raisins, it is unknown what is in the nut that causes negative reactions. Minuscule amounts of the nut can cause adverse reactions – "as little as 1/10th of an ounce per roughly 2 pounds of body weight." Macadamia nuts are singled out as having higher toxicity. Other nuts in general are high in fat and can cause a dog to become ill.
Substances:
Xylitol The FDA has issued alerts to notify the public that xylitol, a sugar substitute, is harmful to dogs. It is used in sugar-free foods including gum, candy, and oral hygiene products. Some peanut butters also contain xylitol. Xylitol can cause liver failure and hypoglycemia because it stimulates rapid insulin production in the canine pancreas. Potential symptoms include loss of coordination, vomiting, or seizures. Xylitol is not always clearly labeled on sugar-free foods. Ingredient listings should indicate if xylitol is in the product. Food labels with the listing for "sugar alcohol" may contain xylitol. Other names for xylitol include birch sugar, E967, Meso-Xylitol, Xilitol, Xylit, pentane-1,2,3,4,5-pentol, and Sucre de Bouleau.
Substances:
Fruit pits and seeds Apples are safe for dogs, but apple seeds are not. Apple seeds, as well as persimmon, peach, and plum pits and other fruit seeds or pits, contain "cyanogenic glycosides". For example, if an apple seed's coating is broken as a dog eats an apple, cyanide could be released. Apple seeds should be removed before a dog eats the apple.
Substances:
Onions and garlic The genus Allium, of the family Alliaceae (the onion family), includes onion, garlic, shallots, scallions, chives, and leeks. These contain N-propyl disulfide, allyl propyl disulfide, and sodium n-propylthiosulfate, which can cause red blood cell damage and anemia. Thiosulphate poisoning from onions can cause orange to dark-red tinged urine, vomiting, and diarrhea.
Medication Human vitamin supplements, especially those containing iron, can damage the lining of the digestive tract and lead to kidney and liver damage. Ibuprofen (commonly known as Motrin or Advil) and acetaminophen (Tylenol) can cause liver damage in dogs.
Human antidepressant drugs like Celexa can cause neurological problems in dogs.
ADHD medications contain stimulants, such as methylphenidate, that can be life-threatening to dogs even if ingested in small amounts. Examples are Concerta, Vyvanse, Adderall, and Dexedrine.
Substances:
Household products Many cases of pet poisoning in the United States are caused by household products. Substances with a pH greater than 7 are considered alkalis. Usually, exposure causes some level of irritation. However, these substances generally have no taste or odor, which increases the chance of larger amounts being ingested by a dog. At high levels of consumption, alkalis become a greater danger for dogs. Bleach, oven and drain/pipe cleaners, hair relaxers, and lye are examples of alkaline products. Ethylene glycol, antifreeze, is extremely toxic to dogs. It has a sweet taste, and thus dogs will drink it. As little as 2 1/2 tablespoons can kill a medium-sized dog in 2–3 days. This type of poisoning is often fatal, as dog owners do not know their pet has ingested the antifreeze. De-icing fluids can also contain ethylene glycol.
Substances:
Paraquat is used for weeding and grass control. It is so toxic that blue dye is added so it is not confused with coffee, a pungent odor is added as a warning, and a vomiting agent is added in case it is ingested. In the US, it can only be used by those with a commercial license for its use. It is one of the most commonly used herbicides worldwide. Outside of the US, such licensing requirements may not exist.
Substances:
Pesticides Pesticides containing organophosphates can be fatal to dogs. "Disulfoton is an example found in rose care products." "They're considered junior-strength nerve agents because they have the same mechanism of action as nerve gases like sarin", explained Dana Boyd Barr, an exposure scientist at Emory University in Atlanta, Georgia, who has studied organophosphate poisoning. Organophosphates are not banned but require a license for use.
Substances:
Rodenticides Zinc phosphide is a common ingredient in rat poison or rodenticide. Zinc phosphide is a combination of phosphorus and zinc. If ingested, the acid in a dog's stomach turns the compound into phosphine, which is a toxic gas. The phosphine gas crosses into the dog's cells and causes the cells to die. Signs of poisoning include vomiting, anxiety, and loss of coordination. If a dog has not eaten and has an empty stomach when ingesting zinc phosphide, signs may not be apparent for up to 12 hours. Strychnine is another rodenticide that is dangerous and causes similar reactions to zinc phosphide exposure. If a dog survives 24–48 hours after this type of poisoning, it generally recovers well.
Substances:
Veterinary products Rimadyl, Deramaxx, and Previcox are types of NSAIDs specifically for veterinary use for osteoarthritis, inflammation, and pain control in dogs. These can cause liver and kidney issues in dogs.
In most cases, issues of poisoning by veterinary products are due to incorrect administration by the veterinarian or the dog owner.
Substances:
Plants Daffodil Daffodils contain lycorine, which can cause vomiting, drooling, diarrhea, stomachache, and heart and breathing issues. Any part of the plant may induce side effects, but the bulb is the most toxic. At higher amounts, the toxin can cause gastrointestinal issues or a drop in blood pressure. Tulip Any part of the tulip can be poisonous, but the bulb is the most toxic, causing irritation in the mouth and throat. Signs of this type of poisoning are drooling, vomiting, stomachache, and diarrhea.
Substances:
Azalea Azaleas contain grayanotoxins. This toxin passes through the dog's body quickly and symptoms of vomiting, diarrhea, stomach pain, weakness, or abnormal heart rate usually subside in a few hours.
Oleander Oleander contains cardiac glycosides oleandrin and nerioside, and when ingested can result in fatal heart abnormalities, muscle tremors, incoordination, vomiting, and bloody diarrhea. The signs can start within a few hours and cause a dog's condition to decline quickly, thus treatment is often not successful.
Substances:
Dieffenbachia Dieffenbachia causes oral irritation, vomiting, and difficulty swallowing in dogs. This plant contains calcium oxalate crystals. After ingestion, a dog may have a hard time swallowing and begin drooling or coughing as if choking. Dieffenbachia can also cause permanent damage to critical organs, including the liver and kidneys, and in severe cases may lead to coma or death.
Substances:
Sago palm Sago palms are toxic and potentially fatal to all pets, producing symptoms that include vomiting, diarrhea, seizures, and liver failure. The leaves and bark are both harmful, and the seeds (or "nuts") are even more toxic.
Cyclamen Possibly all species of cyclamen are toxic to dogs. Cyclamen contains triterpenoid saponins that irritate skin and are toxic to dogs.
Castor bean Castor beans, the seeds of the castor oil plant, contain ricin, which is toxic to dogs. Ingestion can be fatal depending on how much of the plant is consumed. The beans have a higher concentration of ricin and, if chewed instead of swallowed whole, will cause increased toxicity.
Substances:
Hemlock The USDA lists water hemlock as “the most violently toxic plant that grows in North America”. Dog deaths due to hemlock poisoning are unusual, and most animal deaths are of cows or other grazing animals. If a dog does ingest hemlock, the cicutoxin in the plant can be fatal very quickly, as it prevents the heart and nervous system from functioning normally.
Treatment:
There are many possible treatment paths for poisoning in dogs. One of the most important parts of any treatment is timing. A dog that has been exposed to a toxic substance has a better chance of recovery if treatment is initiated quickly. A veterinarian can determine if inducing vomiting is an appropriate action to remove a poisonous substance from a dog's stomach. Treatment for swelling may require an antihistamine or other anti-inflammatory drug. Dogs may be put under anesthesia for their stomach to be flushed or given an activated charcoal solution to prevent absorption in the stomach.
Treatment:
Blood tests will indicate enzyme levels related to liver, kidney, and bowel functions. Blood tests will also show levels of red and white blood cells and platelet levels. Just as in humans, there are established ranges for normal functions, and blood test results will indicate what may be malfunctioning in a dog's body.
In the case of poisons that cause liver damage, intravenous fluids assist in flushing toxins from the dog's body and may be combined with medications to help liver function. Treatment will be more effective if the type of poison is known.
**Simple Bus Architecture**
Simple Bus Architecture:
The Simple Bus Architecture (SBA) is a form of computer architecture. It is made up of software tools and intellectual property cores (IP cores) interconnected by buses using simple and clear rules that allow the implementation of an embedded system-on-a-chip (SoC). Basic templates are provided to accelerate design. The VHDL code that implements this architecture is portable.
Master core:
The master core is a finite state machine (FSM) and performs basic data flow and processing, similar to a microprocessor, but with lower consumption of logic resources.
Wishbone:
SBA is an application and a simplified version of the Wishbone specification. SBA implements the minimum essential subset of the Wishbone signal interface. It can be connected with simple Wishbone IP cores. SBA defines three types of cores: masters, slaves, and auxiliaries. Several slave IP cores were developed following the SBA architecture, many to implement virtual instruments.
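As a rough illustration of the kind of master-slave interaction a minimal Wishbone subset implies, here is a behavioral sketch in Python rather than the architecture's native VHDL; the signal names (STB, WE, ACK) follow Wishbone conventions, while the register-file slave and its single-cycle timing are invented for the example:

```python
class ToySlave:
    """Toy Wishbone-style slave: a tiny register file that acks in one cycle."""
    def __init__(self):
        self.regs = {0x0: 0xAB, 0x4: 0xCD}

    def cycle(self, stb: bool, we: bool, adr: int, dat_w: int = 0):
        """One bus cycle: returns (ack, dat_r)."""
        if not stb:
            return False, 0                 # no strobe, no acknowledge
        if we:
            self.regs[adr] = dat_w          # write transaction
            return True, 0
        return True, self.regs.get(adr, 0)  # read transaction

def master_read(slave: ToySlave, adr: int, timeout: int = 8) -> int:
    """FSM-like master read: hold STB asserted until the slave raises ACK."""
    for _ in range(timeout):                # tolerate wait states
        ack, dat_r = slave.cycle(stb=True, we=False, adr=adr)
        if ack:
            return dat_r
    raise TimeoutError("no ACK from slave")

if __name__ == "__main__":
    print(hex(master_read(ToySlave(), 0x0)))  # 0xab
```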
**Parastatistics**
Parastatistics:
In quantum mechanics and statistical mechanics, parastatistics is one of several alternatives to the better known particle statistics models (Bose–Einstein statistics, Fermi–Dirac statistics and Maxwell–Boltzmann statistics). Other alternatives include anyonic statistics and braid statistics, both of these involving lower spacetime dimensions. Herbert S. Green is credited with the creation of parastatistics in 1953.
Formalism:
Consider the operator algebra of a system of N identical particles. This is a *-algebra. The symmetric group $S_N$ acts on this operator algebra, with the intended interpretation of permuting the N particles. Quantum mechanics requires focus on observables having a physical meaning, and the observables would have to be invariant under all possible permutations of the N particles. For example, in the case $N=2$, $R_2-R_1$ cannot be an observable because it changes sign if we switch the two particles, but the distance $|R_2-R_1|$ between the two particles is a legitimate observable.
Formalism:
In other words, the observable algebra would have to be a *-subalgebra invariant under the action of SN (noting that this does not mean that every element of the operator algebra invariant under SN is an observable). This allows different superselection sectors, each parameterized by a Young diagram of SN.
In particular: For N identical parabosons of order p (where p is a positive integer), permissible Young diagrams are all those with p or fewer rows.
For N identical parafermions of order p, permissible Young diagrams are all those with p or fewer columns.
If p is 1, this reduces to Bose–Einstein and Fermi–Dirac statistics respectively.
If p is arbitrarily large (infinite), this reduces to Maxwell–Boltzmann statistics.
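The row and column rules are easy to enumerate explicitly: the symmetry types for N particles are the integer partitions of N, read as Young diagrams, and the order-p constraint just filters them. A small illustrative sketch (the helper names are ours, not standard notation):

```python
def partitions(n, max_part=None):
    """Yield the integer partitions of n as weakly decreasing tuples,
    i.e. the row lengths of the Young diagrams with n boxes."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def permissible_diagrams(n, p, kind):
    """Young diagrams allowed for n identical para-particles of order p:
    parabosons keep diagrams with at most p rows, parafermions those with
    at most p columns (equivalently, largest part at most p)."""
    if kind == "paraboson":
        return [d for d in partitions(n) if len(d) <= p]
    if kind == "parafermion":
        return [d for d in partitions(n) if (d[0] if d else 0) <= p]
    raise ValueError("kind must be 'paraboson' or 'parafermion'")

if __name__ == "__main__":
    print(permissible_diagrams(4, 2, "paraboson"))    # [(4,), (3, 1), (2, 2)]
    print(permissible_diagrams(4, 2, "parafermion"))  # [(2, 2), (2, 1, 1), (1, 1, 1, 1)]
```

For p = 1 the filters leave only the single row (the fully symmetric diagram, Bose–Einstein) or the single column (fully antisymmetric, Fermi–Dirac), matching the reductions stated above.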
Trilinear relations:
There are creation and annihilation operators $a_k$, $a_k^\dagger$ satisfying the trilinear commutation relations

$$[a_k,[a_l^\dagger,a_m]_\pm]_- = 2\delta_{kl}\,a_m,$$

$$[a_k,[a_l^\dagger,a_m^\dagger]_\pm]_- = 2\delta_{kl}\,a_m^\dagger \pm 2\delta_{km}\,a_l^\dagger,$$

$$[a_k,[a_l,a_m]_\pm]_- = 0,$$

where $[\cdot,\cdot]_-$ denotes the commutator and $[\cdot,\cdot]_+$ the anticommutator; the upper signs give the paraboson case and the lower signs the parafermion case.
Quantum field theory:
A paraboson field of order p is $\phi(x)=\sum_{i=1}^{p}\phi^{(i)}(x)$, where if x and y are spacelike-separated points, $[\phi^{(i)}(x),\phi^{(i)}(y)]=0$ and $\{\phi^{(i)}(x),\phi^{(j)}(y)\}=0$ if $i\neq j$, where $[\cdot,\cdot]$ is the commutator and $\{\cdot,\cdot\}$ is the anticommutator. Note that this disagrees with the spin-statistics theorem, which is for bosons and not parabosons. There might be a group such as the symmetric group $S_p$ acting upon the $\phi^{(i)}$. Observables would have to be operators which are invariant under the group in question. However, the existence of such a symmetry is not essential.
Quantum field theory:
A parafermion field of order p is $\psi(x)=\sum_{i=1}^{p}\psi^{(i)}(x)$, where if x and y are spacelike-separated points, $\{\psi^{(i)}(x),\psi^{(i)}(y)\}=0$ and $[\psi^{(i)}(x),\psi^{(j)}(y)]=0$ if $i\neq j$. The same comment about observables would apply, together with the requirement that they have even grading under the grading where the $\psi$ fields have odd grading.
The parafermionic and parabosonic algebras are generated by elements that obey the commutation and anticommutation relations. They generalize the usual fermionic algebra and the bosonic algebra of quantum mechanics. The Dirac algebra and the Duffin–Kemmer–Petiau algebra appear as special cases of the parafermionic algebra for order p = 1 and p = 2, respectively.
Quantum field theory:
Explanation Note that if x and y are spacelike-separated points, $\phi(x)$ and $\phi(y)$ neither commute nor anticommute unless $p=1$. The same comment applies to $\psi(x)$ and $\psi(y)$. So, if we have n spacelike-separated points $x_1,\dots,x_n$, then $\phi(x_1)\cdots\phi(x_n)|\Omega\rangle$ corresponds to creating n identical parabosons at $x_1,\dots,x_n$. Similarly, $\psi(x_1)\cdots\psi(x_n)|\Omega\rangle$ corresponds to creating n identical parafermions. Because these fields neither commute nor anticommute, $\phi(x_{\pi(1)})\cdots\phi(x_{\pi(n)})|\Omega\rangle$ and $\psi(x_{\pi(1)})\cdots\psi(x_{\pi(n)})|\Omega\rangle$ give distinct states for each permutation $\pi$ in $S_n$.
Quantum field theory:
We can define a permutation operator $E(\pi)$ by $E(\pi)\,\phi(x_1)\cdots\phi(x_n)|\Omega\rangle=\phi(x_{\pi^{-1}(1)})\cdots\phi(x_{\pi^{-1}(n)})|\Omega\rangle$ and $E(\pi)\,\psi(x_1)\cdots\psi(x_n)|\Omega\rangle=\psi(x_{\pi^{-1}(1)})\cdots\psi(x_{\pi^{-1}(n)})|\Omega\rangle$ respectively. This can be shown to be well-defined as long as $E(\pi)$ is only restricted to states spanned by the vectors given above (essentially the states with n identical particles). It is also unitary. Moreover, $E$ is an operator-valued representation of the symmetric group $S_n$ and as such, we can interpret it as the action of $S_n$ upon the n-particle Hilbert space itself, turning it into a unitary representation.
Quantum field theory:
QCD can be reformulated using parastatistics with the quarks being parafermions of order 3 and the gluons being parabosons of order 8. Note this is different from the conventional approach where quarks always obey anticommutation relations and gluons commutation relations.
**Bayley Scales of Infant Development**
Bayley Scales of Infant Development:
The Bayley Scales of Infant and Toddler Development (version 4 was released in September 2019) is a standard series of measurements originally developed by psychologist Nancy Bayley used primarily to assess the development of infants and toddlers, ages 1–42 months. This measure consists of a series of developmental play tasks, takes between 45 and 60 minutes to administer, and derives a developmental quotient (DQ) rather than an intelligence quotient (IQ). Raw scores of successfully completed items are converted to scale scores and to composite scores. These scores are used to determine the child's performance compared with norms taken from typically developing children of their age (in months). The Bayley-III has three main subtests: the Cognitive Scale, which includes items such as attention to familiar and unfamiliar objects, looking for a fallen object, and pretend play; the Language Scale, which taps understanding and expression of language, for example, recognition of objects and people, following directions, and naming objects and pictures; and the Motor Scale, which assesses gross and fine motor skills such as grasping, sitting, stacking blocks, and climbing stairs. There are two additional Bayley-III scales that depend on parental report: the Social-Emotional scale, which asks caregivers about such behaviors as ease of calming, social responsiveness, and imitation play, and the Adaptive Behavior scale, which asks about adaptations to the demands of daily life, including communication, self-control, following rules, and getting along with others. The Bayley-III Cognitive and Language scales are good predictors of preschool mental test performance. These scores are largely used for screening, helping to identify the need for further observation and intervention, as infants who score very low are at risk for future developmental problems.
Development:
Prior to the first official scale by Nancy Bayley, research was conducted to determine which important variables should be included in a cumulative developmental test for infants. In 1965, Nancy Bayley conducted an experiment examining mental and motor test scores for infants aged 1 to 15 months, comparing sex, birth order, race, geographical location, and parental education. No differences in scores were found for either scale between boys and girls, first-born and later-born, education of either father or mother, or geographic residence. No differences were found between African Americans and Caucasians on the Mental Scale, but the African American babies tended consistently to score above the Caucasians on the Motor Scale. These findings emphasized the need to study in careful detail the development of mental processes in the second year of life, within which evidently will be found the explanation of the socioeconomic and ethnic differences in mental functioning that are repeatedly found for children of 4 years and older. Following the need for further investigation, Nancy Bayley conducted a related experiment testing the reliability of her revised scale of mental and motor development during the first year of life, which yielded the following results: (1) Mental Scale items with high tester-observer and high test-retest reliabilities deal with object-oriented behavior; (2) Mental Scale items with low test-retest reliabilities require social interaction; (3) Motor Scale items with high tester-observer and high test-retest reliabilities deal with independent control of head, trunk, and extremities; (4) Motor Scale items with low test-retest reliability require assistance by an adult. These findings had implications for the early diagnosis of neural malfunctioning. Beginning in 1967, Nancy Bayley also studied infant vocalizations and their relationship to mature intelligence, monitoring participants in longitudinal studies that followed infants' vocalizations of displeasure and satisfaction and correlated them with the language skills of the same individuals over childhood and adolescence, into early adulthood. The results indicate that vocalizations did significantly correlate with girls' later intelligence, increasingly so with age, and more highly with verbal than performance scores.
First Edition (1969–1993):
In 1983, 25 low-risk mother-infant pairs participated in a research project to predict the performance of 21-month-olds on the Mental Scale of the Bayley Scale of Infant Development (BSID-1) from characteristics of infants and mothers. Questionnaires assessed maternal responsive attitude during the prenatal period, the Neonatal Behavioral Assessment Scale was administered at 5 and 10 days, and mothers and infants were observed together at 3 months. Babies were then tested on the BSID-1 when they were 21 months of age. Mothers' level of education, a responsive maternal attitude, and 3-month smiling and eye contact were found to predict infant performance on the Mental Scale of the BSID-1, lending support to its validity.
Second Edition (1993–2006):
Application While applying the Bayley Scales of Infant Development (BSID-II), it was found that the scales may lead to underestimates of cognitive abilities in infants with Down syndrome. Researchers excluded a number of items that implicated language, motor, attentional, and social functioning from the original measures; the modified form was administered to 17 infants with Down syndrome and to 41 typically developing infants. Results suggested the modified version provided a meaningful and stable measure of cognitive functioning in infants with Down syndrome.
Second Edition (1993–2006):
Validity Researchers assessed the predictive validity of the BSID-II Mental Development Index (MDI) for cognitive function at school age for infants born with extremely low birth weight (ELBW). Data was studied from the BSID-II tests of 344 ELBW infants admitted to the neonatal intensive care unit at Rainbow Babies and Children's Hospital in Cleveland, OH from 1992–1995. It was found that the predictive validity of a subnormal MDI for cognitive function at school age is poor but better for ELBW children who have neurosensory impairments. This brought on concern that decisions to provide intensive care for ELBW infants in the delivery room might be biased because of reported high rates of cognitive impairments.
Third Edition (2006–2019):
Improvements The Bayley Scales of Infant and Toddler Development–Third Edition (Bayley-III) is a revision of the frequently used and well-known Bayley Scales of Infant Development–Second Edition (BSID-II; Bayley, 1993). Like its prior editions, the Bayley-III is an individually administered instrument designed to measure the developmental functioning of infants and toddlers. Other specific purposes of the Bayley-III are to identify possible developmental delay, inform professionals about specific areas of strength or weakness when planning a comprehensive intervention, and provide a method of monitoring a child's developmental progress. The most significant revision in the Bayley-III is the development of five distinct scales (as compared to three scales in the BSID-II) to be consistent with areas of appropriate developmental assessment for children from birth to age 3. Whereas the BSID-II provided Mental, Motor, and Behavior scales, the Bayley-III revision includes Cognitive, Language, Motor, Social-Emotional, and Adaptive Behavior scales. Considering that the primary intent of the Bayley-III is to identify children experiencing developmental delay and not to specifically diagnose a disorder, the floor and ceiling of the subtests and total test appear to be adequate. As would be expected from an adaptive behavior measure (i.e., the ABAS-II) that was developed independently of the Bayley-III, the floor for the Adaptive Behavior scale extends downward to a composite score of 40 (extending upward to a score of 160), whereas the remaining Bayley-III floor composite scores are relatively higher (Cognitive, 55–145; Language, 47–153; Motor, 46–154; Social-Emotional, 55–145). One area that was not improved, however, is the subtest floor scores for the youngest children in the sample (i.e., those aged 16 to 25 days). Likewise, a 2011 study comparing the relationship between test scores on the second and third editions of the Bayley Scales in extremely preterm children concluded that these scores should be interpreted with caution, as the correlation with the previous edition appears worse at lower test score values. Bayley-4 has been announced and will be available in September 2019.
Third Edition (2006–2019):
Application The relationship between abnormal feeding patterns and language performance on the BSID-III at 18–22 months among extremely premature infants was evaluated. A total of 1,477 preterm infants born at <26 weeks gestation completed an 18-month neurodevelopmental follow-up assessment including the Receptive and Expressive Language Subscales of the BSID-III. Abnormal feeding behaviors were reported in 193 (13%) of these infants at 18–22 months. It was determined with the help of the BSID-III that at 18 months adjusted age, premature infants with a history of feeding difficulties are more likely to have a language delay. Another, more recent study focused on how the application of the BSID-III was useful in recommending treatments for infants in a neonatal intensive care unit follow-up clinic. It assessed whether the BSID-III was predictive of a referral for further developmental therapy. Independent-sample t-tests comparing motor performance with recommendations for motor therapy found a significant difference in the gross motor scores between those who were and were not recommended for motor therapy. Findings indicated that the factors that influence follow-up recommendations are complex, and the test scores alone were not indicative of whether or not a referral was given.
**ATF4**
ATF4:
Activating transcription factor 4 (tax-responsive enhancer element B67), also known as ATF4, is a protein that in humans is encoded by the ATF4 gene.
Function:
This gene encodes a transcription factor that was originally identified as a widely expressed mammalian DNA binding protein that could bind a tax-responsive enhancer element in the LTR of HTLV-1. The encoded protein was also isolated and characterized as the cAMP-response element binding protein 2 (CREB-2). ATF4 is not a functional transcription factor by itself but one-half of many possible heterodimeric transcription factors. Because ATF4 can simultaneously participate in multiple distinct heterodimers, the overall set of genes that require ATF4 for maximal expression in a specific context (ATF4-dependent genes) can be a mixture of genes that are regulated by different ATF4 heterodimers, with some ATF4-dependent genes activated by one ATF4 heterodimer and other ATF4-dependent genes activated by other ATF4 heterodimers. The protein encoded by this gene belongs to a family of DNA-binding proteins that includes the AP-1 family of transcription factors, cAMP-response element binding proteins (CREBs) and CREB-like proteins. These transcription factors share a leucine zipper region that is involved in protein–protein interactions, located C-terminal to a stretch of basic amino acids that functions as a DNA-binding domain. Two alternative transcripts encoding the same protein have been described. Two pseudogenes are located on the X chromosome at q28 in a region containing a large inverted duplication. The ATF4 transcription factor is also known to play a role in osteoblast differentiation along with RUNX2 and osterix. Terminal osteoblast differentiation, represented by matrix mineralization, is significantly inhibited by the inactivation of JNK. JNK inactivation downregulates expression of ATF-4 and, subsequently, matrix mineralization. The IMPACT protein regulates ATF4 in C. elegans to promote lifespan. ATF4 is also involved in the cannabinoid Δ9-tetrahydrocannabinol–induced apoptosis in cancer cells, through the proapoptotic role of the stress protein p8 via its upregulation of the endoplasmic reticulum stress-related genes ATF4, CHOP, and TRB3.
Translation:
The translation of ATF4 is dependent on upstream open reading frames (uORFs) located in the 5'UTR. The location of the second uORF, aptly named uORF2, overlaps with the ATF4 open reading frame. During normal conditions, uORF1 is translated, and translation of uORF2 then occurs only after the eIF2 ternary complex (eIF2-TC) has been reacquired. Translation of uORF2 requires that the ribosomes pass by the ATF4 ORF, whose start codon is located within uORF2. This leads to its repression. However, during stress conditions, the 40S ribosome will bypass uORF2 because of a decrease in concentration of eIF2-TC, which means the ribosome does not acquire one in time to translate uORF2. Instead, ATF4 is translated.
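The delayed-reinitiation logic lends itself to a toy numerical model. Everything quantitative below is an assumption made up for illustration (the distances d1 and d2 and the rate constant k are placeholders, not measured values); the code only encodes the qualitative mechanism described above, namely that a scanning ribosome regains the eIF2 ternary complex at a rate proportional to its availability, and that where along the leader this happens decides between uORF2 and ATF4:

```python
import math

def atf4_outcome_probabilities(tc: float,
                               d1: float = 90.0,   # nt from uORF1 stop to uORF2 start (assumed)
                               d2: float = 190.0,  # nt from uORF2 start to ATF4 start (assumed)
                               k: float = 0.02):   # TC reacquisition rate per nt per unit TC (assumed)
    """Toy delayed-reinitiation model.

    The scanning distance before the ribosome regains the ternary complex is
    exponential with rate k * tc:
      regained before the uORF2 start            -> uORF2 translated, ATF4 repressed
      regained between the uORF2 and ATF4 starts -> ATF4 translated
      regained after the ATF4 start              -> neither is initiated
    Returns (P_uORF2, P_ATF4)."""
    r = k * tc
    p_uorf2 = 1.0 - math.exp(-r * d1)
    p_atf4 = math.exp(-r * d1) - math.exp(-r * (d1 + d2))
    return p_uorf2, p_atf4

if __name__ == "__main__":
    for tc in (1.0, 0.3, 0.1):  # normal -> stressed (scarce ternary complex)
        p2, p4 = atf4_outcome_probabilities(tc)
        print(f"TC availability {tc:4.1f}:  P(uORF2) = {p2:.2f}   P(ATF4) = {p4:.2f}")
```

With these placeholder numbers, lowering TC availability shifts initiation away from the repressive uORF2 and toward the ATF4 start, reproducing the stress-induced switch in outline only.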
**Electroviscous effects**
Electroviscous effects:
Electroviscous effects, in the chemistry of colloids and surface chemistry, are, according to an IUPAC definition, the effects of the particle surface charge on the viscosity of a fluid.
Electroviscous effects:
The viscoelectric effect is one by which an electric field near a charged interface influences the structure of the surrounding fluid and so affects the viscosity of the fluid. The kinematic viscosity of a fluid, η, can be expressed as a function of the electric potential gradient (electric field) $\vec{E}$ by an equation of the form $\eta=\eta_{0}\left(1+f\,|\vec{E}|^{2}\right)$, where $\eta_{0}$ is the viscosity in the absence of the field and f is the viscoelectric coefficient of the fluid.
Electroviscous effects:
The value of f for water (at ambient temperature) has been estimated to be $(0.5\text{–}1.0)\times10^{-15}\ \mathrm{V^{-2}\,m^{2}}$.
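Given that range for f, the size of the effect at interfacial field strengths is straightforward to estimate; the sketch below assumes the quadratic form η = η₀(1 + f|E|²) stated above, and the field magnitudes are illustrative:

```python
def relative_viscosity(E: float, f: float = 0.75e-15) -> float:
    """eta / eta0 = 1 + f * E**2, with f in V^-2 m^2 and E in V/m.
    The default f sits mid-range in the estimate quoted for water."""
    return 1.0 + f * E ** 2

if __name__ == "__main__":
    for E in (1e6, 1e7, 1e8, 1e9):  # V/m, illustrative interfacial fields
        print(f"E = {E:.0e} V/m  ->  eta/eta0 = {relative_viscosity(E):.6g}")
```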
**Feeder link**
Feeder link:
A feeder link is – according to Article 1.115 of the International Telecommunication Union's (ITU) ITU Radio Regulations (RR) – defined as: A radio link from an earth station at a given location to a space station, or vice versa, conveying information for a space radiocommunication service other than for the fixed-satellite service. The given location may be at a specified fixed point, or at any fixed point within specified areas.
Feeder link:
Each station shall be classified by the service in which it operates permanently or temporarily.
**Honor 8x**
Honor 8x:
The Honor 8X is a smartphone made by Huawei under their Honor sub-brand. It is the successor of the Huawei Honor 7X within the Huawei Honor series.
Specifications:
Display and Camera The Honor 8X has a 6.5-inch FHD+ display with a screen resolution of 1,080 × 2,340 pixels, a pixel density of 396 PPI, and a 19.5:9 aspect ratio. The phone has a dual rear camera consisting of a 20-megapixel main sensor and a 2-megapixel depth sensor, plus a 16-megapixel front camera. One feature that is advertised is its "AI Photography", which attempts to improve images by adjusting colours and focus based on the type of scene.
Specifications:
Configuration and specifications The base model Honor 8X has 64GB of internal storage and 4GB of RAM. It can be purchased with up to 128GB of internal storage and 6GB of RAM. The internal storage can be expanded with a microSD card. This device is powered by Huawei's HiSilicon Kirin 710 chipset, with four Cortex-A73 and four Cortex-A53 CPU cores. The GPU is a Mali-G51 MP4. It has a rear-mounted fingerprint scanner. The Honor 8X uses a 3,750 mAh lithium polymer battery.
Specifications:
Connectivity The phone supports 4G connectivity and has two nano-SIM slots. It supports Wi-Fi 802.11, Bluetooth 4.2, GPS, NFC, and has an FM radio receiver. Charging is done via a micro-USB port. A bottom 3.5 mm audio jack supports wired headphones.
Software The Honor 8X launched with Android Oreo and Huawei's EMUI 8.0, and can be upgraded to HarmonyOS 3.0 with EMUI 10.0.
**Design load**
Design load:
In a general sense, the design load is the maximum amount of something a system is designed to handle or the maximum amount of something that the system can produce, which are very different meanings. For example, a crane with a design load of 20 tons is designed to be able to lift loads that weigh 20 tons or less. However, when a failure could be catastrophic, such as a crane dropping its load or collapsing entirely, a factor of safety is necessary. As a result, the crane should lift about 2 to 5 tons at the most. In structural design, a design load is greater than the load which the system is expected to support. This is because engineers incorporate a safety factor in their design, in order to ensure that the system will be able to support at least the expected loads (called specified loads), despite any problems with construction, materials, etc. that go unnoticed during construction.
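The crane arithmetic above can be written out directly: dividing the design (maximum) capacity by an assumed factor of safety gives the allowable working load. A minimal sketch with illustrative numbers (a factor of 4 to 10 reproduces the 2-to-5-ton range in the text):

```python
def safe_working_load(design_load: float, factor_of_safety: float) -> float:
    """Allowable load = design (maximum) load divided by the factor of safety."""
    if factor_of_safety < 1.0:
        raise ValueError("a factor of safety below 1 implies expected failure")
    return design_load / factor_of_safety

if __name__ == "__main__":
    design = 20.0                # tons, the crane example from the text
    for fos in (4.0, 10.0):      # illustrative safety factors for lifting gear
        print(f"factor of safety {fos:4.1f}: safe load = {safe_working_load(design, fos):.1f} tons")
```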
Design load:
A heater would have a general design load, meaning the maximum amount of heat it can produce. A bridge would have a specified load, with the design load being determined by engineers and applied as a theoretical load intended to ensure that the real-world structure can actually carry the specified load.
**Search Party**
Search Party:
Search party is a general term referring to a group of people organized to look for someone or something that is lost. It may also refer to: Search Party (album), by ¡Mayday! (2017); Search Party (film), a 2014 American comedy film; or Search Party (TV series), a 2016 American dark comedy series
**American Studies in Papyrology**
American Studies in Papyrology:
American Studies in Papyrology is a book series established in 1966 by the American Society of Papyrologists. The series editors are James Keenan (editor-in-chief), Kathleen McNamee, and Arthur Verhoogt.
Volumes:
1960s Essays in Honor of C. Bradford Welles, ed. A.E. Samuel. 1966.
Yale Papyri in the Beinecke Rare Book and Manuscript Library I, ed. John F. Oates, A.E. Samuel and C.B. Welles. New Haven and Toronto 1967. (= P. Yale 1) Inventory of Compulsory Services in Ptolemaic and Roman Egypt, by N. Lewis. 1968.
The Taxes in Grain in Ptolemaic Egypt: Granary Receipts from Diospolis Magna, 164-88 B.C., by Z.M. Packman. 1968.
Euripides Papyri I, Texts from Oxyrhynchus, by B.E. Donovan. 1969.
1970s Documentary Papyri from the Michigan Collection, ed. Gerald Michael Browne. Toronto 1970. (= P.Mich. X.) Proceedings of the Twelfth International Congress of Papyrology, Ann Arbor, Michigan, 12–17 August 1968, ed. D.H. Samuel. 1970.
The Ptolemaic and Roman Idios Logos, by P.R. Swarney. 1970.
Papyri from the Michigan Collection, ed. J.C. Shelton. Toronto 1971. (= P.Mich. XI.) Death and Taxes: Ostraka in the Royal Ontario Museum I, ed. A.E. Samuel, W.K. Hastings, A.K. Bowman, R.S. Bagnall. Toronto 1971. (= O. Ont. Mus. I) The Town Councils of Roman Egypt, by A.K. Bowman. 1971.
The Four Greek Hymns of Isidorus and the Cult of Isis, by V.F. Vanderlip. 1972.
Greek Terms for Roman Institutions: A Lexicon and Analysis, by H.J. Mason. 1974.
Michigan Papyri XII, ed. G.M. Browne. Toronto 1975. (= P.Mich. XII.) Ostraka in the Royal Ontario Museum II, ed. R.S. Bagnall and A.E. Samuel. Toronto 1976. (= O. Ont. Mus. II) Chester Beatty Biblical Papyri IV and V, by A. Pietersma. 1977.
Washington University Papyri I, ed. V.B. Schuman. Missoula 1980. (= P.Wash.Univ. I) Imperial Estates in Egypt, by G.M. Parassoglou. Las Palmas 1978.
Status Declarations in Roman Egypt, by C.A. Nelson. Las Palmas 1978.
Fourth Century Documents from Karanis, ed. R.S. Bagnall and N. Lewis. Missoula 1979. (= P.Col. VII.) Le Nome Hermopolite: toponymes et sites, by Marie Drew-Bear. Missoula 1979.
1980s Michigan Papyri XIV, ed. V.P. McCarren. Chico 1980. (= P.Mich. XIV.) Proceedings of the Sixteenth International Congress of Papyrology, ed. R.S. Bagnall, G.M. Browne, A.E. Hanson and L. Koenen. Chico 1981.
Yale Papyri in the Beinecke Rare Book and Manuscript Library II, ed. S.A. Stephens. Chico 1985.
Register of Oxyrhynchites, 30 B.C.-A.D. 96, by B.W. Jones and J.E.G. Whitehorne. 1983.
Saite and Persian Demotic Cattle Documents, A Study in Legal Forms and Principles in Ancient Egypt, by E. Cruz-Uribe. 1985.
Grundlagen des koptischen Satzbaus, by H.J. Polotsky 1987.
1990s Columbia Papyri VIII, ed. R.S. Bagnall, T.T. Renner and K.A. Worp. Atlanta 1990. (= P.Col. VIII.) Grundlagen des koptischen Satzbaus, zweite Halfte, by H.J. Polotsky. 1990.
Michigan Papyri XVI, A Greek Love Charm from Egypt (P.Mich. 757), ed. and comm. by David G. Martinez. Atlanta 1991. (= P.Mich. XVI.) Ptocheia or Odysseus in Disguise at Troy (P.Koln 245), ed. and comm. by M.G. Parca. 1991.
Un Codex fiscal Hermopolite (P.Sorb. II 69), ed. J. Gascou. Atlanta 1994. (= P. Sorb. II) On Government and Law in Roman Egypt, by Naphtali Lewis, ed. A.E. Hanson. Atlanta 1995.
Columbia Papyri X, ed. by R.S. Bagnall and D. Obbink. Atlanta 1996.
The Michigan Medical Codex (P. Mich. XVII 753), by Louise C. Youtie. Atlanta 1996.
Writing, Teachers and Students in Graeco-Roman Egypt, by Raffaella Cribiore. Atlanta 1996.
The Herakleopolite Nome: A Catalogue of the Toponyms with Introduction and Commentary by Maria Rosaria Falivene. 1998.
Columbia Papyri XI by Timothy M. Teeter. 1998.
Columbia Papyri IX: The Vestis Miltaris Codex by Jennifer Sheridan. 1999.
Volumes:
2000s Papyri in Memory of P. J. Sijpesteijn edited by A. J. B. Sirks and K. A. Worp, asst. editors R.S. Bagnall and R.P. Salomons. ISBN 978-0-9700591-0-9 A Yale Papyrus (PYale III 137) in the Beinecke Rare Book and Manuscript Library III by Paul Schubert. 2001. ISBN 0-9700591-1-6 Essays and Texts in Honor of J. David Thomas. Ed. by Traianos Gagos and Roger S. Bagnall. 2001. ISBN 0-9700591-3-2 It is our Father who writes: Orders from the Monastery of Apollo at Bawit by Sarah J Clackson. ISBN 978-0-9700591-5-4 Greek Documentary Papyri from Egypt in the Berlin Aegyptisches Museum (P.Berl.Cohen) by Nahum Cohen. 2007; ISBN 978-0-9700591-6-1 Annotations in Greek and Latin Texts from Egypt by K McNamee. ISBN 978-0-9700591-7-8 Papyri and Essays in memory of Sarah Clackson, ed. Boudhors et al. ISBN 978-0-9700591-8-5.
Volumes:
In Pursuit of Invisibility: Ritual Texts from Late Roman Egypt by Richard Phillips. ISBN 978-0-9700591-9-2.
2010s To Mega Biblion: Book-Ends, End-Titles, and Coronides in Papyri with Hexametric Poetry by Francesca Schironi. ISBN 978-0-9799758-0-6.
A Transportation Archive from Fourth-Century Oxyrhynchus (P.Mich. XX), ed. P. J. Sijpesteijn and Klaas A. Worp with the assistance of Traianos Gagos and Arthur Verhoogt. ISBN 978-0-9799758-3-7. 2011.
Prosopography of Byzantine Aphrodito, by Giovanni Roberto Ruffini. ISBN 978-0-9799758-2-0. 2011.
Sixth-century Tax Register from the Hermopolite Nome, by Roger S. Bagnall, James G. Keenan, Leslie S. B. MacCoull. ISBN 978-0-9799758-4-4. 2011.
New Epigrams of Palladas: A Fragmentary Papyrus Codex (P.CtYBR inv. 4000) by Kevin Wilkinson. 2013.
Papyrological Texts in Honor of Roger S. Bagnall, ed. Rodney Ast, Hélène Cuvigny, Todd Hickey, Julia Lougovaya. ISBN 9780979975868. 2013.
**Mir-277 microRNA precursor family**
Mir-277 microRNA precursor family:
In molecular biology, mir-277 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
**Sweet Dreams (Dennett book)**
Sweet Dreams (Dennett book):
Sweet Dreams: Philosophical Obstacles to a Science of Consciousness is a 2005 book by the American philosopher Daniel Dennett, based on the text of the Jean Nicod lectures he gave in 2001.
Zombies:
Dennett extends his well-known attack on the philosophical notion of qualia by using the metaphor of philosophical zombies, as well as addressing many popular thought experiments. Dennett's conclusion is that there are no qualia and that the mind, and consciousness, can be understood and explained from the Naturalist school of thought.
Fame in the Brain:
Dennett re-poses the question of consciousness addressed in his 1991 book Consciousness Explained. In Consciousness Explained, Dennett established what he called the "multiple drafts model" of consciousness, which suggested that there was no singular space in the conscious mind. In other words, there is no special location in the brain that can be seen as the qualia-containing "consciousness module". Instead, he states that consciousness is smeared throughout the brain. He extends the model by creating a similar figure that he calls "Fame in the Brain" and suggests that the mind acts, to some degree, as an echo chamber, as well as the "bundle of semi-independent agencies" that he suggested in Consciousness Explained.
Fame in the Brain:
The main tenet of "Fame in the Brain" is that consciousness, much like fame, is not the cause, but the aftermath of certain brain processes. Dennett asks us to imagine an author whose book has yet to be released, but will result in unimaginable fame when it does. On Tuesday, when the book is to come out, he is scheduled to go on The Oprah Winfrey Show, to be interviewed on the BBC, and likely be nominated for several awards. However, on Monday, an earthquake destroys the entire city of San Francisco. Naturally, all the media hype that would have revolved around this author is drowned in the focus on San Francisco. Dennett asks, can this man be considered "famous"? He says that the man is in fact not famous even though the book that would have made him famous remains unchanged. This is because fame, according to Dennett, is not about the cause of the fame, but about the aftermath: the interviews, the magazine covers, the paparazzi, etc. Consciousness is the same way. In order for something to be considered "conscious", there must be enough correlating neural events that go with it (e.g. memory formation).
**Duck and cover**
Duck and cover:
"Duck and cover" is a method of personal protection against the effects of a nuclear explosion. Ducking and covering is useful in offering a degree of protection to personnel located outside the radius of the nuclear fireball but still within sufficient range of the nuclear explosion that standing upright and uncovered is likely to cause serious injury or death. In the most literal interpretation, the focus of the maneuver is primarily on protective actions one can take during the first few crucial seconds-to-minutes after the event, while the film of the same name and a full encompassing of the advice also cater to providing protection up to weeks after the event.
Duck and cover:
The countermeasure is intended as an alternative to the more effective target/citywide emergency evacuation when these crisis relocation programs would not be possible due to travel and time constraints. Maneuvers similar, but not identical, to Duck and Cover are also taught as the response to other sudden destructive events, such as an earthquake or tornado, in the comparable situation where preventive emergency evacuation is similarly not an option, again, due to time constraints. In these analogously powerful events, Drop, Cover and Hold on likewise prevents injury or death if no other safety measures are taken.
Duck and cover:
As a countermeasure to the lethal effects of nuclear explosions, Duck and Cover is effective both in the event of a surprise nuclear attack and during a nuclear attack of which the public has received some warning, which would likely come only a few minutes before the nuclear weapon arrives.
Procedure:
During a surprise nuclear attack Dropping immediately and covering exposed skin provide[s] protection against blast and thermal effects ... Immediately drop facedown. A log, a large rock, or any depression in the earth's surface provides some protection. Close eyes. Protect exposed skin from heat by putting hands and arms under or near the body and keeping the helmet on. Remain facedown until the blast wave passes and debris stops falling. Stay calm, check for injury, check weapons and equipment damage, and prepare to continue the mission.
Procedure:
Immediately after one sees the first flash of intense heat and light of the developing nuclear fireball, one should stop, get under some cover and drop/duck to the ground. There, one should assume a prone-like position, lying face-down, and to afford protection against the continuing heat of the explosion further cover exposed skin and the back of one's head with one's clothes; or, if no excess cover or cloth is available, one should cover the back of one's head and neck with one's hands.
Procedure:
Similar instructions, as presented in the Duck and Cover film, are contained in the British 1964 public information film Civil Defence Information Bulletin No. 5 and in the 1980s Protect and Survive public information series. Children in the Soviet Union likewise received almost identical classes on countermeasures, according to Inside the Kremlin's Cold War authors Zubok and Pleshakov. In U.S. Army training, soldiers are taught to fall down immediately and cover their face and hands in much the same way as is described above. In the classroom scene of the film, the rapid employment of school desks, as an improvised shelter following the awareness of the initial light flash, is a countermeasure primarily to offer protection from potential ballistic window glass lacerations when the slower moving blast wave arrives. However, in higher blast pressure zones, where partial-to-total building collapse may occur, it would also serve a similar role to that borne out from experience in urban search and rescue, where voids under the debris of collapsed buildings are common places for survivors to be found. More rigid examples of void-forming tables to shelter under include the "Morrison indoor shelter", which was widely distributed by the millions in Britain as a protective measure against building collapse, brought about by blast pressures generated during the conventional bombing of cities in World War II.
Procedure:
When warning is given Under the conditions where some warning is given, one is advised to find the nearest bomb shelter, or if one could not be found, any well-built building to stay and shelter in place. Sheltering is, as depicted in the film, also the final phase of the "duck and cover" countermeasure in the surprise attack scenario.
Procedure:
Cursory analysis The "duck and cover" countermeasure could save thousands of lives, because people, being naturally inquisitive, would otherwise run to windows to try to locate the source of the immensely bright flash generated at the instant of the explosion. During this time, unbeknownst to them, the slower moving blast wave would be rapidly advancing toward their position, only to arrive and cause the window glass to implode, shredding the onlookers. In the testimony of Dr. Hiroshi Sawachika, although he was sufficiently far away from the Hiroshima bomb himself and was not behind a pane of window glass when the blast wave arrived, those in his company who were suffered serious blast injuries, with broken glass and pieces of wood stuck into them.
Procedure:
During earthquakes and tornadoes Similar advice to "duck and cover" is given in many situations where structural destabilization or flying debris may be expected, such as during an earthquake or tornado. At a sufficient distance from a nuclear explosion, the blast wave produces similar results to these natural phenomena, so similar countermeasures are taken. In areas where earthquakes are common, a countermeasure known as "Drop, Cover, and Hold On!" is practiced. Likewise, in tornado-prone areas of the United States, especially those within Tornado Alley, tornado drills involve teaching children to move closer to the floor and to cover the backs of their heads to prevent injury from flying debris. Some US states also practice annual emergency tornado drills.
History:
The dangers of viewing explosions from behind window glass were known before the Atomic Age began, being a common source of injury and death from large chemical explosions. The Halifax Explosion of 1917, an ammunition ship exploding with the energy of roughly 2.9 kilotons of TNT, injured the eyes and faces of hundreds of people who stayed behind and looked out of their windows after seeing a bright flash, with some 200 blinded by broken glass when the slower moving blast arrived. Every window in the city of Halifax, Nova Scotia, was shattered in this catastrophe of human error. In the Record of the "Nagasaki A-bomb War Disaster", those close to the hypocenter (Matsuyama township) were described as all having been killed, with the exception of "a child who was in an air-raid shelter." A little further away, Professor Seiki of Nagasaki Medical School Hospital was building an air-raid dugout 400 m from the hypocenter of the detonation and survived. Chimoto-san, who was atop a distant hill overlooking the valley in which Nagasaki is located, performed the similar "hit the deck" maneuver upon seeing the bomb drop, notably prior to the detonation. However, despite these few seconds of relatively unique warning, he did not stay on the ground long enough after the flash subsided: he stood up prematurely, and the slower moving blast wave swept past him, carrying him along for a few meters and causing translational injuries. According to the 1946 book Hiroshima and other books that cover both bombings, in the days between the atomic bombings of Hiroshima and Nagasaki, some survivors of the first bombing went to Nagasaki and taught others about ducking after the atomic flash, informing them of the particularly dangerous threat of imploding window glass. As a result of this and other factors, far fewer died in the initial blast at Nagasaki than among those who were not taught to duck and cover. The general population, however, was not warned of the heat or blast danger following an atomic flash, due to the new and unknown nature of the atomic bomb; many people in Hiroshima and Nagasaki died while searching the skies, curious to locate the source of the brilliant flash. When people are indoors, running to windows to investigate the source of bright flashes in the sky remains a common and natural response. Thus, although the advice to duck and cover is over half a century old, ballistic glass lacerations caused the majority of the approximately 1,000 human injuries following the Chelyabinsk meteor air burst of February 15, 2013.
History:
This response was also observed among people in the vicinity of Hiroshima and Nagasaki.
Background:
The United States' monopoly on nuclear weapons was broken by the Soviet Union in 1949 when it tested its first nuclear explosive, the RDS-1. With this, many in the US Government, as well as many citizens, perceived that the United States was more vulnerable than it had ever been before. In 1950, during the first big Civil Defense push of the Cold War, and coinciding with the Alert America! initiative to educate Americans on nuclear preparedness, the adult-oriented Survival Under Atomic Attack was published. It contains "duck and cover" advice (or, more accurately, cover-and-then-duck advice) without using those specific terms, in its "Six Survival Secrets For Atomic Attacks" section: 1. Try to Get Shielded; 2. Drop Flat on Ground or Floor; 3. Bury Your Face in Your Arms ("crook of your elbow"). The child-oriented film Duck and Cover was produced a year later, in 1951, by the Federal Civil Defense Administration.
Background:
"Duck and cover" exercises quickly became a part of Civil Defense drills that every US citizen, from children to the elderly, was encouraged to practice so that they could be ready in the event of nuclear war.
Background:
Education efforts on the effects of nuclear weapons proceeded in stops and starts in the US due to competing alternatives. A once-classified US war game from the late 1950s and early 1960s, which examined varying levels of war escalation, warning and pre-emptive attack, estimated that approximately 27 million US citizens would have been saved with civil defense education. At the time, however, a full-scale civil defense program was judged less cost-effective than a ballistic missile defense (Nike Zeus) system, and as the Soviet adversary was believed to be rapidly increasing its nuclear stockpile, the efficacy of both would begin to enter a diminishing returns trend. When more became known about the cost and limitations of the Nike Zeus system in the early 1960s, Secretary of Defense Robert McNamara, serving under President John F. Kennedy, concluded that it was ineffective, especially in its benefit-cost ratio compared to other options; for instance, fallout shelters would save more Americans for far less money.
Efficacy during a nuclear explosion:
Within a considerable radius of the surface of the nuclear fireball, 0–3 kilometers, largely depending on the explosion's height, yield and the position of personnel, ducking and covering would offer negligible protection against the intense heat, blast and prompt ionizing radiation following a nuclear explosion. Beyond that range, however, many lives would be saved by following the simple advice, especially since at that range the main hazard is not ionizing radiation but blast injuries and thermal flash burns to unprotected skin. Following the bright flash of light of the nuclear fireball, the explosion's blast wave would take, from first light, 7 to 10 seconds to reach a person standing 3 km from the surface of the fireball, with the exact time of arrival depending on the speed of sound in air in their area. The time delay between the moment of an explosion's flash and the arrival of the slower moving blast wave is analogous to the commonly experienced delay between a flash of lightning and the arrival of thunder during a storm; thus, at the distances where the advice would be most effective, there would be ample time to take the prompt countermeasure of "duck and cover" against the blast's direct effects and flying debris. For very large explosions it can take 30 seconds or more, after the silent moment of flash, for a potentially dangerous blast wave overpressure to arrive at one's position. It is also worth noting that the commonly encountered graphs of lethal ranges of weapon effects as a function of yield are the unobstructed "open air", or "free air", ranges, which assume, among other things, a perfectly level target area and no passive shielding such as the attenuating effects of urban terrain masking (e.g. skyscraper shadowing). They are therefore considered to overestimate the lethal ranges that would be encountered in a real-world urban setting, this being most evident following a ground burst with explosive yield similar to first generation nuclear weapons.
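As a rough worked version of the flash-to-bang timing described above, the sketch below assumes the blast wave travels at roughly the sea-level speed of sound (about 340 m/s); close to the fireball the wave is initially supersonic, so this is only a first-order estimate at short range:

```python
# Flash-to-bang estimate: seconds between seeing the flash and the
# blast wave arriving, assuming the wave travels at roughly the
# ambient speed of sound. Close to the fireball the wave is faster,
# so treat this as a first-order approximation, not an exact figure.

SPEED_OF_SOUND_M_S = 340.0  # assumed sea-level value, ~15 degrees C

def blast_arrival_seconds(distance_m: float) -> float:
    """Approximate delay between the flash and blast-wave arrival."""
    return distance_m / SPEED_OF_SOUND_M_S

for d in (1_000, 3_000, 10_000):
    print(f"{d / 1000:>4.0f} km: ~{blast_arrival_seconds(d):.0f} s to duck and cover")
```

At 3 km this gives roughly 9 seconds, consistent with the 7 to 10 second window quoted above.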
Efficacy during a nuclear explosion:
To highlight the difference that being indoors, and especially below ground, can make: despite the lethal open-air radiation, blast and thermal zone extending well past her position at Hiroshima, Akiko Takakura survived the effects of the 16 kt atomic bomb at a distance of 300 meters from ground zero with only minor injuries, owing in greatest part to her position in the lobby of the Bank of Japan, a reinforced concrete building, at the time of the nuclear explosion. Highlighting the protection conferred on an individual who is below ground during a nuclear air burst, Eizo Nomura survived the same blast at Hiroshima at a distance of 170 meters from ground zero; Nomura, who was in the basement of what is now known as the rest house, also a reinforced concrete building, lived into his early 80s. In contrast to these cases of survival, the unknown person sitting outside, fully exposed, on the steps of the Sumitomo Bank next door to the Bank of Hiroshima on the morning of the bombing suffered what would eventually have been lethal third- to fourth-degree burns from the near instant nuclear weapon flash, had they not been killed by the slower moving blast wave when it reached them approximately one second later.
Efficacy during a nuclear explosion:
Blast effects Outdoors To illustrate the effect of lying flat on the ground in attenuating a weapon's blast: Miyoko Matsubara, one of the Hiroshima Maidens, recounting the bombing in a 1999 interview, said that she was outdoors and less than 1 mile from the hypocenter of the Little Boy bomb. Upon observing the weapon's silent flash she quickly lay flat on the ground; those who had been standing directly next to her, and her other fellow students, had simply disappeared from her sight when the blast wave arrived and blew them away. Position of the body can have a considerable influence on protection from blast effects. Lying prone on the ground will often materially lessen direct blast effects because of the protective defilade effects of irregularities in the ground surface; the ground also tends to deflect some of the blast forces upward. Standing close to a wall, even on the side from which the blast is coming, also lessens some of the effect. Orientation of the body also affects the severity of blast injury: anterior exposure of the body may result in lung injury, a lateral position may result in more damage to one ear than the other, while minimal effects are to be anticipated with the posterior surface of the body (feet) toward the source of the blast. The human body is more resistant to the overpressure itself than most buildings are; however, the powerful winds produced by this overpressure, as in a hurricane, are capable of throwing human bodies into objects or throwing debris at high velocity, both with lethal results, making casualties highly dependent on surroundings. For example, Sumiteru Taniguchi recounts that, while clinging to the trembling road surface after the Fat Man detonation, he witnessed another child being blown away, the destruction of buildings around him, and stones flying through the air. Similarly, Akihiro Takahashi and his classmates were blown about 10 meters by the blast of Little Boy, surviving because they did not collide with any walls during their flight through the air. Katsuichi Hosoya gave a near-identical testimony.
Efficacy during a nuclear explosion:
Indoors During the 2013 Chelyabinsk meteor explosion, a fourth-grade teacher in Chelyabinsk, Yulia Karbysheva, saved 44 children from potentially life-threatening ballistic window glass cuts by ordering them to hide under their desks when she saw the flash. Despite not knowing the origin of the intense flash of light, she ordered her students to execute a duck and cover drill. Karbysheva, who herself remained standing rather than ducking and covering, was seriously lacerated when the blast wave arrived and the window glass blew in, severing a tendon in one of her arms; not one of the students she ordered under their desks suffered a cut. A follow-up study of the effects of the meteor airburst determined that the windows most prone to breaking when exposed to a blast overpressure are those of school buildings, which tend to be large in area. The bombings of Hiroshima and Nagasaki demonstrated that the urban area of glass breakage is nearly 16 times greater than the area of significant structural/building damage. Although improved building codes since then may contribute to better building survival, there would be a higher likelihood of glass breakage, and therefore potential injury or death for people near windows, because many modern buildings have larger windows.
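Since area scales with the square of the radius, the 16-fold area ratio above implies glass breakage reaching out to about four times the structural-damage radius. A minimal sketch of that arithmetic (the 2 km radius is purely illustrative, not a figure from the source):

```python
import math

# Area scales with radius squared, so a 16x area ratio between glass
# breakage and significant structural damage implies glass breaking
# out to sqrt(16) = 4x the structural-damage radius.
area_ratio = 16.0
radius_ratio = math.sqrt(area_ratio)
structural_damage_radius_km = 2.0  # hypothetical illustrative value
glass_breakage_radius_km = radius_ratio * structural_damage_radius_km
print(f"Glass breakage reaches ~{radius_ratio:.0f}x farther out: "
      f"{glass_breakage_radius_km:.0f} km vs {structural_damage_radius_km:.0f} km")
```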
Efficacy during a nuclear explosion:
Flash & burn injuries The advice to cover one's exposed skin with anything that can cast a shadow, like the picnic blanket and newspaper used by the family in the film, may seem absurd at first when one considers the capabilities of a nuclear weapon. However, even the thinnest of barriers, such as cloth or plant leaves, would reduce the severity of burns on the skin from the thermal radiation of the flash, which is similar in average emission spectrum/color to sunlight. The thermal radiation spans the ultraviolet, visible and infrared range, but at a far higher intensity than sunlight, and this combination of light is capable of delivering burning radiant energy to exposed skin. As both the time to peak and the total duration of this burning thermal pulse are prolonged and increase with larger explosive yield, the pulse is usually at least a few seconds long for all high yield stockpiled weapons, creating the potential for protective countermeasures. High importance is given to closing the eyelids and covering the eyes, as temporary or permanent flash blindness is a potential risk without this covering, especially at night. A photograph taken about 1.3 km from the hypocenter of the Hiroshima bomb explosion showed that the shadowing effect of leaves from a nearby shrub protected a wooden utilities pole from charring discoloration due to the burst of thermal radiation; the rest of the pole, which was not under the protection of the leaves, was charred almost completely black. The flash energy required to produce essentially immediate, though transitory, non-propagating flaming differs by orders of magnitude from that required to achieve continued self-sustained propagating flaming for most combustible materials; in the case of untreated timber it depends largely on the depth of char. While the propagating fires in both Japanese cities were almost exclusively ignited by the blast wave overturning charcoal cooking braziers and similar secondary events, thermal flash fires from untreated fabric and timber in the urban environment are considered potentially the widest-reaching destructive effect of the higher yield explosive devices.
Efficacy during a nuclear explosion:
The Nevada Test Site, used for testing nuclear devices, had a dry desert environment with low humidity, which repeatedly demonstrated the flash-combustion effect during tests. Many investigative films made on location there, such as The House in the Middle, focused on the combustion of fabrics and clothing. Among the only human accounts at these high luminous intensities that do not involve the more common arc flash accidents, a number of the Hiroshima Maidens survived despite their close proximity to the explosion, within a range where the flash-fire of their customary Japanese summer attire, made of thin kimono cloth, was near instantaneous. As their clothing combusted, some of the Maidens performed an incomplete stop, drop and roll in an effort to extinguish the flames.
Efficacy during a nuclear explosion:
Initial nuclear radiation While not designed for those faced with low-yield neutron bombs, or for those who are in general so close to the nuclear fireball that prompt/initial radiation would be life-threatening in the short to medium term, ducking and covering would nevertheless slightly reduce exposure to the initial gamma rays, specifically the portion emitted after the first flash of visible light. The initial gamma rays are defined as those emitted from the fireball and the following mushroom cloud which can reach personnel on the ground for a total of approximately 1 minute, at which point the intensity of the radiation has diminished and the atmosphere itself is thick enough to act as full shielding. Approximately half of these gamma rays are emitted in the first second and the other half over the following 59; as gamma rays mostly travel in straight lines, people lying on the ground are more likely to have obstacles, such as building walls, foundations and car engines, between their bodies and the radiation emitted from the fireball, and from the lower levels of radiation that continue to arrive at the ground for about 1 minute during the mushroom cloud phase, termed "cloudshine". Lying down would also give protection from the even smaller fraction of radiation that changes direction, being randomly reflected and scattered by the air ("skyshine"). Approximately one and one half inches (37 mm) of steel, its half-value thickness, will reduce the gamma dose by half.
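As a minimal sketch of the shielding arithmetic just quoted: each half-value thickness halves the transmitted dose, so attenuation is exponential in the number of half-value layers.

```python
# Gamma attenuation by half-value layers: each half-value thickness
# (~37 mm of steel for this spectrum, per the text) cuts the dose in
# half, so transmission = 0.5 ** (thickness / half_value_thickness).

STEEL_HVL_MM = 37.0  # half-value thickness quoted above

def transmitted_fraction(shield_mm: float, hvl_mm: float = STEEL_HVL_MM) -> float:
    """Fraction of the gamma dose passing through a shield of given thickness."""
    return 0.5 ** (shield_mm / hvl_mm)

for mm in (37, 74, 148):  # 1, 2 and 4 half-value layers
    print(f"{mm:>3} mm steel -> {transmitted_fraction(mm):.3f} of the dose")
```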
Efficacy during a nuclear explosion:
The effective gamma ray energy of the cloudshine is not especially high, at about 200 keV. Unlike the relatively low-yield, or low explosive energy, "A-bombs" dropped on Hiroshima and Nagasaki, which did result in a sizable proportion of injuries from prompt radiation, higher yield "hydrogen bombs" (thermonuclear weapons) are not expected to result in very many such injuries, as the range at which the ionizing radiation from higher yield devices is of primary concern is already well inside the hyper-lethal blast and flash burn areas.
Efficacy during a nuclear explosion:
Delayed nuclear radiation, "fallout" Apart from the intrinsic "prompt effects" of nuclear detonations (thermal flash, blast and initial radiation), if any part of the fireball of the nuclear detonation contacts the ground, in what is known as a surface burst, another, comparatively slowly increasing, radiation hazard will also begin to form in the immediate area. Putting aside the possibility of the detonation occurring during an already established heavy rainstorm, this life-threatening "delayed nuclear radiation" forms only when the altitude, or "height of burst", of the explosion is such that the fireball and the buoyant updrafts it creates sufficiently heat and lift the soil that was below it into the core of the mushroom cloud. Once there, the very hot radioactive isotope products of the nuclear reactions that produced the explosion begin to coalesce with the cooler and denser soil. Upon cooling, this mixture begins to locally fall out, or precipitate out, of the mushroom cloud, falling back to the surface of the earth near the point of detonation over the next few minutes and hours. While the duck and cover countermeasure in its most basic form offers small to negligible protection against fallout, the technique assumes that after the effects of the blast and initial radiation subside, the latter being no longer a threat after about twenty seconds to 1 minute post-detonation, a person who ducks and covers will realize when it is wise to cease ducking and covering and to then seek out a more sheltered area, like an established or improvised fallout shelter, to protect themselves from the ensuing potential local fallout danger, as depicted in the film.
Efficacy during a nuclear explosion:
After all, "Duck and Cover" is a first response countermeasure only, in much the same way that "Drop, Cover and Hold On" is during an earthquake: the advice has served its purpose once the earthquake has passed, at which point other dangers, like a tsunami or fallout, may be looming, requiring movement to high ground or radiation protection, respectively.
Efficacy during a nuclear explosion:
However, if such a shelter is unavailable, the person is advised to follow the shelter in place protocol or, if given, emergency evacuation advice. Evacuation orders would entail exiting the area completely by following a path perpendicular to the wind direction, and therefore perpendicular to the path of the fallout plume; taking upper atmospheric winds into account, surface winds alone are not to be depended upon as indicative of the direction of fallout movement. "Sheltering in place" is staying indoors, preferably in a tightly sealed basement or internal room, for a number of hours; the oxygen supply available in such a scenario is more than sufficient for 3+ hours in even the smallest average room, under the assumption that the improvised seal is perfect, until carbon dioxide levels begin to reach unsafe values and necessitate unsealing the room for a number of minutes to create an air change. In the era the advice was originally given, the most common nuclear weapons were comparable in yield to the US Fat Man and Soviet Joe-1. The most far-reaching dangers that initially come from the nuclear explosion of this, and higher, yield weapons as airbursts are the initial flash/heat and blast effects, not fallout. This is because when nuclear weapons are detonated to maximize the range of building destruction, that is, to maximize the range of surface blast damage, an airburst is the preferred nuclear fuzing height, as it exploits the Mach stem phenomenon. This blast wave phenomenon occurs when the blast reaches the ground and is reflected; below a certain reflection angle the reflected wave and the incident wave merge and form a reinforced horizontal wave, known as the "Mach stem" (named after Ernst Mach), a form of constructive interference that extends the range of high pressure. Air-burst fuzing also increases the range at which people's skin has a line of sight to the nuclear fireball. However, as a result of the high altitude of the explosion, most of the radioactive bomb debris is dispersed into the stratosphere, placing a great column of air between the vast majority of the bomb debris/fission reaction products and people on the ground for a number of crucial days before it falls out of the atmosphere in a comparatively dilute fashion. This "delayed fallout" is hence not an immediate concern to those near the blast. The only time that fallout is rapidly concentrated in a potentially lethal fashion in the local/regional area around the explosion is when the nuclear fireball makes contact with the ground surface, an explosion aptly termed a surface burst. For example, in the Operation Crossroads tests of 1946 on Bikini Atoll, using two explosive devices of the same design and yield, the first, test Able (an air burst), produced little local fallout, but the infamous test Baker (a shallow underwater burst near the surface) left the local test targets badly contaminated with radioactive fallout.
Efficacy during a nuclear explosion:
Widespread radioactive fallout was not recognized as a threat among the public at large until 1954, with the widely publicized story of the 15-megaton surface burst of the experimental test shot Castle Bravo on the Marshall Islands. The explosive yield of the Castle Bravo device, the Shrimp, was unexpectedly high, and correspondingly higher amounts of local fallout were produced. When this fallout, carried by the wind, arrived at their location, it caused the 23 crew members of the Japanese fishing boat Lucky Dragon to come down with acute radiation sickness of varying degrees of seriousness; complications in the treatment of the ship's radio operator months after the exposure resulted in his death.
Efficacy during a nuclear explosion:
It is, however, unlikely that a well-funded belligerent with nuclear weapons would waste weapons by fuzing them to explode below or on the surface, as test shot Baker and Castle Bravo respectively were. Instead, to maximize the range of city blast destruction and immediate death, an air burst is preferred, as the roughly 500 meter burst heights of the only nuclear weapons used on cities, Little Boy and Fat Man, also attest. Moreover, with air bursts the total amount of radiation contained in the fallout, in units of activity (becquerels), is somewhat less than would be released from a surface or subsurface burst, as, depending on the height of burst, little to no neutron activation or neutron-induced gamma activity of soil occurs from air bursts.
Efficacy during a nuclear explosion:
Therefore, the initial danger from concentrated local/"early" fallout (which takes on the color of the soil around the fireball, commonly with a dusty pumice or ash-like appearance, as experienced by the crew of the Lucky Dragon) remains low in a global nuclear war scenario; instead, the fallout most survivors would be likely to encounter in this scenario is the less dangerous but widely spread global/"late" fallout. An air burst at optimum height will produce a negligible amount of early fallout. A notable comparison to underline this is found when one compares the 50 megaton air burst Tsar Bomba, which produced no concentrated local/early fallout and thus no known deaths from radiation, with the 15 megaton surface burst Castle Bravo, which, due to the local fallout produced, was implicated in the death of 1 of the 23 crew of the Lucky Dragon and made the entire Bikini Atoll unfit for further nuclear testing until enough time had elapsed for the intensity of the radiation field to decay to acceptable levels.
Efficacy during a nuclear explosion:
Furthermore, regardless of whether a nuclear attack on a city is of the surface or air-burst variety, or a mixture of both, the advice to shelter in place in the interior of well-built homes, or if available fallout shelters, as suggested in the film Duck and Cover, will drastically reduce one's chance of absorbing a hazardous dose of radiation. A real-world example occurred after the Castle Bravo test, where, in contrast to the crew of the Lucky Dragon, the firing crew that triggered the explosion safely sheltered in their firing station until a number of hours had passed and the radiation levels outside had fallen to dose rates safe enough for an evacuation to be considered. The comparative safety experienced by the Castle Bravo firing crew served as a proof of concept to civil defense personnel that shelter in place (or "buttoning up", as it was known then) is an effective strategy for mitigating the potentially serious health effects of local fallout. The minimum typical protection factor of the fallout shelters in US cities is 40 or more; in many cases these shelters are nothing more than the interiors of pre-existing well-built buildings that have been inspected and, after their protection factors were calculated, re-purposed as fallout shelters. A protection factor of at least 40 means that the radiation shielding provided by the shelter reduces the radiation dose experienced to at least 40 times less than would be experienced outside the shelter with no shielding; "protection factor" is equivalent to the modern term "dose reduction factor".
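The protection factor is simple division: the dose received inside is the outdoor dose divided by the PF. A minimal sketch with hypothetical dose numbers (not figures from the source):

```python
# A fallout shelter's protection factor (PF) is the ratio of the
# unshielded outdoor dose to the dose received inside, so
# inside_dose = outside_dose / PF. The dose values are illustrative.

def inside_dose(outside_dose_r: float, protection_factor: float) -> float:
    """Dose received inside a shelter with the given protection factor."""
    return outside_dose_r / protection_factor

outdoor_dose_r = 400.0  # hypothetical accumulated outdoor dose, roentgens
for pf in (2, 10, 40):
    print(f"PF {pf:>2}: {inside_dose(outdoor_dose_r, pf):>6.1f} R inside")
```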
Efficacy during a nuclear explosion:
During the first hour after a nuclear explosion, radioactivity levels drop precipitously; they are further reduced by about 90% after another 7 hours and by about 99% after 2 days. An accurate rule of thumb, applicable in the period of days to a few weeks post-detonation, that approximates the radioactive dose rate generated by the decay of the myriad isotopes present in nuclear fallout is the "7/10 rule": for each 7-fold increase in time, the dose rate drops by a factor of 10. For example, assuming the fallout process has ended 24 hours post-detonation and the dose rate is 50 roentgens per hour, which would be lethal after a few hours of exposure, then 7 days after detonation the dose rate will be 5 R/hr, and 49 days after detonation (7×7 days) it will be 0.5 R/hr, at which point no special precautions would need to be taken; venturing outside into that dose rate for an hour or two would pose a close to negligible health hazard, permitting an evacuation with acceptable safety to a known contamination-free zone. Following a surface-burst nuclear detonation, approximately 80 percent of the fallout would be deposited on the ground during the first 24 hours. Some agencies that promoted "evacuate immediately" guidance as a response to potentially lethal fallout arriving, advice which may have been influenced by the assumption that simplistic, single wind-driven, cigar/Gaussian-shaped fallout contours would be representative of reality, have since retracted this advice, which can actually result in higher radiation exposures, as it would put people outdoors and in harm's way when radiation levels are highest. The Modeling and Analysis Coordination Working Group (MACWG), set up to resolve conflicting advice given by various agencies, has reaffirmed that the best blanket advice for reducing the number of casualties by the greatest amount is: "Early, adequate sheltering followed by informed, delayed evacuation." Expert advice published in the 2010 document Planning Guidance for Response to a Nuclear Detonation is to shelter in place, in an area away from building fires, for at least 1 to 2 hours following a nuclear detonation and the arrival of fallout; the greatest benefit, assuming personnel are in a building with a high protection factor, comes from sheltering for no less than 12 to 24 hours before evacuation. Sheltering for the first few hours can therefore save lives. Indeed, death and injury from local fallout is regarded by experts as the most preventable of all the effects of a nuclear detonation, being simply dependent on whether personnel know how to identify an adequate shelter when they see one and enter it quickly, with the number of people potentially saved cited as in the hundreds of thousands, or even higher if the remaining occupants of the city are made aware of the contaminated areas, by emergency systems, within hours of the event's aftermath. Between 2009 and 2013 a further iteration on sheltering in place was made to determine, by computer analysis and with a summary of prior studies and guidance, the optimal improvised fallout-shelter residence times following a nuclear detonation.
It was found that individuals should quickly get into the best intact building within 5 minutes' travel time of their position following the detonation and stay there for at least 30 minutes before venturing out to find a shelter with a higher protection factor that is more than 10 minutes' travel away. However, although this would be effective in cases where the initial building's protection factor is less than about 10, it requires a high degree of individual situational awareness that may be optimistic to assume following the shock of a nuclear detonation. If a building with a PF of 20 or more is nearby, such as the fallout shelters depicted in the film, in the vast majority of fallout circumstances it would not be advisable to leave it until 3+ hours have elapsed following the initial arrival of the local fallout. Following a single IND (improvised nuclear device) detonation in the US, the National Atmospheric Release Advisory Center (NARAC) would, within minutes to at most hours after the detonation, have a reliable prediction of the fallout plume size and direction. Armed with this prediction, they would then begin attempting to corroborate it with readings from radiation survey meters flown close to the ground over the affected area by helicopter or drone (UAV) aircraft on intelligence gathering missions, which would also follow within tens of minutes to at most hours after the detonation. Once a general outline and direction of the fallout is determined, this information would be disseminated to citizens sheltering in place by means of loudspeaker, radio, cell phone, etc., with a "fallout app" containing maps for smartphones regarded as an area of interest, so that survivors do not inadvertently evacuate downwind further into harm's way. A number of questions the affected public are likely to have after a nuclear detonation have been compiled and pre-answered to help communications in the immediate aftermath.
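The "7/10 rule" discussed above is the rule-of-thumb form of the approximate t^-1.2 decay law for mixed fission products (since 7^-1.2 is roughly 1/10). A minimal sketch reproducing the 50 R/hr worked example from the text:

```python
# The "7/10 rule": each 7-fold increase in time after detonation
# reduces the fallout dose rate about 10-fold, the rule-of-thumb form
# of the t^-1.2 decay law for mixed fission products (7 ** -1.2 ~ 0.097).

def dose_rate(r1_per_hr: float, t_hours: float) -> float:
    """Dose rate at t hours, given the rate r1 at 1 hour (t^-1.2 decay)."""
    return r1_per_hr * t_hours ** -1.2

r_24h = 50.0              # example from the text: 50 R/hr one day after detonation
r1 = r_24h / 24 ** -1.2   # back out the implied 1-hour reference rate
for days in (1, 7, 49):
    print(f"day {days:>2}: ~{dose_rate(r1, days * 24):.1f} R/hr")
```

Running this gives approximately 50, 5 and 0.5 R/hr at days 1, 7 and 49, matching the worked example.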
Efficacy during a nuclear explosion:
Nuclear electromagnetic pulse, non-lethal With respect to the other, non-lethal weapon effects from an IND detonated on or near the surface: the detonation's blast wave would likely produce a momentary electric grid blackout, due to the loss of the large portion of a city's electrical equipment that draws power/electrical load, while the electromagnetic pulse (EMP) from a surface/ground-burst explosion would cause little damage outside the blast area, so cell phone towers that survive the blast should be capable of carrying communications. But if communications during the 9/11 attacks or after a major hurricane are anything to go by, even where the cell network towers survive, the service would be overloaded (a mass call event) and thereby made useless soon after; however, if prior arrangements are made between the cell network and emergency responders to give responders priority and bar access to all other individuals, it may remain an effective service.
Efficacy during a nuclear explosion:
The Civil Defense (CD) shelters, as depicted in the film, were stocked for such an eventuality. They contained, among other things, at least one ruggedized CDV-715 radiation survey meter and one CD emergency radio receiver: the former would be used to facilitate a safe delayed evacuation regardless of outside help, while, if communications continued, the radio receiver was to inform occupants of the outside situation as it developed.
Long-term survival:
The dubious assumption that "only the cockroaches" would survive the post-war fallout environment was frequently used in attempts to criticize Duck and Cover during the height of the Cold War, at a time when discussion of a total war involved the much greater US-Soviet arsenals of nuclear weapons then in existence. However, even at that time this assumption was shown to be misguided, as scientifically detailed in works including the 1988 book Would the Insects Inherit the Earth and Other Subjects of Concern to Those Who Worry About Nuclear War. In material terms, the primary life-threatening risks survivors and downwinders could face in the long term after a nuclear explosion or war are the "nuclear famine" issue, the potential continuation of hostilities by conventional warfare, and radioactive contamination of the food and water supplies, disrupting the normal distribution and consumption of these vital goods.
Long-term survival:
Cold War continuity of government planners and civil defense organizations in general have always had this disruption, or "nuclear famine" issue, in mind, as widespread infrastructure destruction producing starvation conditions was also seen during and after WWII. Papers such as On Reorganizing After Nuclear Attack, and Survival of the relocated population of the U.S. after a nuclear attack by Nobel Prize winner Eugene Wigner, detail the thought and attention that went into long-term survival, relocation and reconstruction. Numerous human and agricultural decontamination countermeasures exist for the two most persistent and biologically significant isotopes, cesium-137 and strontium-90, and for long-lived fallout contamination in general. The most visible and immediate measure that will prevent a potentially large dose to the public is the use of shielded bulldozers to skim off the layer of topsoil on which the fallout has settled, a restorative practice fielded upon the creation of Lake Chagan. The erection of human decontamination tents at the entrances of buildings and, where lower levels of risk exist, the use of clean room air showers as a form of contamination control, preventing the spread of dust-borne radionuclides into building interiors, would also be advisable to reduce the elevated risk of radiation-induced cancer that would otherwise occur. Air showers may be paired with electrostatic precipitators that attract the dust to collection plates, forestalling a re-suspension that might otherwise be inhaled. Moreover, the open access radioecology research on decontamination and conventional agriculture in the Chernobyl-Polesie State Radioecological Reserve and around the Fukushima accident would be drawn on in the event of any widespread fallout contamination, with particular emphasis on bioremediation of radionuclides from soil and aquifers. Although less of a hazard than external exposure, internal contamination, which may be identified by assessment in a whole-body counting session, may in the long term, as now, be treated with binding-and-excretion-promoting chelation therapy, with ammonium ferric hexacyanoferrate (AFCF)/"Giese salt", Radiogardase and DTPA all proven effective. Comparable binding/chelation treatment systems developed and deployed under the Fukushima reactor-water decontamination mandate include the mobile reverse osmosis Landysh water treatment ship, the zeolite-rock based "Actiflo", the "SARRY" ion exchange cesium removal system based on silicotitanate "IONSIV" crystalline rock, and most recently the multi-nuclide removal system (NURES) targeting some 62 radionuclides, frequently referred to as the Advanced Liquid Processing System (ALPS). In 2016 tritiated water also began to be filtered. Researchers at the American Chemical Society have further suggested that aquaponics would be an ideal, socially acceptable solution in the post-contamination environment, as it does not use soil to grow fish and vegetables, thus alleviating the radiophobia surrounding food that always follows long-lived contamination incidents.
Others have approached the food problem from a far more extreme view, assuming far worse events such as comet impacts, as discussed in the book Feeding Everyone No Matter What. Suggestions include: natural-gas-digesting bacteria, the best known being Methylococcus capsulatus, which is presently used as a feed in fish farming; bark bread, a long-standing famine food made from the edible inner bark of trees, once a part of Scandinavian history during the Little Ice Age; and the expansion of leaf protein concentrate and larger-scale wood-digesting fungiculture for fungal protein, most commonly shiitake mushrooms and honey fungi, as they do not need sunlight or soil to grow. More advanced techniques mentioned, though not presently economical, include variations of wood or cellulosic biofuel production, which typically already creates edible sugars/xylitol from inedible cellulose as an intermediate product before the final step of alcohol generation.
Historical and psychological assessment:
Some historians and filmmakers, exemplified by the 1982 film The Atomic Cafe, have sought to dismiss civil defense advice as mere propaganda, despite, as other historians have found, detailed scientific research programs behind the much-mocked government civil defense pamphlets of the 1950s and 1960s, including the prompt advice of ducking and covering. The exercises of Cold War civil defense were seen by historian Guy Oakes in 1994 as having less practical use than psychological use: to keep the danger of nuclear war high in the public mind, while also attempting to assure the American people that something could be done to defend against nuclear attack. Moreover, civil defense was not solely a US-UK or nuclear-club phenomenon; countries with long histories of neutrality, such as Switzerland, are "foremost in their civil defence precautions." The Swiss civil defense network has an overcapacity of nuclear fallout shelters for the country's population size, and by law, new homes still had to be built with a fallout shelter as of 2011.
Tornadoes:
Ducking and covering also has applications in other, naturally occurring disasters. In states prone to tornadoes, school children are urged to "duck and cover" against a solid inner wall of a school if time does not permit seeking better shelter, such as a storm cellar, during a tornado warning. The tactic is also widely practiced in schools in states along the West Coast of the United States, where earthquakes are commonplace. Ducking and covering in either scenario would theoretically afford significant protection from falling or flying debris.
Earthquakes:
In an earthquake, which is generally of natural tectonic origin (although earthquakes can be artificially generated by the detonation of a nuclear explosive device that transmits sufficient energy into the ground, an extreme example being the Operation Grommet Cannikin test of the 5 megaton W71 warhead, exploded deep underground on Amchitka Island in 1971, which produced a seismic shock of magnitude 7.0 on the Richter scale), people are encouraged, regardless of the cause of the quake, to "drop, cover and hold on": to get underneath a piece of furniture, cover their heads and hold on to the furniture. This advice also encourages people not to run out of a shaking building, because a large majority of earthquake injuries are broken bones from people falling and tripping during shaking. While it is unlikely that "drop, cover and hold on" will protect against a building collapse, building codes in earthquake-prone areas of the United States require buildings to withstand quakes up to an expected magnitude well enough to allow evacuation after the shaking stops, and thus collapse of these structures, even during an earthquake, is rare. "Drop, cover and hold on" may not be appropriate for all locations or building types, but the Red Cross advises that it is the appropriate emergency response to an earthquake in the United States. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nickel(II) iodide**
Nickel(II) iodide:
Nickel(II) iodide is an inorganic compound with the formula NiI2. This paramagnetic black solid dissolves readily in water to give bluish-green solutions, from which crystallizes the aquo complex [Ni(H2O)6]I2. This bluish-green colour is typical of hydrated nickel(II) compounds. Nickel iodides find some applications in homogeneous catalysis.
Structure and synthesis:
The anhydrous material crystallizes in the CdCl2 motif, featuring octahedral coordination geometry at each Ni(II) center. NiI2 is prepared by dehydration of the pentahydrate. NiI2 readily hydrates, and the hydrated form can be prepared by dissolution of nickel oxide, hydroxide, or carbonate in hydroiodic acid. The anhydrous form can be produced by treating powdered nickel with iodine.
Applications in catalysis:
NiI2 has some industrial applications as a catalyst in carbonylation reactions. It also has niche uses as a reagent in organic synthesis, especially in conjunction with samarium(II) iodide. Like many nickel complexes, those derived from hydrated nickel iodide have been used in cross coupling. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cadmium tetrafluoroborate**
Cadmium tetrafluoroborate:
Cadmium tetrafluoroborate is an ionic chemical compound with the formula Cd(BF4)2. It is a crystalline solid, which is colorless and odorless. Cadmium tetrafluoroborate is most frequently used in the industrial production of high-strength steels, its purpose being to prevent hydrogen absorption, a source of post-production cracking, in the treated steels. Another application of the chemistry of cadmium tetrafluoroborate is fine-tuning the size of cadmium telluride nanomaterials.
Cadmium tetrafluoroborate:
While the use of cadmium tetrafluoroborate is limited, concerns about limited or chronic exposure to this substance should be brought to the attention of a physician or other trained medical staff. Exposure to cadmium tetrafluoroborate, via ingestion, contact with the skin or mucous membranes, or inhalation can have lasting and harmful health effects.
Preparation:
Cadmium tetrafluoroborate may be prepared from the reaction between an aqueous solution of fluoroboric acid and cadmium carbonate or cadmium oxide: 2 HBF4(aq) + CdCO3(s) → Cd(BF4)2(aq) + CO2(g) + H2O(l), and 2 HBF4(aq) + CdO(s) → Cd(BF4)2(aq) + H2O(l). It is also possible to prepare cadmium tetrafluoroborate through an oxidation-reduction reaction implementing nitrosyl tetrafluoroborate: Cd(s) + 2 NOBF4 → Cd(BF4)2 + 2 NO(g).
Structure:
Cadmium tetrafluoroborate is an ionic compound formed from the two ionic species Cd2+ and BF4−. At room temperature it forms colorless, odorless crystals which are soluble in polar solvents such as water or ethanol. At room temperature the hydrated salt, Cd(BF4)2·6H2O, exists in a monoclinic crystal system, though this is temperature dependent. Two first-order phase transitions have been noted in the literature for this material, one each at 324 K and 177 K, representing a change in the crystal system from monoclinic to trigonal and from trigonal to either monoclinic or triclinic, respectively. The quasi-trigonal geometry of the cadmium tetrafluoroborate hexahydrate crystal is unique among the first-row transition metal tetrafluoroborates and perchlorates, which have predominately hexagonal structures.
Structure:
Related transition-metal complexes The Cd2+ species of cadmium tetrafluoroborate may associate with various ligands to form transition-metal complexes. The structural formulas and geometries of such complexes can vary depending upon the coordination number of the complex and the electronic properties of the ligands (see also HSAB theory). However, two general forms may predominate: (i) [Cd(L)n(BF4)m], where L and BF4− are ligands in the inner sphere, and (ii) [Cd(L)n](BF4)2, where BF4− is located in the outer sphere; for both, n=1,2,…,6. The literature contains reports of distorted octahedral geometries for cadmium tetrafluoroborate complexes with nitrogen-containing ligands such as pyrazoles, imidazoles and porphyrins. Given the structural formulas noted in the literature for cadmium tetrafluoroborate complexes, however, such as [Cd(L)4(BF4)2], it is likely that tetrahedral geometries are also possible in such complexes.
Uses:
Electroplating The most significant industrial use of Cd(BF4)2 is in the electroplating of high-strength steels. Here, species such as cadmium tetrafluoroborate (or Cd-Ti or CdCN) are deposited on the surface of steels in an electroplating process, which inhibits absorption of hydrogen into the surface of the steels, a source of cracking following baking of the metal. Optimization of the electroplating process, by adjusting electrolyte concentrations in cadmium tetrafluoroborate mixes, has been explored in the literature. Among other methods of electroplating, cadmium tetrafluoroborate baths have middling efficiency. It has, for instance, been demonstrated that traditional cyanide baths (e.g. CdCN or ZnCN) and variants thereof provide more efficient distribution of current density during electroplating, resulting in steels which can bear greater loads.
Uses:
Nanomaterials A method of etching CdTe nanocrystals, which removes Cd from the surface of the nano-structures via attack by tetrafluoroborate anions, has been reported in the literature. While the presence of Cd-F surface bonds and the dissociation of Cd from the surface of the nano-structures are clear from the investigation, complex formation of Cd with BF4− in solution was not discussed, though it may be inferred from the spectrophotometric results.
Uses:
Determination of boron in steels by solvent extraction Methodology has been reported for the determination of boron concentration in steels using cadmium tetrafluoroborate complex formation during solvent extraction to facilitate indirect atomic absorption measurements. Tetrafluoroborate, formed by acid extraction of boron from a steel sample as boric acid, associates with a transition metal complex of Cd2+ and forms a complex which is measurable by atomic absorption spectroscopy. Similar procedures have been implemented for the same purpose using other transition metals, and for the determination of boron in high-purity silicon using other cadmium tetrafluoroborate transition metal complexes.
Hazards and Safety:
Biological hazards, safety, and treatment Cadmium tetrafluoroborate is a caustic substance, particularly when in aqueous solution. Multiple routes of exposure, such as ingestion, inhalation, or contact with the skin or mucous membranes, are available through contact with aqueous cadmium tetrafluoroborate. Target biological systems following exposure include the lungs, kidneys, and liver. Symptoms of cadmium tetrafluoroborate exposure include nausea, vomiting, fever, irritation of the mucous membranes (e.g. upper respiratory tract, eyes) and skin, coughing, wheezing, or difficulty breathing. The mechanism of toxicity of this substance is related to cadmium poisoning and exposure to borates and hydrofluoric acid. The compound functions in solution as a weakly acidic inorganic salt, neutralizing bases. After initial exposure, thorough rinsing of the affected area with water is recommended. However, seeking medical attention is strongly advised, as treatment for exposure to Cd- or F-containing compounds such as cadmium tetrafluoroborate generally involves intravenous (I.V.) administration of calcium chloride and sodium bicarbonate for the purpose of maintaining blood pH and sequestering Cd2+ and BF4− in insoluble salts.
Hazards and Safety:
Chronic exposure Chronic exposure to this substance may have negative health consequences. According to its OSHA, IARC, and ACGIH ratings, cadmium tetrafluoroborate is recognized as a carcinogenic substance. Further effects of chronic exposure may include hypocalcaemia and edemas of the respiratory system.
Hazards and Safety:
Non-biological hazards and safety Although this compound is a negligible fire hazard, combustion of cadmium tetrafluoroborate produces hazardous decomposition products including cadmium/cadmium oxide and hydrogen fluoride. Therefore, cadmium tetrafluoroborate is stored out of direct light, in a cool environment, and away from other flammable materials. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Magnetic Joe**
Magnetic Joe:
Magnetic Joe is a puzzle-platform game developed for mobile phones by Hungarian studio Most Wanted Entertainment and published by HD publishing in 2006. The objective is to guide a magnetic metal ball, known as Joe, to a designated exit in each level using various magnetic forces.
Gameplay:
The game features a one-button control scheme where the player presses a single button, or touches the screen, to activate Joe's magnetism.
Gameplay:
When Joe passes near a magnetic cell, his magnetism activates. The player sees a lightning effect between Joe and the magnet, and Joe moves towards the magnet. As he moves closer to the magnet, his movement and rotation change. The player controls the ball's movement by timing when the magnetism is activated. Different magnets push or pull the ball in different directions and are marked accordingly.
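As a purely hypothetical sketch of this mechanic (none of the names, constants or physics below come from the game's actual implementation), the timing-based control can be modeled as a per-frame update that applies a magnet's pull only while the button is held and Joe is within range:

```python
# Hypothetical model of the one-button mechanic: while the button is
# held and Joe is inside a magnet's activation radius, the magnet
# accelerates him toward (polarity +1) or away from (polarity -1)
# itself. All names and constants are illustrative, not from the game.
from dataclasses import dataclass

@dataclass
class Magnet:
    x: float
    y: float
    radius: float    # activation range (the "lightning effect" zone)
    polarity: float  # +1 pulls Joe in, -1 pushes him away

def step(pos, vel, magnets, button_held, dt=1 / 60, strength=50.0):
    """Advance Joe one frame; magnetism acts only while the button is held."""
    x, y = pos
    vx, vy = vel
    if button_held:
        for m in magnets:
            dx, dy = m.x - x, m.y - y
            dist = (dx * dx + dy * dy) ** 0.5
            if 0 < dist <= m.radius:
                vx += m.polarity * strength * (dx / dist) * dt
                vy += m.polarity * strength * (dy / dist) * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# One frame with the button held near a pulling magnet:
pos, vel = step((0.0, 0.0), (1.0, 0.0), [Magnet(5.0, 0.0, 6.0, +1.0)], True)
```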
Gameplay:
Joe can move around by rolling and bouncing. However, frequent hazards on the map, such as spiked floors and walls, add a challenge to this form of movement.
Release:
The original game was released in 2006 for mobile phones. It includes 50 levels divided into 3 "worlds". Randomly generated 'secret' levels can be played by entering a code. Levels are generated based on the code entered.
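The code-based "secret" levels suggest a deterministic, seeded generator: the same code always produces the same level. A hypothetical illustration of that idea follows (the real generator is undocumented here; all names and tile symbols are invented):

```python
# Hypothetical illustration of code-seeded "secret" levels: the level
# layout is derived deterministically from the entered code, so the
# same code always yields the same level. Purely illustrative.
import random

def generate_secret_level(code: str, width=16, height=10):
    rng = random.Random(code)          # the entered code acts as the RNG seed
    tiles = [["." for _ in range(width)] for _ in range(height)]
    for _ in range(8):                 # scatter magnets and hazards
        x, y = rng.randrange(width), rng.randrange(height)
        tiles[y][x] = rng.choice("M^")  # M = magnet, ^ = spikes
    tiles[rng.randrange(height)][width - 1] = "E"  # designated exit
    return ["".join(row) for row in tiles]

print("\n".join(generate_secret_level("JOE42")))
```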
Reception:
The original game was well received by the gaming press, which praised its simple but effective game mechanics, and it won a 'best casual game' award in 2006. One review summarized: "Magnetic Joe is incredibly simple and incredibly addictive, the game can be frustrating and extremely rewarding. Absolutely brilliant."
Legacy:
Magnetic Joe 2 (mobile / J2ME) The second game added teleporters, a cannon, a lift, and breakable walls. New game characters are Josephine (Joe's girlfriend), Invisible Joe, Bad Joe, and Robot Joe. In "collect mode", the player must first find three Little Joes before the exits are activated and winning becomes possible. In "enemy mode", there are enemies to avoid in the levels. Magnetic Joe 2 also has a skateboarding minigame.
Legacy:
Magnetic Joe (Nintendo DSi) A Nintendo DSi version of the game added a story mode and local wi-fi multiplayer. In story mode, the player must complete levels through several "worlds" that feature enemies, obstacles, and bosses unique to that world. In the wi-fi multiplayer challenge, two players play on the same level at once.
Legacy:
There are three modes: Classic, Time, and Collect. Classic mode has the same rules as the original mobile game. In Time mode, levels have to be completed within a predefined time limit. In Collect mode, the player needs to collect special items before moving to the exit. Each of the game modes also features a Hard variation, in which the number of times Joe can touch an obstacle before losing is limited.
Legacy:
Magnetic Joe 1 & 2 (iPhone) Magnetic Joe 1 & 2 for iOS is a port of the original Magnetic Joe which adds online leaderboards. Each level is timed, and users can submit their times to an online leaderboard to beat the predefined 'developer time' for each level. By improving their times, players are able to unlock new playable characters. The game was removed from the App Store due to the demise of the publisher. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hard flaccid syndrome**
Hard flaccid syndrome:
Hard flaccid syndrome (HFS), also known as hard flaccid (HF), is a chronic painful condition characterized by a semi-rigid penis in the flaccid state, a soft glans in the erect state, pelvic pain, low libido, erectile dysfunction, erectile pain, pain on ejaculation, penile sensory changes (numbness or coldness), lower urinary tract symptoms, contraction of the pelvic floor muscles, and psychological distress. Other complaints include rectal and perineal discomfort, cold hands and feet, and a hollow or detached feeling inside the penile shaft. The majority of HFS patients are in their 20s–30s, and symptoms significantly affect their quality of life. Sufferers typically report the onset of symptoms after trauma due to a mishap during sexual intercourse or rough masturbation, specifically a traumatic injury at the base of the erect penis, possibly affecting the dorsal artery of the penis, the bulbourethral and the pudendal arteries, as well as the pudendal and dorsal nerves of the penis. Penile sensory and textural changes, as well as changes in appearance, are hallmarks of the condition and serve to distinguish HFS from classic chronic pelvic pain syndrome or BPH. Both biological and psychological influences contribute to the condition by altering the neurovascular supply to the muscles of the pelvic floor and penis. One theory proposes that HFS results from an initial stress which triggers an abnormal fight-or-flight response, increasing sympathetic stimulation to the muscles of the pelvis via the perineal branch of the pudendal nerve. In turn, a surge of adrenaline, noradrenaline and cortisol is released from the efferent nerve fibers, promoting increased blood flow to the bulbospongiosus, ischiocavernosus and levator ani muscles as well as sustained muscle contraction, which results in obstructed venous outflow from the penis via compression of the deep dorsal vein, and pelvic myoneuropathy secondary to neurogenic inflammation.
Treatment:
Treatment may include medications for pain management, pelvic floor physical therapy, biofeedback, and stress reduction techniques. Men experiencing anxiety or depression may benefit from counseling.
**Võ phục**
Võ phục:
Võ phục (Chữ Hán: 武服) is a Vietnamese term that refers to a martial arts uniform (which may include a ranking belt), mainly associated with Vietnamese martial arts, particularly Vovinam.
Usage:
The term võ phục may be used alone in the context of Vietnamese martial arts, and often refers specifically to the martial arts suit worn by practitioners of Vovinam. The Vietnamese alphabet pronunciation (and writing) of võ phục differs from another common Vietnamese term, vô phúc (無福), "bad luck."
Usage with other martial art forms:
Võ phục may be used before the accepted name of another martial art in order to refer to the uniform of that particular martial art. For example, võ phục Judo refers to the martial arts uniform used in Judo, known in Japanese as a Jūdōgi (wikt:柔道着). The term functions in the same way as "martial arts uniform of" in English: võ phục (martial arts uniform of) Judo.
Usage:
History:
From 1938 to 1964, there was no official coloration associated with võ phục. However, after a meeting between Vietnamese martial arts masters in 1964, indigo/deep blue was chosen as the official color of Vietnamese martial arts uniforms.
Usage:
The official color was not necessarily adopted by all practitioners, as evidenced from 1973 to 1990, when many practitioners under a separate development of Vovinam, the "Viet Vo Dao Federation," wore black uniforms. By 1990, however, after further meetings between councils, indigo had been adopted by the majority of Vovinam practitioners in and outside of Vietnam. Deep blue/indigo (Vietnamese: lam) is now the internationally accepted color of võ phục for Vietnamese martial arts.
Construction:
The võ phục of Vovinam are very similar, if not identical, to the keikogi of Japanese martial arts. Both the top and bottom of the uniform are constructed in much the same way as keikogi, and the two would likely be interchangeable if not for obvious differences in coloration.
The thickness of võ phục varies by school, preference, or size of a practitioner.
Wearing:
As with the Japanese keikogi, the võ phục top should be worn with the left front panel tied over the right. Pants are worn the same way as Japanese keikogi pants, and are adjusted according to the preference of the practitioner by loosening or tightening the appropriate fitting cords provided.
Belt:
The võ phục is often worn with the corresponding ranking belt (Vietnamese wikt:đai, wikt:帶) of the practitioner around the waist. Belts worn with võ phục for Vovinam are similar to those of Japanese martial arts in that they are constructed using the same techniques and materials.
**Nonmetal**
Nonmetal:
Nonmetal may refer to:
Nonmetal (chemistry), a chemical element with relatively low density and high electronegativity
Nonmetal (astrophysics), a term referring only to the elements hydrogen and helium
**Precision approach radar**
Precision approach radar:
Precision approach radar (PAR) is a type of radar guidance system designed to provide lateral and vertical guidance to an aircraft pilot for landing, until the landing threshold is reached. Controllers monitoring the PAR displays observe each aircraft's position and issue instructions to the pilot that keep the aircraft on course and glidepath during final approach. After the aircraft reaches the decision height (DH) or decision altitude (DA), further guidance is advisory only. The overall concept is known as ground-controlled approach (GCA), and this name was also used to refer to the radar systems in the early days of its development.
Precision approach radar:
PAR radars use a unique type of radar display with two separate "traces", separated vertically. The upper trace shows the elevation of a selected aircraft compared to a line displaying the ideal glideslope, while the lower shows the aircraft's horizontal position relative to the runway midline. GCA approaches normally start with the controller relaying instructions to bring the aircraft onto the glidepath, and then issuing any corrections needed to bring it onto the centerline.
Precision approach radar:
Precision approach radars are most frequently used at military air traffic control facilities. Many of these facilities use the AN/FPN-63, AN/MPN, or AN/TPN-22. These radars can provide precision guidance from a distance of 10 to 20 miles down to the runway threshold. PAR is mostly used by the Navy, as it does not broadcast directional signals which might be used by an enemy to locate an aircraft carrier.
Non-traditional PAR using SSR transponder reply:
There are systems that provide PAR functionality without using primary radar. These non-traditional PAR systems use transponder multilateration, triangulation and/or trilateration.
Non-traditional PAR using SSR transponder reply:
One such system, the Transponder Landing System (TLS), precisely tracks aircraft using the mode 3/A transponder response received by antenna arrays located near the runway. These antennas are part of a measurement subsystem used to precisely determine the aircraft's 3-dimensional position using time-of-arrival (TOA), differential time-of-arrival (DTOA) and angle-of-arrival (AOA) measurement techniques. The aircraft position is then displayed on a high-resolution color graphics terminal that also shows the approach centerline and the glide path. A GCA controller is then able to use this screen for reference to issue GCA instructions to the pilot.
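As a rough illustration of the position-fixing step, the sketch below recovers a transmitter position from TOA measurements at fixed ground antennas via nonlinear least squares. It is a minimal toy, not the TLS implementation: the antenna layout, the noise-free measurements, and the solver choice are all illustrative assumptions.

```python
# A minimal sketch (author's construction, not the TLS algorithm) of fixing
# a transponder position from time-of-arrival measurements at fixed ground
# antennas via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical antenna positions near a runway (x, y, z in metres).
antennas = np.array([
    [0.0,      0.0,  5.0],
    [800.0,  600.0,  5.0],
    [1600.0,   0.0,  5.0],
    [800.0, -600.0, 30.0],
])

def residuals(pos, toas):
    # Predicted range to each antenna minus the measured range (TOA * c).
    return np.linalg.norm(antennas - pos, axis=1) - toas * C

# Simulate noise-free TOAs for an aircraft on final, then recover its position.
true_pos = np.array([1200.0, 100.0, 250.0])
toas = np.linalg.norm(antennas - true_pos, axis=1) / C

fit = least_squares(residuals, x0=np.array([1000.0, 0.0, 100.0]), args=(toas,))
print(fit.x)  # ~ [1200, 100, 250]; a real system must also handle noise and geometry
```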
Non-traditional PAR using SSR transponder reply:
The signal strength of the secondary surveillance radar subsystem of a non-traditional PAR is not significantly attenuated by rain, since its frequency lies in L-band, a long-range band. Therefore, a non-traditional PAR does not experience noticeable rain fade, and the TLS has an operational range of 60 nautical miles.
The system is cooperative-dependent: in the case of a transponder failure, no detection of that aircraft is provided.
Flight inspection of the PAR:
A traditional PAR flight inspection procedure is performed without a navigation signal available to compare directly to a truth reference. A traditional PAR is flight inspected by comparing written notes between two observers, one taking notes at a truth reference system such as a theodolite and the other observer taking notes while observing the radar console; see ICAO Document 8071. The Transponder Landing System (TLS) non-traditional PAR can transmit an ILS signal that corresponds to the aircraft position relative to the precision approach. Therefore, the graphical depiction can be directly verified using Instrument Landing System (ILS) flight inspection techniques. This direct measurement removes some ambiguity from the PAR flight inspection process.
**Photometric system**
Photometric system:
In astronomy, a photometric system is a set of well-defined passbands (or optical filters), with a known sensitivity to incident radiation. The sensitivity usually depends on the optical system, detectors and filters used. For each photometric system a set of primary standard stars is provided.
Photometric system:
A commonly adopted standardized photometric system is the Johnson-Morgan or UBV photometric system (1953). At present, there are more than 200 photometric systems. Photometric systems are usually characterized according to the widths of their passbands:
broadband (passbands wider than 30 nm, of which the most widely used is the Johnson-Morgan UBV system)
intermediate band (passbands between 10 and 30 nm wide)
narrow band (passbands less than 10 nm wide)
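A trivial sketch of that width-based classification follows; the thresholds are the 30 nm and 10 nm boundaries quoted above, and the example width is illustrative.

```python
# Classify a photometric passband by its width, per the thresholds above.
def classify_passband(width_nm: float) -> str:
    if width_nm > 30:
        return "broadband"
    if width_nm >= 10:
        return "intermediate band"
    return "narrow band"

print(classify_passband(90))  # a ~90 nm-wide filter -> "broadband"
```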
Photometric letters:
Each letter designates a section of light of the electromagnetic spectrum; these cover well the consecutive major groups: near-ultraviolet (NUV), visible light (centered on the V band), near-infrared (NIR) and part of mid-infrared (MIR). The letters are not standards, but are recognized by common agreement among astronomers and astrophysicists. The use of the U, B, V, R, I bands dates from the 1950s, being single-letter abbreviations. With the advent of infrared detectors in the following decade, the J to N bands were labelled following on from the near-infrared's closest-to-red band, I.
Photometric letters:
Later the H band was inserted, then Z in the 1990s and finally Y, without changing the earlier definitions. Hence, H is out of alphabetical order relative to its neighbours, while Z and Y are reversed relative to the alphabetical order of the longer-wavelength sub-series which dominates current photometric bands.
Combinations of these letters are frequently used; for example, the combination JHK has been used more or less as a synonym of "near-infrared", and appears in the title of many papers.
Filters used:
Filters currently used by other telescopes and organizations. Units of measurement: Å = ångström, nm = nanometre, μm = micrometre.
**Item-total correlation**
Item-total correlation:
The item-total correlation test arises in psychometrics in contexts where a number of tests or questions are given to an individual and where the problem is to construct a useful single quantity for each individual that can be used to compare that individual with others in a given population. The test is used to see if any of the tests or questions ("items") do not have responses that vary in line with those for other tests across the population. The summary measure would be an average of some form, weighted where necessary, and the item-total correlation test is used to decide whether or not responses to a given test should be included in the set being averaged. In some fields of application such a summary measure is called a scale.
The test:
An item-total correlation test is performed to check if any item in the set of tests is inconsistent with the averaged behaviour of the others, and thus can be discarded. The analysis is performed to purify the measure by eliminating 'garbage' items prior to determining the factors that represent the construct; that is, the meaning of the averaged measure. It is supposed that the result for a particular test on a given individual is initially used to produce a score, where the scores for different tests have a similar range across individuals. An overall measure for an individual would be constructed as the average of the scores for a number of different tests. A check on whether a given test behaves similarly to the others is done by evaluating the Pearson correlation (across all individuals) between the scores for that test and the average of the scores of the remaining tests that are still candidates for inclusion in the measure. In a reliable measure, all items should correlate well with the average of the others. A small item-total correlation provides empirical evidence that the item is not measuring the same construct measured by the other items included. A correlation value less than 0.2 or 0.3 indicates that the corresponding item does not correlate very well with the scale overall and, thus, it may be dropped.
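A minimal numeric sketch of the procedure just described, using the "corrected" variant (each item against the mean of the remaining items) and the 0.2–0.3 screening threshold mentioned above; the simulated data and the function name are illustrative.

```python
# Corrected item-total correlation: correlate each item with the mean of
# the *remaining* items; items below ~0.2-0.3 are candidates for removal.
import numpy as np

def corrected_item_total(scores: np.ndarray) -> np.ndarray:
    """scores: (n_individuals, n_items) matrix of item scores."""
    n_items = scores.shape[1]
    out = np.empty(n_items)
    for i in range(n_items):
        rest = np.delete(scores, i, axis=1).mean(axis=1)  # average of the other items
        out[i] = np.corrcoef(scores[:, i], rest)[0, 1]    # Pearson correlation
    return out

rng = np.random.default_rng(0)
latent = rng.normal(size=200)                         # the construct being measured
items = latent[:, None] + rng.normal(size=(200, 5))   # five noisy items
items[:, 4] = rng.normal(size=200)                    # a 'garbage' item, unrelated
print(corrected_item_total(items))                    # last value should be near 0
```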
**Boccia classification**
Boccia classification:
Boccia classification is the classification system governing boccia, a sport designed specifically for people with disabilities. Classification is handled by Cerebral Palsy International Sports and Recreation Association. There are four classifications for this sport. All four classes are eligible to compete at the Paralympic Games.
Definition:
Boccia classification at the Paralympic Games is the basis for determining who can compete in the sport, and within which class. It is used for the purposes of establishing fair competition. Entry is eligible to athletes with cerebral palsy or severe disabilities (such as muscular dystrophy, brain or spinal injury). There are four boccia classifications based upon functional ability. This sport has rules that were designed specifically with people with disabilities in mind.
Governance:
In 1983, the rules for this sport and the approval of its classification were handled by the Cerebral Palsy International Sports and Recreation Association (CP-ISRA). This remained the case in 2012.
History:
In 1983, CP-ISRA was responsible for the classification of competitors in boccia. Their classification followed the system designed for field athletics events and originally used five cerebral palsy classes for competitors. Class 1 competitors could compete in co-ed team events, which included three competitors from class 1 and class 2, with one required to be a class 1 competitor. They could also compete in the individual events. By the early 1990s, boccia classification had moved away from a medical-based system to a functional classification system. In 1992, the International Paralympic Committee formally took control of governance for many disability sports. Because of issues in objectively identifying functionality that plagued the Games following Barcelona, the IPC unveiled plans to develop a new classification system in 2003. This classification system went into effect in 2007, and defined ten different disability types that were eligible to participate on the Paralympic level. It required that classification be sport-specific, and it served two roles: determining eligibility to participate in the sport, and creating specific groups of sportspeople who were eligible to participate and in which class. The IPC left it up to International Federations to develop their own classification systems within this framework, with the specification that their classification systems use an evidence-based approach developed through research.
Eligibility:
As of 2012, people with physical disabilities are eligible to compete in this sport. The level of physical impairment must be significant, such as brain injury or total body impaired function (as in the case of cerebral palsy). In 1983, CP-ISRA set the eligibility rules for classification for this sport. They defined cerebral palsy as a non-progressive brain lesion that results in impairment. People with cerebral palsy or non-progressive brain damage were eligible for classification by them. The organisation also dealt with classification for people with similar impairments. For their classification system, people with spina bifida were not eligible unless they had medical evidence of locomotor dysfunction. People with cerebral palsy and epilepsy were eligible provided the condition did not interfere with their ability to compete. People who had strokes were eligible for classification following medical clearance. Competitors with multiple sclerosis, muscular dystrophy and arthrogryposis were not eligible for classification by CP-ISRA, but were eligible for classification by the International Sports Organisation for the Disabled for the Games of Les Autres.
Classes:
There are four classes in Boccia. Athletes are grouped according to their impairment as follows: BC1. Athletes who have Cerebral Palsy. They either kick or throw the ball. They may request the use of an assistant, providing the assistant remains outside of the athlete's box.
BC2. Athletes who have Cerebral Palsy but are able to better throw the ball than BC1 players. They are not allowed the use of an assistant.
BC3. Athletes with a severe physical disability (Cerebral Palsy or other) that prevents them from throwing or kicking the ball three metres. They require assistive equipment such as a ramp. An assistant is also allowed within the athlete's box, however they are not allowed to observe gameplay.
BC4. Athletes who have a significant physical disability (non-Cerebral Palsy) that makes it difficult for them to throw the ball. No assistants or assistive devices may be used. These classes have some parallels with the cerebral palsy sport classification system used by CP-ISRA for the CP1 and CP2 classes.
Process:
For a boccia athlete to compete at the Paralympic Games, international classification by an International Classification Panel is required. The International Classification Panel will allocate a class to the athlete and rule which (if any) assistive equipment the athlete may use. Their ruling overrides all prior classifications including those of a national basis. Athletes must be classified according to their disability and level of impairment. The classification process normally involves a physical assessment to authenticate the disability and evaluate the degree of limitation. The athlete will be observed in competition action. Results will place the athlete in one of the four classes (see Classes): this evaluation cannot be used for sports outside of Boccia. For Australian competitors in this sport, the sport and classification is managed by the Australian Paralympic Committee. There are three types of classification available for Australian competitors: Provisional, national and international. The first is for club level competitions, the second for state and national competitions, and the third for international competitions.
At the Paralympic Games:
At the 1992 Summer Paralympics, cerebral palsy disability types were eligible to participate, with classification being run through CP-ISRA and based on disability type. At the 2000 Summer Paralympics, 7 assessments were conducted at the Games, resulting in no class changes; 1 PNS protest was filed, and the classification was upheld. Boccia competition at the 2012 Summer Paralympics in London was held at the ExCeL Exhibition Centre from 2 September to 8 September. Competition play was mixed: 104 men and women competed for seven medal events. In each team event, one team of three athletes per country was allowed. For the 2016 Summer Paralympics in Rio, the International Paralympic Committee had a zero classification at the Games policy. This policy was put into place in 2014, with the goal of avoiding last-minute changes in classes that would negatively impact athlete training preparations. All competitors needed to be internationally classified with their classification status confirmed prior to the Games, with exceptions to this policy being dealt with on a case-by-case basis. In case there was a need for classification or reclassification at the Games despite best efforts otherwise, boccia classification was scheduled for September 8 at Carioca Arena 2. For sportspeople with physical or intellectual disabilities going through classification or reclassification in Rio, their in-competition observation event was their first appearance in competition at the Games.
Future:
Going forward, disability sport's major classification body, the International Paralympic Committee, is working on improving classification to be more of an evidence-based system as opposed to a performance-based system so as not to punish elite athletes whose performance makes them appear in a higher class alongside competitors who train less.
**FKM**
FKM:
FKM is a family of fluorocarbon-based fluoroelastomer materials defined by ASTM International standard D1418, and ISO standard 1629. It is commonly called fluorine rubber or fluoro-rubber. FKM is an abbreviation of Fluorine Kautschuk Material. All FKMs contain vinylidene fluoride as a monomer. Originally developed by DuPont (under the brand name Viton, now owned by Chemours), FKMs are today also produced by many companies, including: Daikin (Dai-El), 3M (Dyneon), Solvay S.A. (Tecnoflon), HaloPolymer (Elaftor), Gujarat Fluorochemicals (Fluonox), and several Chinese manufacturers. Fluoroelastomers are more expensive than neoprene or nitrile rubber elastomers. They provide additional heat and chemical resistance. FKMs can be divided into different classes on the basis of either their chemical composition, their fluorine content, or their cross-linking mechanism.
Types:
On the basis of their chemical composition, FKMs can be divided into the following types: Type-1 FKMs are composed of vinylidene fluoride (VDF) and hexafluoropropylene (HFP). These copolymers are the standard type of FKMs, showing good overall performance. Their fluorine content is approximately 66 weight percent (a rough weight-fraction check of this figure is sketched after this list).
Type-2 FKMs are composed of VDF, HFP, and tetrafluoroethylene (TFE). Terpolymers have a higher fluorine content compared to copolymers (typically between 68 and 69 weight percent fluorine), which results in better chemical and heat resistance. Compression set and low temperature flexibility may be affected negatively.
Type-3 FKMs are composed of VDF, TFE, and perfluoromethylvinylether (PMVE). The addition of PMVE provides better low temperature flexibility compared to copolymers and terpolymers. Typically, the fluorine content of type-3 FKMs ranges from 62 to 68 weight percent.
Type-4 FKMs are composed of propylene, TFE, and VDF. While base resistance is increased in type-4 FKMs, their swelling properties, especially in hydrocarbons, are worsened. Typically, they have a fluorine content of about 67 weight percent.
Type-5 FKMs are composed of VDF, HFP, TFE, PMVE, and ethylene. Known for base resistance and high-temperature resistance to hydrogen sulfide.
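As a sanity check on the quoted fluorine contents, the sketch below computes the fluorine weight fraction of a VDF/HFP monomer blend. Treating "fluorine content" as the weight fraction of fluorine atoms, and the 60/40 blend ratio, are the author's assumptions for illustration.

```python
# Rough check of the ~66 wt% fluorine figure quoted for VDF/HFP copolymers.
ATOMIC = {"C": 12.011, "H": 1.008, "F": 18.998}

def f_weight_fraction(c: int, h: int, f: int) -> float:
    """Fluorine weight fraction of a monomer with formula C_c H_h F_f."""
    total = c * ATOMIC["C"] + h * ATOMIC["H"] + f * ATOMIC["F"]
    return f * ATOMIC["F"] / total

vdf = f_weight_fraction(2, 2, 2)   # vinylidene fluoride, CH2=CF2  (~0.59)
hfp = f_weight_fraction(3, 0, 6)   # hexafluoropropylene, C3F6     (~0.76)

# A roughly 60/40 VDF/HFP blend (by weight) lands near the quoted 66 wt% F.
print(0.60 * vdf + 0.40 * hfp)     # ~0.66
```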
Cross-linking mechanisms:
There are three established cross-linking mechanisms used in the curing process of FKMs.
Cross-linking mechanisms:
Diamine cross-linking using a blocked diamine. In the presence of basic (alkaline) media, VDF is vulnerable to dehydrofluorination, which enables the addition of the diamine to the polymer chain. Typically, magnesium oxide is added to neutralize the resulting hydrofluoric acid, forming magnesium fluoride and water. Although rarely used today, diamine curing provides superior rubber-to-metal bonding properties compared with other cross-linking mechanisms. The diamine's capability to be hydrated makes the diamine cross-link vulnerable in aqueous media.
Cross-linking mechanisms:
Ionic cross-linking (dihydroxy cross-linking) was the next step in curing FKMs. This is today the most common cross-linking chemistry used for FKMs. It provides superior heat resistance, improved hydrolytic stability and better compression set than diamine curing. In contrast to diamine curing, the ionic mechanism is not an addition mechanism but an aromatic nucleophilic substitution. Dihydroxy aromatic compounds are used as the cross-linking agent, and quaternary phosphonium salts are typically used to accelerate the curing process.
Cross-linking mechanisms:
Peroxide cross-linking was originally developed for type 3 FKMs containing PMVE as diamine and bisphenolic cross-linking systems can lead to cleavage in a polymer backbone chain containing PMVE. While diamine and bisphenolic cross-linking are ionic reactions, peroxide cross-linking is a free-radical mechanism. Though peroxide cross-links are not as thermally stable as bisphenolic cross-links, they normally are the system of choice in aqueous media and nonaqueous electrolyte media.
Properties:
Fluoroelastomers provide excellent resistance to high temperatures (up to 500°F or 260°C) and aggressive fluids compared with other elastomers, combining stability against many sorts of chemicals and fluids such as oil, diesel, ethanol blends and body fluids. The performance of fluoroelastomers in aggressive chemicals depends on the nature of the base polymer and the compounding ingredients used for molding the final products (e.g. o-rings). Some formulations are generally compatible with hydrocarbons, but incompatible with ketones such as acetone and methyl ethyl ketone, ester solvents such as ethyl acetate, amines, and organic acids such as acetic acid.
Properties:
They can be easily distinguished from many other elastomers because of their high density of over 1800 kg/m3, significantly higher than most types of rubber.
Applications:
Because of their outstanding performance they find use in a number of sectors, including the following: Chemical process and petroleum refining, where they are used for seals, pumps, gaskets and so on, due to their resistance to chemicals; Analysis and process instruments: separators, diaphragms, cylindrical fittings, hoops, gaskets, etc.
Applications:
Semiconductor manufacturing; food and pharmaceutical applications, because of their low degradation, including in contact with fluids; aviation and aerospace, where high operating temperatures and high altitudes require superior heat and low-temperature resistance. They are suitable for the production of wearables, due to low wear and discoloration even during prolonged lifetimes in contact with skin oils and frequent exposure to light, while guaranteeing high comfort and stain resistance. The automotive industry represents their main application sector, where the constant reach for higher efficiencies pushes manufacturers towards high-performing materials. An example is the FKM o-rings used as an upgrade to the original neoprene seals on Corvair pushrod tubes, which deteriorated under the high heat produced by the engine, allowing oil leakage. FKM tubing or lined hoses are commonly recommended in automotive and other transportation fuel applications when high concentrations of biodiesel are required. Studies indicate that types B and F (FKM-GBL-S and FKM-GF-S) are more resistant to acidic biodiesel (this is because biodiesel fuel is unstable and oxidizing). FKM O-rings have been used safely for some time in SCUBA diving by divers using gas blends referred to as nitrox. FKMs are used because they have a lower probability of catching fire, even with the increased percentages of oxygen found in nitrox. They are also less susceptible to decay under increased oxygen conditions.
Applications:
While these materials have a wide range of applications, their cost is prohibitive when compared to other types of elastomers, meaning that their adoption must be justified by the need for outstanding performance (as in the aerospace sector) and is inadvisable for low-cost products.
FKM/butyl gloves are highly impermeable to many strong organic solvents that would destroy or permeate commonly used gloves (such as those made with nitriles).
Precautions:
At high temperatures or in a fire, fluoroelastomers decompose and may release hydrogen fluoride. Any residue must be handled using protective equipment.
**Analogy**
Analogy:
Analogy is a comparison or correspondence between two things (or two groups of things) because of a third element that they are considered to share. In logic, it is an inference or an argument from one particular to another particular, as opposed to deduction, induction, and abduction. The term is also used where at least one of the premises, or the conclusion, is general rather than particular in nature. It has the general form A is to B as C is to D.
Analogy:
In a broader sense, analogical reasoning is a cognitive process of transferring some information or meaning of a particular subject (the analog, or source) onto another (the target); and also the linguistic expression corresponding to such a process. The term analogy can also refer to the relation between the source and the target themselves, which is often (though not always) a similarity, as in the biological notion of analogy.
Analogy:
Analogy plays a significant role in human thought processes. It has been argued that analogy lies at "the core of cognition".
Etymology:
The English word analogy derives from the Latin analogia, itself derived from the Greek ἀναλογία, "proportion", from ana- "upon, according to" [also "against", "anew"] + logos "ratio" [also "word, speech, reckoning"]
Models and theories:
Analogy plays a significant role in problem solving, as well as decision making, argumentation, perception, generalization, memory, creativity, invention, prediction, emotion, explanation, conceptualization and communication. It lies behind basic tasks such as the identification of places, objects and people, for example, in face perception and facial recognition systems. Hofstadter has argued that analogy is "the core of cognition". An analogy is not a figure of speech but a kind of thought. Specific analogical language uses exemplification, comparisons, metaphors, similes, allegories, and parables, but not metonymy. Phrases like and so on, and the like, as if, and the very word like also rely on an analogical understanding by the receiver of a message including them. Analogy is important not only in ordinary language and common sense (where proverbs and idioms give many examples of its application) but also in science, philosophy, law and the humanities. The concepts of association, comparison, correspondence, mathematical and morphological homology, homomorphism, iconicity, isomorphism, metaphor, resemblance, and similarity are closely related to analogy. In cognitive linguistics, the notion of conceptual metaphor may be equivalent to that of analogy. Analogy is also a basis for any comparative arguments as well as experiments whose results are transmitted to objects that have been not under examination (e.g., experiments on rats when results are applied to humans).
Models and theories:
Analogy has been studied and discussed since classical antiquity by philosophers, scientists, theologists and lawyers. The last few decades have shown a renewed interest in analogy, most notably in cognitive science.
Development:
Aristotle identified analogy in works such as the Metaphysics and the Nicomachean Ethics.
Roman lawyers used analogical reasoning and the Greek word analogia.
In Islamic logic, analogical reasoning was used for the process of qiyas in Islamic sharia law and fiqh jurisprudence.
Medieval lawyers distinguished analogia legis and analogia iuris (see below).
The Middle Ages saw an increased use and theorization of analogy.
In Christian scholastic theology, analogical arguments were accepted in order to explain the attributes of God.
Aquinas made a distinction between equivocal, univocal and analogical terms, the last being those like healthy that have different but related meanings. Not only can a person be "healthy", but also the food that is good for health (see the contemporary distinction between polysemy and homonymy).
Models and theories:
Thomas Cajetan wrote an influential treatise on analogy. In all of these cases, the wide Platonic and Aristotelian notion of analogy was preserved. Cajetan named several kinds of analogy that had been used but previously unnamed, particularly:
Analogy of attribution (analogia attributionis) or improper proportionality, e.g., "This food is healthy."
Analogy of proportionality (analogia proportionalitatis) or proper proportionality, e.g., "2 is to 1 as 4 is to 2", or "the goodness of humans is relative to their essence as the goodness of God is relative to God's essence."
Metaphor, e.g., steely determination.
Models and theories:
Identity of relation:
In ancient Greek the word αναλογια (analogia) originally meant proportionality, in the mathematical sense, and it was indeed sometimes translated to Latin as proportio. Analogy was understood as identity of relation between any two ordered pairs, whether of mathematical nature or not. Analogy and abstraction are different cognitive processes, and analogy is often the easier one. An analogy such as "the palm is to the hand as the sole is to the foot" does not compare all the properties of a hand and a foot, but rather the relationship between a hand and its palm and a foot and its sole. While a hand and a foot have many dissimilarities, the analogy focuses on their similarity in having an inner surface.
Models and theories:
The same notion of analogy was used in the US-based SAT college admission tests, which included "analogy questions" of the form "A is to B as C is to what?" For example, "Hand is to palm as foot is to ____?" These questions were usually given in the Aristotelian format: HAND : PALM : : FOOT : ____ While most competent English speakers will immediately give the right answer to the analogy question (sole), it is more difficult to identify and describe the exact relation that holds both between pairs such as hand and palm, and between foot and sole. This relation is not apparent in some lexical definitions of palm and sole, where the former is defined as the inner surface of the hand, and the latter as the underside of the foot. Kant's Critique of Judgment held to this notion of analogy, arguing that there can be exactly the same relation between two completely different objects.
Models and theories:
Shared abstraction:
Greek philosophers such as Plato and Aristotle used a wider notion of analogy. They saw analogy as a shared abstraction. Analogous objects did not necessarily share a relation; they could also share an idea, a pattern, a regularity, an attribute, an effect or a philosophy. These authors also accepted that comparisons, metaphors and "images" (allegories) could be used as arguments, and sometimes they called them analogies. Analogies should also make those abstractions easier to understand and give confidence to those using them.
Models and theories:
James Francis Ross in Portraying Analogy (1982), the first substantive examination of the topic since Cajetan's De Nominum Analogia, demonstrated that analogy is a systematic and universal feature of natural languages, with identifiable and law-like characteristics which explain how the meanings of words in a sentence are interdependent.
Models and theories:
Special case of induction:
On the contrary, Ibn Taymiyya, Francis Bacon and later John Stuart Mill argued that analogy is simply a special case of induction. In their view analogy is an inductive inference from common known attributes to another probable common attribute, which is known only about the source of the analogy, in the following form:
Premises: a is C, D, E, F, G; b is C, D, E, F.
Conclusion: b is probably G.
Models and theories:
Shared structure:
Contemporary cognitive scientists use a wide notion of analogy, extensionally close to that of Plato and Aristotle, but framed by Gentner's (1983) structure mapping theory. The same idea of mapping between source and target is used by conceptual metaphor and conceptual blending theorists. Structure mapping theory concerns both psychology and computer science. According to this view, analogy depends on the mapping or alignment of the elements of source and target. The mapping takes place not only between objects, but also between relations of objects and between relations of relations. The whole mapping yields the assignment of a predicate or a relation to the target. Structure mapping theory has been applied and has found considerable confirmation in psychology. It has had reasonable success in computer science and artificial intelligence (see below). Some studies extended the approach to specific subjects, such as metaphor and similarity.
Applications and types:
Logic:
Logicians analyze how analogical reasoning is used in arguments from analogy.
Applications and types:
An analogy can be stated using is to and as when representing the analogous relationship between two pairs of expressions, for example, "Smile is to mouth, as wink is to eye." In the field of mathematics and logic, this can be formalized with colon notation to represent the relationships, using a single colon for ratio, and a double colon for equality. In the field of testing, the colon notation of ratios and equality is often borrowed, so that the example above might be rendered, "Smile : mouth :: wink : eye" and pronounced the same way.
Applications and types:
Linguistics:
An analogy can be the linguistic process that reduces word forms thought to break rules to more common forms that follow these rules. For example, the English verb help once had the preterite (simple past tense in English) holp and the past participle holpen. These old-fashioned forms have been discarded and replaced by helped by using the power of analogy (or by applying the more frequently used Verb-ed rule). This is called morphological leveling. Analogies can sometimes create rule-breaking forms; one example is the American English past tense form of dive: dove, formed on analogy with words such as drive: drove.
Applications and types:
Neologisms can also be formed by analogy with existing words. A good example is software, formed by analogy with hardware; other analogous neologisms such as firmware and vapourware have followed. Another example is the humorous term underwhelm, formed by analogy with overwhelm.
Applications and types:
Some people present analogy as an alternative to generative rules for explaining the productive formation of structures such as words. Others argue that they are in fact the same and that rules are analogies that have essentially become standard parts of the linguistic system, whereas clearer cases of analogy have simply not (yet) done so (e.g. Langacker 1987.445–447). This view agrees with the current views of analogy in cognitive science which are discussed above. Analogy is also a term used in the Neogrammarian school of thought as a catch-all to describe any morphological change in a language that cannot be explained merely by sound change or borrowing.
Applications and types:
Science:
Analogies are mainly used as a means of creating new ideas and hypotheses, which is called the heuristic function of analogical reasoning.
Applications and types:
Analogical arguments can also be probative, meaning that they serve as a means of proving the rightness of particular theses and theories. This application of analogical reasoning in science is debatable. Analogy can help prove important theories, especially in those kinds of science in which logical or empirical proof is not possible, such as theology, philosophy or cosmology insofar as they relate to those areas of the cosmos (the universe) that are beyond any data-based observation, where knowledge stems from human insight and thinking beyond the senses. Analogy may be used to illustrate and teach. To enlighten pupils on the relations that happen between or inside certain things or phenomena, a teacher may refer to other things or phenomena that pupils are more familiar with. It may help in creating or clarifying one theory (theoretical model) via the workings of another theory (theoretical model). Thus it can be used in theoretical and applied sciences in the form of models or simulations which can be considered as strong analogies. Other much weaker analogies assist in understanding and describing functional behaviours of similar systems. For instance, an analogy used in physics textbooks compares electrical circuits to hydraulic circuits. Another example is the analogue ear based on electrical, electronic or mechanical devices.
Applications and types:
Mathematics:
Some types of analogies can have a precise mathematical formulation through the concept of isomorphism. In detail, this means that if two mathematical structures are of the same type, an analogy between them can be thought of as a bijection which preserves some or all of the relevant structure. For example, R² and C are isomorphic as vector spaces, but the complex numbers C have more structure than R² does: C is a field as well as a vector space.
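A tiny numeric illustration (a spot check, not a proof) of that vector-space isomorphism: the map (a, b) to a + bi is a bijection preserving addition and real scalar multiplication. The test values are arbitrary.

```python
# Spot-check that phi: R^2 -> C, (a, b) |-> a + bi, preserves the
# vector-space operations (addition and real scalar multiplication).
def phi(v):
    a, b = v
    return complex(a, b)

u, w = (1.0, 2.0), (-3.0, 0.5)
s = 4.0
assert phi((u[0] + w[0], u[1] + w[1])) == phi(u) + phi(w)   # additivity
assert phi((s * u[0], s * u[1])) == s * phi(u)              # homogeneity
# C's extra field structure (complex multiplication) has no counterpart
# among the vector-space operations of R^2, matching the remark above.
```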
Applications and types:
Category theory takes the idea of mathematical analogy much further with the concept of functors. Given two categories C and D, a functor f from C to D can be thought of as an analogy between C and D, because f has to map objects of C to objects of D and arrows of C to arrows of D in such a way that the structure of their respective parts is preserved. This is similar to the structure mapping theory of analogy of Dedre Gentner, because it formalises the idea of analogy as a function which makes certain conditions true.
Applications and types:
Artificial intelligence:
A computer algorithm has achieved human-level performance on multiple-choice analogy questions from the SAT test. The algorithm measures the similarity of relations between pairs of words (e.g., the similarity between the pairs HAND:PALM and FOOT:SOLE) by statistically analysing a large collection of text. It answers SAT questions by selecting the choice with the highest relational similarity. Analogical reasoning in the human mind is free of the false inferences plaguing conventional artificial intelligence models (a property called systematicity). Steven Phillips and William H. Wilson use category theory to mathematically demonstrate how such reasoning could arise naturally by using relationships between the internal arrows that keep the internal structures of the categories rather than the mere relationships between the objects (called "representational states"). Thus, the mind, and more intelligent AIs, may use analogies between domains whose internal structures transform naturally and reject those that do not.
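The corpus-statistics algorithm itself is not reproduced here; the sketch below shows a related, later technique, the vector-offset method over word embeddings, applied to the same HAND : PALM :: FOOT : ? question. The toy 3-dimensional vectors are fabricated purely for illustration.

```python
# A hedged sketch of the vector-offset approach to analogy questions:
# pick the candidate whose offset from "foot" best matches the
# "hand" -> "palm" offset. Toy vectors stand in for statistics that a
# real system would learn from a large text collection.
import numpy as np

vectors = {  # hypothetical 3-d embeddings
    "hand": np.array([1.0, 0.0, 0.2]),
    "palm": np.array([1.0, -1.0, 0.2]),
    "foot": np.array([0.0, 0.0, 1.0]),
    "sole": np.array([0.0, -1.0, 1.0]),
    "eye":  np.array([0.5, 1.0, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

offset = vectors["palm"] - vectors["hand"]          # the hand -> palm relation
candidates = ["sole", "eye"]
best = max(candidates, key=lambda w: cosine(vectors[w] - vectors["foot"], offset))
print(best)  # "sole"
```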
Applications and types:
Keith Holyoak and Paul Thagard (1997) developed their multiconstraint theory within structure mapping theory. They defend that the "coherence" of an analogy depends on structural consistency, semantic similarity and purpose. Structural consistency is highest when the analogy is an isomorphism, although lower levels can be used as well. Similarity demands that the mapping connects similar elements and relationships between source and target, at any level of abstraction. It is highest when there are identical relations and when connected elements have many identical attributes. An analogy achieves its purpose if it helps solve the problem at hand. The multiconstraint theory faces some difficulties when there are multiple sources, but these can be overcome. Hummel and Holyoak (2005) recast the multiconstraint theory within a neural network architecture. A problem for the multiconstraint theory arises from its concept of similarity, which, in this respect, is not obviously different from analogy itself. Computer applications demand that there are some identical attributes or relations at some level of abstraction. The model was extended (Doumas, Hummel, and Sandhofer, 2008) to learn relations from unstructured examples (providing the only current account of how symbolic representations can be learned from examples). Mark Keane and Brayshaw (1988) developed their Incremental Analogy Machine (IAM) to include working memory constraints as well as structural, semantic and pragmatic constraints, so that a subset of the base analogue is selected and mapping from base to target occurs in series. Empirical evidence shows that humans are better at using and creating analogies when the information is presented in an order where an item and its analogue are placed together. Eqaan Doug and his team challenged the shared structure theory and mostly its applications in computer science. They argue that there is no clear line between perception, including high-level perception, and analogical thinking. In fact, analogy occurs not only after, but also before and at the same time as high-level perception. In high-level perception, humans make representations by selecting relevant information from low-level stimuli. Perception is necessary for analogy, but analogy is also necessary for high-level perception. Chalmers et al. conclude that analogy actually is high-level perception. Forbus et al. (1998) claim that this is only a metaphor. It has been argued (Morrison and Dietrich 1995) that Hofstadter's and Gentner's groups do not defend opposite views, but instead deal with different aspects of analogy.
Applications and types:
Anatomy:
In anatomy, two anatomical structures are considered to be analogous when they serve similar functions but are not evolutionarily related, such as the legs of vertebrates and the legs of insects. Analogous structures are the result of independent evolution and should be contrasted with homologous structures, which share an evolutionary line.
Engineering:
Often a physical prototype is built to model and represent some other physical object. For example, wind tunnels are used to test scale models of wings and aircraft, which are analogous to (correspond to) full-size wings and aircraft.
Similarly, the MONIAC (an analogue computer) used the flow of water in its pipes as an analogue of the flow of money in an economy.
Cybernetics:
Where two or more biological or physical participants meet, they communicate, and the stresses produced describe internal models of the participants. Pask, in his conversation theory, asserts that there exists an analogy which describes both similarities and differences between any pair of the participants' internal models or concepts.
History:
In historical science, comparative historical analysis often uses the concept of analogy and analogical reasoning. Recent computational methods operate on large document archives, allowing analogical or corresponding terms from the past to be found in response to arbitrary user queries (e.g., Myanmar - Burma) and explained.
Applications and types:
Morality:
Analogical reasoning plays a very important part in morality. This may be because morality is supposed to be impartial and fair. If it is wrong to do something in situation A, and situation B corresponds to A in all relevant features, then it is also wrong to perform that action in situation B. Moral particularism accepts such reasoning, instead of deduction and induction, since only the former can be used regardless of any moral principles.
Applications and types:
Psychology:
Structure mapping theory. Structure mapping, originally proposed by Dedre Gentner, is a theory in psychology that describes the psychological processes involved in reasoning through, and learning from, analogies. More specifically, this theory aims to describe how familiar knowledge, or knowledge about a base domain, can be used to inform an individual's understanding of a less familiar idea, or a target domain. According to this theory, individuals view their knowledge of ideas, or domains, as interconnected structures. In other words, a domain is viewed as consisting of objects, their properties, and the relationships that characterise their interactions. The process of analogy then involves:
Recognising similar structures between the base and target domains.
Finding deeper similarities by mapping other relationships of a base domain to the target domain.
Cross-checking those findings against existing knowledge of the target domain.
In general, it has been found that people prefer analogies where the two systems correspond highly to each other (e.g. have similar relationships across the domains as opposed to just having similar objects across domains) when these people try to compare and contrast the systems. This is also known as the systematicity principle. An example that has been used to illustrate structure mapping theory comes from Gentner and Gentner (1983) and uses the base domain of flowing water and the target domain of electricity. In a system of flowing water, the water is carried through pipes and the rate of water flow is determined by the pressure of the water towers or hills. This relationship corresponds to that of electricity flowing through a circuit. In a circuit, the electricity is carried through wires and the current, or rate of flow of electricity, is determined by the voltage, or electrical pressure. Given the similarity in structure, or structural alignment, between these domains, structure mapping theory would predict that relationships from one of these domains would be inferred in the other using analogy.
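A minimal data-structure sketch of that water/electricity alignment follows; the labels and the dictionary representation are the author's illustration, not Gentner's notation.

```python
# Structure mapping aligns objects *and* the relations that hold between
# them; relations map to relations, not just to objects.
water_to_electricity = {
    # objects
    "pipe": "wire",
    "water": "electricity",
    "pressure": "voltage",
    "flow_rate": "current",
    # relations (predicates over objects)
    ("determines", "pressure", "flow_rate"): ("determines", "voltage", "current"),
    ("carried_through", "water", "pipe"): ("carried_through", "electricity", "wire"),
}
# An inference the analogy licenses: since higher pressure raises the flow
# rate, higher voltage should raise the current.
```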
Applications and types:
Children:
Children do not always need prompting to make comparisons in order to learn abstract relationships. Eventually, children undergo a relational shift, after which they begin seeing similar relations across different situations instead of merely looking at matching objects. This is critical in their cognitive development, as continuing to focus on specific objects would reduce children's ability to learn abstract patterns and reason analogically. Interestingly, some researchers have proposed that children's basic brain functions (i.e., working memory and inhibitory control) do not drive this relational shift. Instead, it is driven by their relational knowledge, such as having labels for objects that make the relationships clearer (see previous section). However, there is not enough evidence to determine whether the relational shift is actually because basic brain functions become better or relational knowledge becomes deeper. Additionally, research has identified several factors that may increase the likelihood that a child may spontaneously engage in comparison and learn an abstract relationship, without the need for prompts. Comparison is more likely when the objects to be compared are close together in space and/or time, are highly similar (although not so similar that they match, which would interfere with identifying relationships), or share common labels.
Applications and types:
Law:
In law, analogy is primarily used to resolve issues on which there is no previous authority. A distinction can be made between analogical reasoning employed in statutory law and analogical reasoning present in precedential law (case law).
Statutory:
In statutory law, analogy is used in order to fill the so-called lacunae, gaps or loopholes.
Applications and types:
A gap arises when a specific case or legal issue is not clearly dealt with in written law. Then, one may identify a provision required by law which covers cases that are similar to the case at hand and apply this provision to this case by analogy. Such a gap, in civil law countries, is referred to as a gap extra legem (outside of the law), while the analogy which closes it is termed analogy extra legem (outside of the law). The case at hand itself is named an unprovided case. A second gap comes into being when there is a law-controlled provision which applies to the case at hand but this provision leads in this case to an unwanted outcome. Then, one may try to find another law-controlled provision that covers cases similar to the case at hand, using analogy to act upon this provision instead of the provision that applies to it directly. This kind of gap is called a gap contra legem (against the law), while the analogy which fills it is referred to as analogy contra legem (against the law). A third gap occurs where a law-controlled provision regulates the case at hand, but is unclear or ambiguous. In such circumstances, to decide the case at hand, one may try to find out what this provision means by relying on law-controlled provisions which address cases that are similar to the case at hand, or on other cases that are regulated by this unclear/ambiguous provision. A gap of this type is named a gap intra legem (within the law), and the analogy which deals with it is referred to as analogy intra legem (within the law). In Equity, the expression infra legem is used (below the law). The similarity upon which law-controlled analogy depends may rest on the resemblance of the raw facts of the cases being compared, on the purpose (the so-called ratio legis, which is generally the will of the legislature) of the law-controlled provision which is applied by analogy, or on some other sources.
Applications and types:
Law-controlled analogy may be also based upon more than one statutory provision or even a spirit of law. In the latter case, it is called analogia iuris (from the law in general) as opposed to analogia legis (from a specific legal provision or provisions).
Applications and types:
Case:
In case law (precedential law), analogies can be drawn from precedent cases. The judge who decides the case at hand may find that the facts of this case are similar to the facts of one of the prior cases to an extent that the outcomes of these cases should be treated as the same or similar: stare decisis. Such use of analogy in precedential law is related or connected to the so-called cases of first impression, i.e. the cases which have not been regulated by any binding judge's precedent (are not covered by a precedential rule of such a precedent).
Applications and types:
Reasoning from (dis)analogy is also frequently employed when a judge distinguishes a precedent. That is, based on the discerned differences between the case at hand and the precedential case, a judge declines to decide the case upon a precedent whose precedential rule would otherwise embrace the case at hand.
Applications and types:
There is also much room for other uses of analogy in precedential law. One of them is resort to analogical reasoning when resolving a conflict between two or more precedents which all apply to the case at hand despite dictating different legal outcomes for that case. Analogy can also take part in verifying the contents of the ratio decidendi, deciding upon precedents that have become irrelevant, or quoting precedents from other jurisdictions. It is visible in legal education, notably in the US (the so-called 'case method').
Applications and types:
Restrictions and civil law:
The law of every jurisdiction is different. In legal matters, sometimes the use of analogy is forbidden (by the very law or by common agreement between judges and scholars): the most common instances concern criminal, international, administrative and tax law, especially in jurisdictions which do not have a common law system. For example: Analogy should not be resorted to in criminal matters whenever its outcome would be unfavorable to the accused or suspect. Such a ban finds its footing in the principle "nullum crimen, nulla poena sine lege", which is understood to mean that there is no crime (punishment) unless it is plainly provided for in a law-controlled provision or an already existing judicial precedent. Analogy should be applied with caution in the domain of tax law. Here, the principle "nullum tributum sine lege" justifies a general ban on the usage of analogy that would lead to an increase in taxation or whose results would, for some other reason, be harmful to the interests of taxpayers. Extending by analogy those provisions of administrative law that restrict human rights and the rights of citizens (particularly the category of the so-called "individual rights" or "basic rights") is prohibited in many jurisdictions. Analogy generally should also not be resorted to in order to make the citizen's burdens and obligations larger. The other limitations on the use of analogy in law, among many others, apply to: the analogical extension of statutory provisions that involve exceptions to more general law-controlled regulation or provisions (this restriction flows from the Latin maxims, well known especially in civil law continental legal systems: "exceptiones non sunt extendendae", "exceptio est strictissimae interpretationis" and "singularia non sunt extendenda"); the usage of an analogical argument with regard to those law-controlled provisions which comprise lists (enumerations); extending by analogy those law-controlled provisions that give the impression that the Legislator intended to regulate some issues in an exclusive (exhaustive) manner (such a manner is especially implied when the wording of a given statutory provision involves such pointers as "only", "exclusively", "solely", "always", "never") or which have a plain precise meaning. In civil law jurisdictions, analogy may be permitted or required by law. But also in this branch of law there are some restrictions confining the possible scope of the use of an analogical argument. Such is, for instance, the prohibition on using analogy in relation to provisions regarding time limits, or a general ban on the recourse to analogical arguments which lead to the extension of those statutory provisions which envisage some obligations or burdens or which order (mandate) something. Other examples concern the usage of analogy in the field of property law, especially when one is going to create some new property rights by it, or to extend those statutory provisions whose terms are unambiguous (unequivocal) and plain (clear), e.g.: being of or under a certain age.
Applications and types:
Teaching strategies:
Analogies as defined in rhetoric are a comparison between words, but an analogy can be used in teaching as well. An analogy as used in teaching would be comparing a topic that students are already familiar with, with a new topic that is being introduced, so that students can get a better understanding of the topic and relate back to previous knowledge. Shawn Glynn, a professor in the department of educational psychology and instructional technology at the University of Georgia, developed a theory on teaching with analogies and developed steps to explain the process of teaching with this method. The steps for teaching with analogies are as follows: Step one is introducing the new topic that is about to be taught and giving some general knowledge on the subject. Step two is reviewing the concept that the students already know to ensure they have the proper knowledge to assess the similarities between the two concepts. Step three is finding relevant features within the analogy of the two concepts. Step four is finding similarities between the two concepts so students are able to compare and contrast them in order to understand. Step five is indicating where the analogy breaks down between the two concepts. And finally, step six is drawing a conclusion about the analogy and comparing the new material with the already learned material. Typically this method is used to learn topics in science. In 1989, teacher Kerry Ruef began a program titled The Private Eye Project. It is a method of teaching that revolves around using analogies in the classroom to better explain topics. She conceived the idea of using analogies as part of a curriculum while observing objects, recalling: "my mind was noting what else each object reminded me of..." This led her to teach with the question, "what does [the subject or topic] remind you of?" The idea of comparing subjects and concepts led to the development of The Private Eye Project as a method of teaching. The program is designed to build critical thinking skills with analogies as one of its main themes. While Glynn focuses on using analogies to teach science, The Private Eye Project can be used for any subject, including writing, math, art, social studies, and invention. It is now used by thousands of schools around the country. There are also various teaching innovations now emerging that use sight-based analogies for teaching and research across subjects, for instance between science and the humanities.
Applications and types:
Religion:
Catholicism
The Fourth Lateran Council of 1215 taught: "For between creator and creature there can be noted no similarity so great that a greater dissimilarity cannot be seen between them." The theological exploration of this subject is called the analogia entis. The consequence of this theory is that all true statements concerning God (excluding the concrete details of Jesus' earthly life) are rough analogies, without implying any falsehood. Such analogical and true statements would include God is, God is Love, God is a consuming fire, God is near to all who call him, or God as Trinity, where being, love, fire, distance, number must be classed as analogies that allow human cognition of what is infinitely beyond positive or negative language.
Applications and types:
The use of theological statements in syllogisms must take into account their analogical essence, in that every analogy breaks down when stretched beyond its intended meaning.
Applications and types:
Islam
Islamic jurisprudence makes ample use of analogy as a means of drawing conclusions from outside sources of law. The bounds and rules employed in analogical deduction vary greatly between madhhabs and, to a lesser extent, between individual scholars. It is nonetheless a generally accepted source of law within jurisprudential epistemology, with the chief opposition to it coming from the Dhahiri (literalist) school. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**N-player game**
N-player game:
In game theory, an n-player game is a game which is well defined for any number of players. This is usually used in contrast to standard 2-player games that are only specified for two players. In defining n-player games, game theorists usually provide a definition that allows for any (finite) number of players. The limiting case of n → ∞ is the subject of mean field game theory.
Changing games from 2-player games to n-player games entails some concerns. For instance, the Prisoner's dilemma is a 2-player game. One might define an n-player Prisoner's Dilemma where a single defection results in everyone else getting the sucker's payoff. Alternatively, it might take a certain amount of defection before the cooperators receive the sucker's payoff. (One example of an n-player Prisoner's Dilemma is the Diner's dilemma.) A minimal sketch of such a payoff rule appears below. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
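A minimal Python sketch of the payoff rule just described; the function name and the concrete payoff values (reward 3, sucker 0, temptation 5, punishment 1) are illustrative assumptions rather than values from the source:

```python
def n_player_pd_payoffs(actions, reward=3, sucker=0, temptation=5,
                        punishment=1, defection_threshold=1):
    """Payoffs for one round of an n-player Prisoner's Dilemma.

    actions: list of 'C' (cooperate) / 'D' (defect), one entry per player.
    Cooperators receive the sucker's payoff once the number of defectors
    reaches defection_threshold; the single-defection rule from the text
    corresponds to defection_threshold=1.
    """
    defectors = sum(1 for a in actions if a == 'D')
    payoffs = []
    for a in actions:
        if defectors == 0:
            payoffs.append(reward)  # mutual cooperation
        elif a == 'D':
            # Defectors earn the temptation payoff while any cooperator
            # remains; if everyone defects, all get the punishment payoff.
            payoffs.append(temptation if defectors < len(actions) else punishment)
        else:
            payoffs.append(sucker if defectors >= defection_threshold else reward)
    return payoffs

print(n_player_pd_payoffs(['C', 'C', 'D']))  # -> [0, 0, 5]
```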
**Small cubicuboctahedron**
Small cubicuboctahedron:
In geometry, the small cubicuboctahedron is a uniform star polyhedron, indexed as U13. It has 20 faces (8 triangles, 6 squares, and 6 octagons), 48 edges, and 24 vertices. Its vertex figure is a crossed quadrilateral.
The small cubicuboctahedron is a faceting of the rhombicuboctahedron. Its square faces and its octagonal faces are parallel to those of a cube, while its triangular faces are parallel to those of an octahedron: hence the name cubicuboctahedron. The small suffix serves to distinguish it from the great cubicuboctahedron, which also has faces in the aforementioned directions.
Related polyhedra:
It shares its vertex arrangement with the stellated truncated hexahedron. It additionally shares its edge arrangement with the rhombicuboctahedron (having the triangular faces and 6 square faces in common), and with the small rhombihexahedron (having the octagonal faces in common).
Related tilings:
As the Euler characteristic suggests, the small cubicuboctahedron is a toroidal polyhedron of genus 3 (topologically it is a surface of genus 3), and thus can be interpreted as a (polyhedral) immersion of a genus 3 polyhedral surface, in the complement of its 24 vertices, into 3-space. (A neighborhood of any vertex is topologically a cone on a figure-8, which cannot occur in an immersion. Note that the Richter reference overlooks this fact.) The underlying polyhedron (ignoring self-intersections) defines a uniform tiling of this surface, and so the small cubicuboctahedron is a uniform polyhedron. In the language of abstract polytopes, the small cubicuboctahedron is a faithful realization of this abstract toroidal polyhedron, meaning that it is a nondegenerate polyhedron and that they have the same symmetry group. In fact, every automorphism of the abstract genus 3 surface with this tiling is realized by an isometry of Euclidean space.
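For reference, the Euler characteristic computation behind the genus claim, using the vertex, edge and face counts given earlier:

```latex
\chi = V - E + F = 24 - 48 + 20 = -4,
\qquad
\chi = 2 - 2g \;\Longrightarrow\; g = 3.
```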
Related tilings:
Higher genus surfaces (genus 2 or greater) admit a metric of negative constant curvature (by the uniformization theorem), and the universal cover of the resulting Riemann surface is the hyperbolic plane. The corresponding tiling of the hyperbolic plane has vertex figure 3.8.4.8 (triangle, octagon, square, octagon). If the surface is given the appropriate metric of curvature = −1, the covering map is a local isometry and thus the abstract vertex figure is the same. This tiling may be denoted by the Wythoff symbol 3 4 | 4, and is depicted on the right.
Related tilings:
Alternatively and more subtly, by chopping up each square face into 2 triangles and each octagonal face into 6 triangles, the small cubicuboctahedron can be interpreted as a non-regular coloring of the combinatorially regular (not just uniform) tiling of the genus 3 surface by 56 equilateral triangles, meeting at 24 vertices, each with degree 7. This regular tiling is significant as it is a tiling of the Klein quartic, the genus 3 surface with the most symmetric metric (automorphisms of this tiling equal isometries of the surface), and the orientation-preserving automorphism group of this surface is isomorphic to the projective special linear group PSL(2,7), equivalently GL(3,2) (the order 168 group of all orientation-preserving isometries). Note that the small cubicuboctahedron is not a realization of this abstract polyhedron, as it only has 24 orientation-preserving symmetries (not every abstract automorphism is realized by a Euclidean isometry) – the isometries of the small cubicuboctahedron preserve not only the triangular tiling, but also the coloring, and hence are a proper subgroup of the full isometry group.
Related tilings:
The corresponding tiling of the hyperbolic plane (the universal covering) is the order-7 triangular tiling. The automorphism group of the Klein quartic can be augmented (by a symmetry which is not realized by a symmetry of the polyhedron, namely "exchanging the two endpoints of the edges that bisect the squares and octagons") to yield the Mathieu group M24. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Integrability conditions for differential systems**
Integrability conditions for differential systems:
In mathematics, certain systems of partial differential equations are usefully formulated, from the point of view of their underlying geometric and algebraic structure, in terms of a system of differential forms. The idea is to take advantage of the way a differential form restricts to a submanifold, and the fact that this restriction is compatible with the exterior derivative. This is one possible approach to certain over-determined systems, for example, including Lax pairs of integrable systems. A Pfaffian system is specified by 1-forms alone, but the theory includes other types of example of differential system. To elaborate, a Pfaffian system is a set of 1-forms on a smooth manifold (which one sets equal to 0 to find solutions to the system).
Integrability conditions for differential systems:
Given a collection of differential 1-forms αi, i = 1, 2, …, k, on an n-dimensional manifold M, an integral manifold is an immersed (not necessarily embedded) submanifold N ⊂ M whose tangent space at every point p ∈ N is annihilated by (the pullback of) each αi. A maximal integral manifold is an immersed (not necessarily embedded) submanifold i: N ⊂ M such that the kernel of the restriction map on 1-forms i∗: Ω¹_p(M) → Ω¹_p(N) is spanned by the αi at every point p of N. If in addition the αi are linearly independent, then N is (n − k)-dimensional.
Integrability conditions for differential systems:
A Pfaffian system is said to be completely integrable if M admits a foliation by maximal integral manifolds. (Note that the foliation need not be regular; i.e. the leaves of the foliation might not be embedded submanifolds.) An integrability condition is a condition on the αi to guarantee that there will be integral submanifolds of sufficiently high dimension.
Necessary and sufficient conditions:
The necessary and sufficient conditions for complete integrability of a Pfaffian system are given by the Frobenius theorem. One version states that if the ideal I algebraically generated by the collection of αi inside the ring Ω(M) is differentially closed, in other words dI⊂I, then the system admits a foliation by maximal integral manifolds. (The converse is obvious from the definitions.)
Example of a non-integrable system:
Not every Pfaffian system is completely integrable in the Frobenius sense. For example, consider the following one-form on R³ − {(0,0,0)}: θ = z dx + x dy + y dz.
If dθ were in the ideal generated by θ, we would have, by the skewness of the wedge product, θ ∧ dθ = 0.
But a direct calculation gives θ ∧ dθ = (x + y + z) dx ∧ dy ∧ dz, which is a nonzero multiple of the standard volume form on R³. Therefore, there are no two-dimensional leaves, and the system is not completely integrable.
On the other hand, for the curve defined by x = t, y = c, z = e^(−t/c), t > 0, θ as defined above vanishes identically, and hence the curve is easily verified to be a solution (i.e. an integral curve) of the above Pfaffian system for any nonzero constant c. A symbolic check of the non-integrability computation appears below.
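As a sanity check, here is a short SymPy computation of the coefficient of θ ∧ dθ (an illustrative aid, not part of the original text; it uses the standard identity that for θ = P dx + Q dy + R dz one has θ ∧ dθ = (P(R_y − Q_z) + Q(P_z − R_x) + R(Q_x − P_y)) dx ∧ dy ∧ dz):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The 1-form θ = P dx + Q dy + R dz with P = z, Q = x, R = y.
P, Q, R = z, x, y

# Coefficient of dx∧dy∧dz in θ∧dθ for a 1-form on R^3.
f = (P * (sp.diff(R, y) - sp.diff(Q, z))
     + Q * (sp.diff(P, z) - sp.diff(R, x))
     + R * (sp.diff(Q, x) - sp.diff(P, y)))

print(sp.simplify(f))  # -> x + y + z, nonzero, so no 2-dimensional leaves
```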
Examples of applications:
In Riemannian geometry, we may consider the problem of finding an orthogonal coframe θi, i.e., a collection of 1-forms forming a basis of the cotangent space at every point with ⟨θi,θj⟩=δij which are closed (dθi = 0, i = 1, 2, ..., n). By the Poincaré lemma, the θi locally will have the form dxi for some functions xi on the manifold, and thus provide an isometry of an open subset of M with an open subset of Rn. Such a manifold is called locally flat.
Examples of applications:
This problem reduces to a question on the coframe bundle of M. Suppose we had such a closed coframe Θ=(θ1,…,θn).
If we had another coframe Φ = (ϕ1, …, ϕn), then the two coframes would be related by an orthogonal transformation Φ = MΘ. If the connection 1-form is ω, then we have dΦ = ω ∧ Φ. On the other hand, dΦ = (dM) ∧ Θ + M dΘ = (dM) ∧ Θ = (dM)M⁻¹ ∧ Φ.
But ω = (dM)M⁻¹ is the Maurer–Cartan form for the orthogonal group. Therefore, it obeys the structural equation dω + ω ∧ ω = 0, and this expression is just the curvature of M, so complete integrability requires Ω = dω + ω ∧ ω = 0.
After an application of the Frobenius theorem, one concludes that a manifold M is locally flat if and only if its curvature vanishes.
Generalizations:
Many generalizations exist to integrability conditions on differential systems which are not necessarily generated by one-forms. The most famous of these are the Cartan–Kähler theorem, which only works for real analytic differential systems, and the Cartan–Kuranishi prolongation theorem. See Further reading for details. The Newlander-Nirenberg theorem gives integrability conditions for an almost-complex structure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Poll (Unix)**
Poll (Unix):
poll is a POSIX system call to wait for one or more file descriptors to become ready for use. On *BSD and macOS, it has been largely superseded by kqueue in high performance applications. On Linux, it has been superseded by ppoll and epoll. A short usage sketch follows. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
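A minimal sketch in Python, whose standard select module exposes poll() on platforms that support it; the choice of stdin and the 5-second timeout are illustrative:

```python
import select
import sys

# poll() via Python's select module (available where the OS provides poll).
poller = select.poll()
poller.register(sys.stdin.fileno(), select.POLLIN)  # watch stdin for input

# Block for up to 5000 ms; an empty result list means the call timed out.
for fd, event in poller.poll(5000):
    if event & select.POLLIN:
        print(f"fd {fd} is readable:", sys.stdin.readline().rstrip())
```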
**ISO 31-13**
ISO 31-13:
ISO 31 (Quantities and units, International Organization for Standardization, 1992) is a superseded international standard concerning physical quantities, units of measurement, their interrelationships and their presentation. It was revised and replaced by ISO/IEC 80000.
Parts:
The standard comes in 14 parts:
ISO 31-0: General principles (replaced by ISO/IEC 80000-1:2009)
ISO 31-1: Space and time (replaced by ISO/IEC 80000-3:2007)
ISO 31-2: Periodic and related phenomena (replaced by ISO/IEC 80000-3:2007)
ISO 31-3: Mechanics (replaced by ISO/IEC 80000-4:2006)
ISO 31-4: Heat (replaced by ISO/IEC 80000-5)
ISO 31-5: Electricity and magnetism (replaced by ISO/IEC 80000-6)
ISO 31-6: Light and related electromagnetic radiations (replaced by ISO/IEC 80000-7)
ISO 31-7: Acoustics (replaced by ISO/IEC 80000-8:2007)
ISO 31-8: Physical chemistry and molecular physics (replaced by ISO/IEC 80000-9)
ISO 31-9: Atomic and nuclear physics (replaced by ISO/IEC 80000-10)
ISO 31-10: Nuclear reactions and ionizing radiations (replaced by ISO/IEC 80000-10)
ISO 31-11: Mathematical signs and symbols for use in the physical sciences and technology (replaced by ISO 80000-2:2009)
ISO 31-12: Characteristic numbers (replaced by ISO/IEC 80000-11)
ISO 31-13: Solid state physics (replaced by ISO/IEC 80000-12)
A second international standard on quantities and units was IEC 60027. The ISO 31 and IEC 60027 standards were revised by the two standardization organizations in collaboration to integrate both standards into a joint standard, ISO/IEC 80000 - Quantities and Units, in which the quantities and equations used with the SI are referred to as the International System of Quantities (ISQ). ISO/IEC 80000 supersedes both ISO 31 and part of IEC 60027.
Coined words:
ISO 31-0 introduced several new words into the English language that are direct spelling-calques from the French. Some of these words have been used in scientific literature.
Related national standards:
Canada: CAN/CSA-Z234-1-89 Canadian Metric Practice Guide (covers some aspects of ISO 31-0, but is not a comprehensive list of physical quantities comparable to ISO 31) United States: There are several national SI guidance documents, such as NIST SP 811, NIST SP 330, NIST SP 814, IEEE/ASTM SI 10, SAE J916. These cover many aspects of the ISO 31-0 standard, but lack the comprehensive list of quantities and units defined in the remaining parts of ISO 31. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tetra-tert-butylethylene**
Tetra-tert-butylethylene:
Tetra-tert-butylethylene is a hypothetical organic compound, a hydrocarbon with formula C18H36, or ((H3C−)3C−)2C=C(−C(−CH3)3)2. As the name indicates, its molecular structure can be viewed as an ethylene molecule H2C=CH2 with the four hydrogens replaced by tert-butyl −C(−CH3)3 groups.
As of 2006, this compound had not yet been synthesized, in spite of many efforts. It is of interest in chemical research as an alkene whose double bond is strained but protected by steric hindrance. Theoretical studies indicate that the molecule should be stable, with a strain energy of about 93 kcal/mol (390 kJ/mol). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vertical circle**
Vertical circle:
In astronomy, a vertical circle is a great circle on the celestial sphere that is perpendicular to the horizon. Therefore, it contains the vertical direction, passing through the zenith and the nadir. There is a vertical circle for any given azimuth, where azimuth is the angle measured east from the north on the celestial horizon. The vertical circle which is on the east–west direction is called the prime vertical. The vertical circle which is on the north–south direction is called the local celestial meridian (LCM), or principal vertical. Vertical circles are part of the horizontal coordinate system.
Vertical-circle instruments were common in 19th-century observatories, where they were important for locating and recording the coordinates of celestial objects; observatories of the period often housed several special-purpose instruments along with advanced clocks. The popularly known examples were the great refractors, which grew ever larger and came to dominate observatory design, to the point that observatories were relocated simply to provide better conditions for their biggest telescope; in the modern style, an observatory often has a single instrument at a remote location on Earth or even in outer space. In the 19th century, by contrast, observatories were more modest, typically recording the coordinates of various objects and contributing to the determination of the shape of the Earth and of time. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Feelix Growing**
Feelix Growing:
Feelix Growing is a research project, started on December 1, 2006, that is working to design robots that can detect and respond to human emotional cues. The project involves six countries and 25 roboticists, developmental psychologists and neuroscientists.
The aim of the project was to build robots that "learn from humans and respond in a socially and emotionally appropriate manner". The robots are designed to respond to emotional cues from humans and use them to adapt their own behavior. The project designers wanted to facilitate integration of robots into human society so that they could more easily provide services. The project aims to create robots that can "recognize" a given emotion, such as anger or fear, in a human, and adapt their behavior to the most appropriate response after repeated interactions. Thus the project emphasizes development over time.
Feelix Growing:
Robots are expected to be able to read emotions by picking up on physical cues like movement of body and facial muscles, posture, speed of movement, eyebrow position, and distance between the human and the robot. Project participants want to design the robots to detect those emotional cues that are universal to people, rather than those specific to individuals and cultures.
The robots are made not only to detect emotions in people but also to have their own. According to Dr. Lola Cañamero, who is running the project, "Emotions foster adaptation to environment, so robots would be better at learning things. For example, anything that damages the body would be painful, so a robot would learn not to do it again." Cañamero says that the robots will be given the equivalent of a system of pleasure and pain.
The robots will have artificial neural networks. Rather than building complex hardware, the project coordinators plan to focus on designing software and to use mostly "off the shelf" hardware that is already available. The only part they plan to build themselves are heads with artificial faces capable of forming facial expressions.
The scheme, with a budget of 2.5 million euros, is financed by the European Commission (the executive body of the European Union) and is set to last for three years. Project participants hope to have a model of robot that can be used in homes and hospitals by the scheduled end date of the project.
Feelix Growing:
The name Feelix is derived from the words feel, interact, and express. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SOL3**
SOL3:
SOL3 is an American footwear company that designs and produces shoe accessories, ranging from insoles such as shoe lifts and crease protectors to various basic foot care items including laces, deodorizer, socks and sneaker cleaner. Based in Philadelphia and founded in late 2016, SOL3 launched the original ternary adjustable-height insoles, which can be modified to increase elevation in footwear from 1 to 2.36 inches. The company began distributing and shipping product globally in October 2016. As of December 2017, SOL3 had sold an estimated 50,000 units of the 3-Level Insole after its first year of operation. In 2018, SOL3 was named "Footwear Accessory of the Year" by Sneakscore. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FuncJS**
FuncJS:
FuncJS is an open-source, functions-only JavaScript library that aims to speed up web development in the JavaScript programming language by providing 22 pre-written functions across its two releases. FuncJS is intended to let JavaScript programmers complete the basic areas quickly, so they can focus on larger, more code-centric areas of development. As of November 2012, FuncJS is a relatively new product and is in version one of its stable release.
Functions:
In January 2013, FuncJS 2 was released, which removed many functions and renamed others. The new files were uploaded onto GitHub, though according to the repository's README file this is a temporary arrangement. Here is the new list, according to the new documentation: echo() function_exists() strlen() strpos() str_replace() up() down() store() str_rev() grab() trim() count() strip_tags() show_tags()
Importing FuncJS to webpages:
FuncJS is available in two versions, minified (compressed) and uncompressed, which are to be used separately from each other. Similar to other JavaScript libraries, FuncJS can be imported onto a webpage by including it via the "script" HTML tag. According to the documentation, users should make sure FuncJS is loaded and working properly on their webpages by checking whether the browser recognises the FuncJS object. Both versions of FuncJS are hosted by FuncJS itself, and demos suggest that FuncJS is only available through their servers. Although FuncJS allows users to download a local copy of the file to their own machines, it strongly encourages users to import FuncJS into their webpages by linking it via a URL, as "this ensures that you (the user) have any new updates to the file made available to you."
Using FuncJS in webpages:
Since functions from FuncJS are seen by the browser as regular functions, they must be written within "script" tags and are checked and executed by the browser's JavaScript engine (such as Google Chrome's V8 JavaScript engine). As shown in the documentation, functions from FuncJS are designed to fit into normal JavaScript code, therefore not breaking the "flow" of writing JavaScript. One example from the documentation website checks a given condition and displays text depending on the outcome. In that example, the "echo" function can be considered part of JavaScript, similar to PHP. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Journal of Computer Graphics Techniques**
Journal of Computer Graphics Techniques:
The Journal of Computer Graphics Techniques is a diamond open-access peer-reviewed scientific journal covering computer graphics. It was established in May 2012 when a large part of the editorial board resigned from the now-defunct Journal of Graphics Tools. The editor-in-chief is Marc Olano (University of Maryland, Baltimore County).
Journal of Computer Graphics Techniques:
The Journal of Graphics Tools was a quarterly peer-reviewed scientific journal covering computer graphics. It was established in 1996 and published by A K Peters, now part of Taylor & Francis. From 2009 to 2011 the journal was published as the Journal of Graphics, GPU, & Game Tools. In 2012, a large part of the editorial board resigned to form the open access Journal of Computer Graphics Techniques. The last editor-in-chief was Francesco Banterle (Istituto di Scienza e Tecnologie dell'Informazione). Previous editors-in-chief have been Andrew Glassner, Ronen Barzel, Doug Roble, and Morgan McGuire. The final volume was released in 2013 and the journal formally ceased with its final issue in 2015. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Importin subunit alpha-4**
Importin subunit alpha-4:
Importin subunit alpha-4, also known as karyopherin subunit alpha-3, is a protein that in humans is encoded by the KPNA3 gene.
The transport of molecules between the nucleus and the cytoplasm in eukaryotic cells is mediated by the nuclear pore complex (NPC), which consists of 60–100 proteins and is probably 120 million daltons in molecular size. Small molecules (up to 70 kDa) can pass through the nuclear pore by nonselective diffusion; larger molecules are transported by an active process. Most nuclear proteins contain short basic amino acid sequences known as nuclear localization signals (NLSs). KPNA3 encodes a protein similar to certain nuclear transport proteins of Xenopus and humans. The predicted amino acid sequence shows similarity to Xenopus importin, yeast SRP1, and human RCH1 (KPNA2), respectively. The similarities among these proteins suggest that karyopherin alpha-3 may be involved in the nuclear transport system.
Interactions:
KPNA3 has been shown to interact with KPNB1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Free-space optical communication**
Free-space optical communication:
Free-space optical communication (FSO) is an optical communication technology that uses light propagating in free space to wirelessly transmit data for telecommunications or computer networking. "Free space" means air, outer space, vacuum, or something similar. This contrasts with using solids such as optical fiber cable.
The technology is useful where the physical connections are impractical due to high costs or other considerations.
History:
Optical communications, in various forms, have been used for thousands of years. The ancient Greeks used a coded alphabetic system of signalling with torches developed by Cleoxenus, Democleitus and Polybius. In the modern era, semaphores and wireless solar telegraphs called heliographs were developed, using coded signals to communicate with their recipients.
History:
In 1880, Alexander Graham Bell and his assistant Charles Sumner Tainter created the photophone, at Bell's newly established Volta Laboratory in Washington, DC. Bell considered it his most important invention. The device allowed for the transmission of sound on a beam of light. On June 3, 1880, Bell conducted the world's first wireless telephone transmission between two buildings, some 213 meters (700 feet) apart.
Its first practical use came in military communication systems many decades later, first for optical telegraphy. German colonial troops used heliograph telegraphy transmitters during the Herero and Namaqua genocide starting in 1904, in German South-West Africa (today's Namibia), as did British, French, US and Ottoman signals.
History:
During the trench warfare of World War I, when wire communications were often cut, German signals used three types of optical Morse transmitters called Blinkgerät, the intermediate type for distances of up to 4 km (2.5 miles) in daylight and up to 8 km (5 miles) at night, using red filters for undetected communications. Optical telephone communications were tested at the end of the war, but not introduced at troop level. In addition, special Blinkgeräte were used for communication with airplanes, balloons, and tanks, with varying success.
A major technological step was to replace the Morse code by modulating optical waves in speech transmission. Carl Zeiss, Jena developed the Lichtsprechgerät 80/80 (literal translation: optical speaking device) that the German army used in their World War II anti-aircraft defense units, or in bunkers at the Atlantic Wall.
The invention of lasers in the 1960s revolutionized free-space optics. Military organizations were particularly interested and boosted their development. However, the technology lost market momentum when the installation of optical fiber networks for civilian uses was at its peak.
History:
Many simple and inexpensive consumer remote controls use low-speed communication using infrared (IR) light. This is known as consumer IR technologies.
Usage and technologies:
Free-space point-to-point optical links can be implemented using infrared laser light, although low-data-rate communication over short distances is possible using LEDs. Infrared Data Association (IrDA) technology is a very simple form of free-space optical communications. On the communications side, FSO technology is considered part of optical wireless communications applications. Free-space optics can be used for communications between spacecraft.
Usage and technologies:
Useful distances
The reliability of FSO units has always been a problem for commercial telecommunications. Consistently, studies find too many dropped packets and signal errors over small ranges (400 to 500 metres (1,300 to 1,600 ft)). This finding comes both from independent studies, such as one in the Czech Republic, and from formal internal nationwide studies, such as one conducted by MRV FSO staff. Military-based studies consistently produce longer estimates for reliability, projecting that the maximum range for terrestrial links is of the order of 2 to 3 km (1.2 to 1.9 mi). All studies agree that the stability and quality of the link is highly dependent on atmospheric factors such as rain, fog, dust and heat. Relays may be employed to extend the range of FSO communications.
Usage and technologies:
Extending the useful distance
The main reason terrestrial FSO communications have been limited to non-commercial telecommunications functions is fog. Fog consistently keeps FSO laser links over 500 metres (1,600 ft) from achieving a year-round bit error rate of 1 per 100,000. Several entities are continually attempting to overcome these key disadvantages to FSO communications and field a system with a better quality of service. DARPA has sponsored over US$130 million in research toward this effort, with the ORCA and ORCLE programs.
Other non-government groups are fielding tests to evaluate different technologies that some claim have the ability to address key FSO adoption challenges. As of October 2014, none had fielded a working system that addresses the most common atmospheric events. A back-of-the-envelope range estimate based on such attenuation figures appears below.
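This sketch (illustrative numbers only, not from the source) shows how an attenuation figure in dB/km bounds the range of a link with a fixed power margin; the 30 dB margin is an assumption, and the attenuation values match the 10 to ~100 dB/km fog figures cited later in this article:

```python
def max_range_km(link_margin_db: float, attenuation_db_per_km: float) -> float:
    """Distance at which atmospheric attenuation alone uses up the link margin.
    Ignores geometric beam-spread and pointing losses for simplicity."""
    return link_margin_db / attenuation_db_per_km

# Assume an illustrative 30 dB margin reserved for weather.
for atten in (10, 50, 100):  # dB/km: light fog up to dense fog
    print(f"{atten:>3} dB/km -> max range {max_range_km(30, atten):.2f} km")
# Dense fog (100 dB/km) limits such a link to ~300 m, consistent with
# the ~500 m reliability figures quoted above.
```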
Usage and technologies:
FSO research from 1998–2006 in the private sector totaled $407.1 million, divided primarily among four start-up companies. All four failed to deliver products that would meet telecommunications quality and distance standards: Terabeam received approximately $575 million in funding from investors such as Softbank, Mobius Venture Capital and Oakhill Venture Partners. AT&T and Lucent backed this attempt. The work ultimately failed, and the company was purchased in 2004 for $52 million (excluding warrants and options) by Falls Church, Va.-based YDI, effective June 22, 2004, and used the name Terabeam for the new entity. On September 4, 2007, Terabeam (then headquartered in San Jose, California) announced it would change its name to Proxim Wireless Corporation, and change its NASDAQ stock symbol from TRBM to PRXM.
Usage and technologies:
AirFiber received $96.1 million in funding, and never solved the weather issue. They sold out to MRV communications in 2003, and MRV sold their FSO units until 2012 when the end-of-life was abruptly announced for the Terescope series.
LightPointe Communications received $76 million in start-up funds, and eventually reorganized to sell hybrid FSO-RF units to overcome the weather-based challenges.
The Maxima Corporation published its operating theory in Science, and received $9 million in funding before permanently shutting down. No known spin-off or purchase followed this effort.
Usage and technologies:
Wireless Excellence developed and launched CableFree UNITY solutions that combine FSO with millimeter wave and radio technologies to extend distance, capacity and availability, with a goal of making FSO a more useful and practical technology.
One private company published a paper on November 20, 2014, claiming it had achieved commercial reliability (99.999% availability) in extreme fog. There is no indication this product is currently commercially available.
Usage and technologies:
Extraterrestrial
The massive advantages of laser communication in space have multiple space agencies racing to develop a stable space communication platform, with many significant demonstrations and achievements.
Operational systems
The first gigabit laser-based communication was achieved by the European Space Agency and called the European Data Relay System (EDRS) on November 28, 2014. The system is operational and is being used on a daily basis.
Demonstrations
NASA's OPALS announced a breakthrough in space-to-ground communication on December 9, 2014, uploading 175 megabytes in 3.5 seconds. Their system was also able to re-acquire tracking after the signal was lost due to cloud cover.
Usage and technologies:
In the early morning hours of Oct. 18, 2013, NASA's Lunar Laser Communication Demonstration (LLCD) made history, transmitting data from lunar orbit to Earth at a rate of 622 megabits per second (Mbit/s). LLCD was flown aboard the Lunar Atmosphere and Dust Environment Explorer satellite (LADEE), whose primary science mission was to investigate the tenuous and exotic atmosphere that exists around the moon.
Usage and technologies:
In January 2013, NASA used lasers to beam an image of the Mona Lisa to the Lunar Reconnaissance Orbiter roughly 390,000 km (240,000 mi) away. To compensate for atmospheric interference, an error correction code algorithm similar to that used in CDs was implemented.
On December 7, 2021, the Laser Communications Relay Demonstration (LCRD), another NASA project aimed at relaying data between spacecraft and ground stations, launched from the Cape Canaveral Space Force Station in Florida. LCRD is NASA's first two-way, end-to-end optical relay. One of LCRD's first operational users will be the Integrated LCRD Low-Earth Orbit User Modem and Amplifier Terminal (ILLUMA-T), a payload that will be hosted on the International Space Station. The terminal will receive high-resolution science data from experiments and instruments onboard the space station and then transfer this data to LCRD, which will transmit it to a ground station. After the data arrives on Earth, it will be delivered to mission operation centers and mission scientists.
Usage and technologies:
A two-way distance record for communication was set by the Mercury laser altimeter instrument aboard the MESSENGER spacecraft, which was able to communicate across a distance of 24 million km (15 million miles) as the craft neared Earth on a fly-by in May 2005. The previous record had been set with a one-way detection of laser light from Earth, by the Galileo probe, of 6 million km (3.7 million mi) in 1992.
Commercial use
Various satellite constellations that are intended to provide global broadband coverage, such as SpaceX Starlink, employ laser communication for inter-satellite links. This effectively creates a space-based optical mesh network between the satellites.
LEDs:
In 2001, Twibright Labs released RONJA Metropolis, an open source DIY 10 Mbit/s full duplex LED FSO over 1.4 km (0.87 mi). In 2004, a Visible Light Communication Consortium was formed in Japan. This was based on work from researchers who used a white LED-based space lighting system for indoor local area network (LAN) communications. These systems present advantages over traditional UHF RF-based systems from improved isolation between systems, the size and cost of receivers/transmitters, RF licensing laws, and by combining space lighting and communication into the same system. In January 2009, a task force for visible light communication was formed by the Institute of Electrical and Electronics Engineers working group for wireless personal area network standards known as IEEE 802.15.7. A trial was announced in 2010 in St. Cloud, Minnesota.
Amateur radio operators have achieved significantly farther distances using incoherent sources of light from high-intensity LEDs. One reported 278 km (173 mi) in 2007. However, physical limitations of the equipment used limited bandwidths to about 4 kHz. The high sensitivities required of the detector to cover such distances made the internal capacitance of the photodiode used a dominant factor in the high-impedance amplifier which followed it, thus naturally forming a low-pass filter with a cut-off frequency in the 4 kHz range. Lasers can reach very high data rates which are comparable to fiber communications.
LEDs:
Projected data rates and future data rate claims vary. A low-cost white LED (GaN-phosphor) which could be used for space lighting can typically be modulated up to 20 MHz. Data rates of over 100 Mbit/s can be easily achieved using efficient modulation schemes, and Siemens claimed to have achieved over 500 Mbit/s in 2010. Research published in 2009 used a similar system for traffic control of automated vehicles with LED traffic lights.
In September 2013, pureLiFi, the Edinburgh start-up working on Li-Fi, also demonstrated high speed point-to-point connectivity using any off-the-shelf LED light bulb. In previous work, high bandwidth specialist LEDs had been used to achieve the high data rates. The new system, the Li-1st, maximizes the available optical bandwidth for any LED device, thereby reducing the cost and improving the performance of deploying indoor FSO systems.
Engineering details:
Typically, the best scenarios for using this technology are:
LAN-to-LAN connections on campuses at Fast Ethernet or Gigabit Ethernet speeds
LAN-to-LAN connections in a city, a metropolitan area network
To cross a public road or other barriers which the sender and receiver do not own
Speedy service delivery of high-bandwidth access to optical fiber networks
Converged voice-data connection
Temporary network installation (for events or other purposes)
Reestablishing a high-speed connection quickly (disaster recovery)
As an alternative or upgrade add-on to existing wireless technologies, especially powerful in combination with auto-aiming systems, to power moving cars or a laptop while moving, or to use auto-aiming nodes to create a network with other nodes
Engineering details:
As a safety add-on for important fiber connections (redundancy)
For communications between spacecraft, including elements of a satellite constellation
For inter- and intra-chip communication
The light beam can be very narrow, which makes FSO hard to intercept, improving security. It is comparatively easy to encrypt any data traveling across the FSO connection for additional security. FSO provides vastly improved electromagnetic interference (EMI) behavior compared to using microwaves.
Engineering details:
Technical advantages:
Ease of deployment
Can be used to power devices
License-free long-range operation (in contrast with radio communication)
High bit rates
Low bit error rates
Immunity to electromagnetic interference
Full-duplex operation
Protocol transparency
Increased security when working with narrow beam(s)
No Fresnel zone necessary
Reference open source implementation
Reduced size, weight, and power consumption compared to RF antennas
Range-limiting factors:
For terrestrial applications, the principal limiting factors are:
Fog (10 to ~100 dB/km attenuation)
Beam dispersion
Atmospheric absorption
Rain
Snow
Terrestrial scintillation
Interference from background light sources (including the sun)
Shadowing
Pointing stability in wind
Pollution, such as smog
These factors cause an attenuated receiver signal and lead to a higher bit error ratio (BER). To overcome these issues, vendors found some solutions, like multi-beam or multi-path architectures, which use more than one sender and more than one receiver. Some state-of-the-art devices also have a larger fade margin (extra power, reserved for rain, smog, fog). To keep an eye-safe environment, good FSO systems have a limited laser power density and support laser classes 1 or 1M. Atmospheric and fog attenuation, which are exponential in nature, limit the practical range of FSO devices to several kilometres. However, free-space optics based on the 1550 nm wavelength have considerably lower optical loss than free-space optics using the 830 nm wavelength in dense fog conditions. FSO systems using the 1550 nm wavelength are capable of transmitting several times higher power than systems with 850 nm and are safe to the human eye (1M class). Additionally, some free-space optics, such as EC SYSTEM, ensure higher connection reliability in bad weather conditions by constantly monitoring link quality to regulate laser diode transmission power with built-in automatic gain control. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Constant curvature**
Constant curvature:
In mathematics, constant curvature is a concept from differential geometry. Here, curvature refers to the sectional curvature of a space (more precisely a manifold) and is a single number determining its local geometry. The sectional curvature is said to be constant if it has the same value at every point and for every two-dimensional tangent plane at that point. For example, a sphere is a surface of constant positive curvature.
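In symbols, with one common sign convention (a standard characterization included here for concreteness, not a formula from the source), a Riemannian manifold has constant sectional curvature κ exactly when its curvature tensor has the form

```latex
R(X,Y)Z = \kappa\,\bigl(\langle Y,Z\rangle X - \langle X,Z\rangle Y\bigr),
\qquad\text{equivalently}\qquad
R_{ijkl} = \kappa\,(g_{ik}g_{jl} - g_{il}g_{jk}).
```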
Classification:
The Riemannian manifolds of constant curvature can be classified into the following three cases:
elliptic geometry – constant positive sectional curvature
Euclidean geometry – constant vanishing sectional curvature
hyperbolic geometry – constant negative sectional curvature.
Properties:
Every space of constant curvature is locally symmetric, i.e. its curvature tensor is parallel: ∇R = 0. Every space of constant curvature is locally maximally symmetric, i.e. its local isometry group has the maximal dimension n(n+1)/2, where n is its dimension.
Conversely, there exists a similar but stronger statement: every maximally symmetric space, i.e. a space whose (global) isometry group has dimension n(n+1)/2, has constant curvature.
Properties:
(Killing–Hopf theorem) The universal cover of a manifold of constant sectional curvature is one of the model spaces:
sphere (sectional curvature positive)
Euclidean space (sectional curvature zero)
hyperbolic space (sectional curvature negative)
A space of constant curvature which is geodesically complete is called a space form, and the study of space forms is intimately related to generalized crystallography (see the article on space form for more details).
Properties:
Two space forms are isomorphic if and only if they have the same dimension, their metrics possess the same signature and their sectional curvatures are equal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Azane**
Azane:
Azanes are acyclic, saturated hydronitrogens, which means that they consist only of hydrogen and nitrogen atoms and all bonds are single bonds. They are therefore pnictogen hydrides. Because cyclic hydronitrogens are excluded by definition, the azanes comprise a homologous series of inorganic compounds with the general chemical formula NnHn+2. Each nitrogen atom has three bonds (either N-H or N-N bonds), and each hydrogen atom is joined to a nitrogen atom (H-N bonds). A series of linked nitrogen atoms is known as the nitrogen skeleton or nitrogen backbone. The number of nitrogen atoms is used to define the size of the azane (e.g. N2-azane).
Azane:
The simplest possible azane (the parent molecule) is ammonia, NH3. There is no limit to the number of nitrogen atoms that can be linked together, the only limitation being that the molecule is acyclic, is saturated, and is a hydronitrogen.
Azanes are reactive and have significant biological activity. They can be viewed as biologically active or reactive portions (functional groups) of larger molecules, which can be hung upon molecular trees.
Structure classification:
Saturated hydronitrogens can be:
linear (general formula NnHn+2), wherein the nitrogen atoms are joined in a snakelike structure
branched (general formula NnHn+2, n > 3), wherein the nitrogen backbone splits off in one or more directions
cyclic (general formula NnHn, n > 2), wherein the nitrogen backbone is linked so as to form a loop
According to IUPAC definitions, the former two are azanes, whereas the third group is called cycloazanes. Saturated hydronitrogens can also combine any of the linear, cyclic (e.g. polycyclic), and branching structures, and they are still azanes (no general formula) as long as they are acyclic (i.e., having no loops). They also have single covalent bonds between their nitrogens.
Isomerism:
Azanes with more than three nitrogen atoms can be arranged in various different ways, forming structural isomers. The simplest isomer of an azane is the one in which the nitrogen atoms are arranged in a single chain with no branches. This isomer is sometimes called the n-isomer (n for "normal", although it is not necessarily the most common). However the chain of nitrogen atoms may also be branched at one or more points. The number of possible isomers increases rapidly with the number of nitrogen atoms.
Isomerism:
Due to the low energy of inversion, unsubstituted branched azanes cannot be chiral. In addition to these isomers, the chain of nitrogen atoms may form one or more loops. Such compounds are called cycloazanes.
Nomenclature:
IUPAC nomenclature systematically names nitrogen compounds by identifying hydronitrogen chains, analogous to alkane nomenclature. Unbranched, saturated hydronitrogen chains are named with a Greek numerical prefix for the number of nitrogens and the suffix "-azane" for hydronitrogens with single bonds, or "-azene" for those with double bonds.
Linear azanes
Straight-chain azanes are sometimes indicated by the prefix n- (for normal) where a non-linear isomer exists. Although this is not strictly necessary, the usage is common in cases where there is an important difference in properties between the straight-chain and branched-chain isomers.
Nomenclature:
The members of the series (in terms of number of nitrogen atoms) are named as follows:
ammonia, NH3 – one nitrogen and three hydrogens
diazane (or hydrazine), N2H4 – two nitrogens and four hydrogens
triazane, N3H5 – three nitrogens and five hydrogens
Azanes with three or more nitrogen atoms are named by adding the suffix -azane to the appropriate numerical multiplier prefix. Hence, triazane, N3H5; tetrazane or tetraazane, N4H6; pentazane or pentaazane, N5H7; hexazane or hexaazane, N6H8; etc. The prefix is generally Greek, with the exceptions of nonaazane, which has a Latin prefix, and undecaazane and tridecaazane, which have mixed-language prefixes. A short sketch generating these formulas follows.
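A small Python sketch (the helper name and the rounding are illustrative assumptions) that generates the formula and approximate molar mass of the n-th member of the NnHn+2 series:

```python
ATOMIC_MASS = {"N": 14.007, "H": 1.008}  # g/mol, standard atomic weights

def azane(n: int) -> tuple[str, float]:
    """Formula and molar mass of the acyclic azane with n nitrogens (NnHn+2)."""
    h = n + 2
    formula = f"NH{h}" if n == 1 else f"N{n}H{h}"
    mass = n * ATOMIC_MASS["N"] + h * ATOMIC_MASS["H"]
    return formula, round(mass, 3)

for n in (1, 2, 3):
    print(azane(n))  # ('NH3', 17.031), ('N2H4', 32.046), ('N3H5', 47.061)
```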
Hazards:
Ammonia is explosive when mixed with air (15 – 25%). Other lower azanes can also form explosive mixtures with air. The lighter liquid azanes are highly flammable; this risk increases with the length of the nitrogen chain. One consideration for detection and risk control is that ammonia is lighter than air, creating the possibility of accumulation on ceilings.
Related and derived hydronitrogens:
Related to the azanes are a homologous series of functional groups, side-chains, or radicals with the general chemical formula NnHn+1. Examples include azanyl (NH2) and hydrazinyl. This group is generally abbreviated with the symbol N. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Relative Gain Array**
Relative Gain Array:
The Relative Gain Array (RGA) is a classical, widely used method for determining the best input–output pairings for multivariable process control systems. It has many practical open-loop and closed-loop control applications and is relevant to analyzing many fundamental steady-state closed-loop system properties, such as stability and robustness.
Definition:
Given a linear time-invariant (LTI) system represented by a nonsingular matrix G, the relative gain array (RGA) is defined as R = Φ(G) = G ∘ (G⁻¹)ᵀ,
where ∘ is the elementwise Hadamard product of the two matrices, and the transpose operator (no conjugate) is necessary even for complex G. Each element R_ij gives a scale-invariant (unit-invariant) measure of the dependence of output j on input i. A small numerical sketch follows.
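A minimal NumPy sketch of this definition; the 2×2 gain matrix is an illustrative example, not a value from the source:

```python
import numpy as np

def rga(G: np.ndarray) -> np.ndarray:
    """Relative gain array: Hadamard (elementwise) product of G with the
    transpose (no conjugate) of its inverse."""
    return G * np.linalg.inv(G).T

# Illustrative 2x2 steady-state gain matrix.
G = np.array([[0.878, -0.864],
              [1.082, -1.096]])

R = rga(G)
print(R)              # large diagonal entries favor the diagonal pairing
print(R.sum(axis=0))  # every column sums to 1
print(R.sum(axis=1))  # every row sums to 1
```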
Properties:
The following are some of the linear-algebra properties of the RGA: Each row and column of Φ(G) sums to 1.
Properties:
For nonsingular diagonal matrices D and E, Φ(G) = Φ(DGE). For permutation matrices P and Q, PΦ(G)Q = Φ(PGQ). Lastly, Φ(G⁻¹) = Φ(G)ᵀ = Φ(Gᵀ).
The second property says that the RGA is invariant with respect to nonzero scalings of the rows and columns of G, which is why the RGA is invariant with respect to the choice of units on different input and output variables. The third property says that the RGA is consistent with respect to permutations of the rows or columns of G.
Generalizations:
The RGA is often generalized in practice to be used when G is singular, e.g., non-square, by replacing the inverse of G with its Moore–Penrose inverse (pseudoinverse). However, it has been shown that the Moore–Penrose pseudoinverse fails to preserve the critical scale-invariance property of the RGA (#2 above) and that the unit-consistent (UC) generalized inverse must therefore be used. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Polytomous choice**
Polytomous choice:
In economics, polytomous choice is a setting (model) with more than two choices, in contrast to dichotomous choice. The term polychotomous is also common in the earlier research literature; however, polytomous is the more technically correct spelling. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Truncated order-4 hexagonal tiling**
Truncated order-4 hexagonal tiling:
In geometry, the truncated order-4 hexagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t{6,4}. A secondary construction tr{6,6} is called a truncated hexahexagonal tiling with two colors of dodecagons.
Constructions:
There are two uniform constructions of this tiling, first from [6,4] kaleidoscope, and a lower symmetry by removing the last mirror, [6,4,1+], gives [6,6], (*662).
Related polyhedra and tiling:
Symmetry
The dual of the tiling represents the fundamental domains of (*662) orbifold symmetry. From [6,6] (*662) symmetry, there are 15 small-index subgroups (12 unique) obtained by mirror removal and alternation operators. Mirrors can be removed if their branch orders are all even; removing a mirror cuts the neighboring branch orders in half. Removing two mirrors leaves a half-order gyration point where the removed mirrors met. In these images, fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors. The subgroup of index 8, [1+,6,1+,6,1+] (3333), is the commutator subgroup of [6,6].
Related polyhedra and tiling:
A larger subgroup is constructed as [6,6*], index 12, by removing the gyration points of (6*3), becoming (*333333).
The symmetry can be doubled to 642 symmetry by adding a mirror to bisect the fundamental domain. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**FOXL2**
FOXL2:
Forkhead box protein L2 is a protein that in humans is encoded by the FOXL2 gene.
Function:
FOXL2 (OMIM 605597) is a transcription factor belonging to the forkhead box (FOX) superfamily, characterized by the forkhead box/winged-helix DNA-binding domain. FOXL2 plays an important role in ovarian development and function. In postnatal ovaries FOXL2 regulates granulosa cell differentiation and supports the growth of the pre-ovulatory follicles during adult life.
In addition, the FOXL2 protein will prevent the formation of testes by suppressing expression of SOX9. In mice, FOXL2 is also expressed in pituitary cells where it is required for FSH expression.
Regulation:
FOXL2 has several post-translational modifications that modulate its stability, subcellular localization and pro-apoptotic activity. By a yeast-two-hybrid screening, 10 novel protein partners of FOXL2 were discovered. The interactions were confirmed by co-immunoprecipitation experiments between FOXL2 and CXXC4 (IDAX), CXXC5 (RINF/WID), CREM, GMEB1 (P96PIF), NR2C1 (TR2), SP100, RPLP1, BAF (BANF1), XRCC6 (KU70) and SIRT1.
Clinical significance:
Sex determination
FOXL2 is involved in sex determination. FOXL2 knockout in mature mouse ovaries appears to cause the ovary's somatic cells to transdifferentiate to the equivalent cell types ordinarily found in the testes. Polled Intersex Syndrome in goats is caused by a biallelic loss-of-function in FOXL2 transcription and leads to in utero female-to-male sex reversal.
Clinical significance:
Eyebrow thickness
Several SNPs (single nucleotide polymorphisms) in the genomic region 3q23 overlapping the forkhead box L2 (FOXL2) gene were found to be associated with eyebrow thickness. In Europeans, East Asians, and South Asians, the derived allele is above ~90% frequency, and in Africans, it is above ~75%. Native Americans, particularly Peruvians, have a relatively high frequency of the homozygous ancestral allele, which significantly decreases eyebrow thickness. All primates and archaic humans share the ancestral allele.
Clinical significance:
Blepharophimosis–ptosis–epicanthus inversus syndrome
Mutations in this gene are a cause of blepharophimosis, ptosis, epicanthus inversus syndrome (BPES) and/or premature ovarian failure (POF) 3. Predicting the occurrence of POF based on the nature of the missense mutations in FOXL2 was a medical challenge. However, a correlation between the transcriptional activity of FOXL2 variants and the type of BPES was found. Moreover, by studying the effects of natural and artificial mutations in the forkhead domain of FOXL2, a clear correlation between the orientation of amino-acid side chains in the DNA-binding domain and transcriptional activity was found, providing the first (in silico) predictive tool for the effects of FOXL2 missense mutations.
Clinical significance:
Adult granulosa cell tumors
A missense mutation in the FOXL2 gene, C134W, is typically found in adult granulosa cell tumors, but not in other ovarian cancers nor in juvenile granulosa cell tumors.
Endometriosis
In addition to ovarian expression of FOXL2, recent studies suggest that overexpression of FOXL2, in addition to activin A, is implicated in endometriosis.
Clinical significance:
Other deregulations
One study found that FOXL2 is required for SF-1-induced ovarian AMH regulation through interactions between the FOXL2 protein and SF-1; a mutated FOXL2 could not interact with SF-1 normally and thus could not regulate ovarian AMH as normal.
In a knockout study in mice, the granulosa cells of the ovaries failed to undergo the squamous-to-cuboidal transition, which led to the arrest of folliculogenesis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Brown HT**
Brown HT:
Brown HT, also called Chocolate Brown HT, Food Brown 3, and C.I. 20285, is a brown synthetic coal tar diazo dye.
Brown HT:
When used as a food dye, its E number is E155. It is used to substitute cocoa or caramel as a colorant. It is used mainly in chocolate cakes, but can also be found in desserts, cookies, candy, cheeses, teas, yogurts, jams, chocolate drinks, ice creams, fruit products, fish, wafers, breakfast cereals, and other products.
It is approved for use by the European Union. It is banned in Australia, Austria, Belgium, Denmark, France, Germany, Norway, Sweden, Switzerland, and the United States. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Linux namespaces**
Linux namespaces:
Namespaces are a feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources while another set of processes sees a different set of resources. The feature works by giving each set of processes its own namespace for a set of resources, with identical names in different namespaces referring to distinct underlying resources. Resources may exist in multiple namespaces. Examples of such resources are process IDs, host-names, user IDs, file names, some names associated with network access, and inter-process communication.
Linux namespaces:
Namespaces are a fundamental aspect of containers in Linux.
The term "namespace" is often used for a type of namespace (e.g. process ID) as well as for a particular space of names.
A Linux system starts out with a single namespace of each type, used by all processes. Processes can create additional namespaces and also join different namespaces.
History:
Linux namespaces were inspired by the wider namespace functionality used heavily throughout Plan 9 from Bell Labs. Linux namespaces originated in 2002 in the 2.4.19 kernel, with work on the mount namespace kind. Additional namespaces were added beginning in 2006, and more have been added since.
Adequate container support functionality was finished in kernel version 3.8 with the introduction of user namespaces.
Namespace kinds:
Since kernel version 5.6, there are 8 kinds of namespaces. Namespace functionality is the same across all kinds: each process is associated with a namespace and can only see or use the resources associated with that namespace, and descendant namespaces where applicable. This way each process (or process group thereof) can have a unique view on the resources. Which resource is isolated depends on the kind of namespace that has been created for a given process group.
Namespace kinds:
Mount (mnt)
Mount namespaces control mount points. Upon creation, the mounts from the current mount namespace are copied to the new namespace, but mount points created afterwards do not propagate between namespaces (using shared subtrees, it is possible to propagate mount points between namespaces).
The clone flag used to create a new namespace of this type is CLONE_NEWNS - short for "NEW NameSpace". This term is not descriptive (it does not tell which kind of namespace is to be created) because mount namespaces were the first kind of namespace and designers did not anticipate there being any others.
Namespace kinds:
Process ID (pid) The PID namespace provides processes with an independent set of process IDs (PIDs) from other namespaces. PID namespaces are nested, meaning that when a new process is created it will have a PID for each namespace from its current namespace up to the initial PID namespace. Hence the initial PID namespace is able to see all processes, albeit with different PIDs than those seen from within other namespaces.
Namespace kinds:
The first process created in a PID namespace is assigned the process ID number 1 and receives most of the same special treatment as the normal init process, most notably that orphaned processes within the namespace are attached to it. This also means that the termination of this PID 1 process will immediately terminate all processes in its PID namespace and any descendants.
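The PID-1 behavior can be observed with a short C sketch using clone(2) with CLONE_NEWPID (requires CAP_SYS_ADMIN; the 1 MiB stack size is an arbitrary choice for illustration):

```c
#define _GNU_SOURCE
#include <sched.h>     /* clone, CLONE_NEWPID */
#include <signal.h>    /* SIGCHLD */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>    /* getpid */
#include <sys/wait.h>  /* waitpid */

static int child(void *arg)
{
    (void)arg;
    /* Inside the new PID namespace this process is PID 1 and acts as a
     * minimal init: its exit tears down the whole namespace. */
    printf("child sees itself as PID %ld\n", (long)getpid());
    return 0;
}

int main(void)
{
    char *stack = malloc(1024 * 1024);         /* stack for the clone child */
    if (!stack) { perror("malloc"); return EXIT_FAILURE; }

    /* The stack grows downward on most architectures, so pass the top. */
    pid_t pid = clone(child, stack + 1024 * 1024,
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return EXIT_FAILURE; }

    printf("parent sees the child as PID %ld\n", (long)pid);
    waitpid(pid, NULL, 0);
    free(stack);
    return EXIT_SUCCESS;
}
```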
Namespace kinds:
Network (net) Network namespaces virtualize the network stack. On creation, a network namespace contains only a loopback interface.
Each network interface (physical or virtual) is present in exactly one namespace and can be moved between namespaces.
Each namespace will have a private set of IP addresses, its own routing table, socket listing, connection tracking table, firewall, and other network-related resources.
Destroying a network namespace destroys any virtual interfaces within it and moves any physical interfaces within it back to the initial network namespace.
Namespace kinds:
Inter-process Communication (ipc) IPC namespaces isolate processes from SysV-style inter-process communication. This prevents processes in different IPC namespaces from using, for example, the SHM family of functions to establish a range of shared memory between the two processes. Instead, each process can use the same identifiers for a shared memory region and yet produce two distinct regions.
Namespace kinds:
UTS UTS (UNIX Time-Sharing) namespaces allow a single system to appear to have different host and domain names to different processes. "When a process creates a new UTS namespace ... the hostname and domain of the new UTS namespace are copied from the corresponding values in the caller's UTS namespace."
Namespace kinds:
User ID (user) User namespaces are a feature providing both privilege isolation and user-identification segregation across multiple sets of processes, available since kernel 3.8. With administrative assistance it is possible to build a container with seemingly administrative rights without actually giving elevated privileges to user processes. Like PID namespaces, user namespaces are nested, and each new user namespace is considered a child of the user namespace that created it.
Namespace kinds:
A user namespace contains a mapping table converting user IDs from the container's point of view to the system's point of view. This allows, for example, the root user to have user ID 0 inside the container while actually being treated as user ID 1,400,000 by the system for ownership checks. A similar table is used for group ID mappings and ownership checks.
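A minimal C sketch of such a mapping, assuming an unprivileged process: after unshare(CLONE_NEWUSER), the process writes a one-entry "inside outside count" line to its own /proc/self/uid_map, mapping UID 0 inside the namespace to its real UID outside.

```c
#define _GNU_SOURCE
#include <fcntl.h>    /* open, O_WRONLY */
#include <sched.h>    /* unshare, CLONE_NEWUSER */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>   /* getuid, write, close */

int main(void)
{
    uid_t outer_uid = getuid();   /* real UID outside the namespace */

    /* Creating a user namespace needs no special privileges. */
    if (unshare(CLONE_NEWUSER) == -1) {
        perror("unshare(CLONE_NEWUSER)");
        return EXIT_FAILURE;
    }

    /* Map UID 0 inside to our real UID outside, for exactly one ID:
     * the uid_map format is "inside outside count". */
    char map[64];
    snprintf(map, sizeof map, "0 %ld 1\n", (long)outer_uid);

    int fd = open("/proc/self/uid_map", O_WRONLY);
    if (fd == -1 || write(fd, map, strlen(map)) == -1) {
        perror("writing uid_map");
        return EXIT_FAILURE;
    }
    close(fd);

    /* We now appear as root inside, while the kernel still treats us as
     * the original unprivileged user for ownership checks. */
    printf("uid inside the namespace: %ld\n", (long)getuid());
    return EXIT_SUCCESS;
}
```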
Namespace kinds:
To facilitate privilege isolation of administrative actions, each namespace is considered owned by a user namespace, based on the active user namespace at the moment of creation. A user with administrative privileges in the appropriate user namespace is allowed to perform administrative actions within that other namespace kind. For example, if a process has administrative permission to change the IP address of a network interface, it may do so as long as its own user namespace is the same as (or an ancestor of) the user namespace that owns the network namespace. Hence the initial user namespace has administrative control over all namespace kinds in the system.
Namespace kinds:
Control group (cgroup) Namespace The cgroup namespace kind hides the identity of the control group of which a process is a member. A process in such a namespace, checking which control group any process is part of, would see a path that is actually relative to the control group set at creation time, hiding its true control-group position and identity. This namespace kind has existed since March 2016 in Linux 4.6.
Namespace kinds:
Time Namespace The time namespace allows processes to see different system times in a way similar to the UTS namespace. It was proposed in 2018 and landed in Linux 5.6, which was released in March 2020.
Proposed namespaces syslog namespace The syslog namespace was proposed by Rui Xiang, an engineer at Huawei, but was not merged into the Linux kernel. systemd implemented a similar feature called "journal namespace" in February 2020.
Implementation details:
The kernel assigns each process a symbolic link per namespace kind in /proc/<pid>/ns/. The inode number pointed to by this symlink is the same for each process in this namespace. This uniquely identifies each namespace by the inode number pointed to by one of its symlinks.
Reading the symlink via readlink returns a string containing the namespace kind name and the inode number of the namespace.
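A short C sketch of this identity check (the choice of the net namespace here is arbitrary): two processes are in the same namespace exactly when the inode numbers in these strings match.

```c
#include <stdio.h>
#include <unistd.h>   /* readlink */

/* Print the namespace identity string, e.g. "net:[4026531840]".
 * Comparing this string (its inode number) across processes tells
 * whether they share the network namespace. */
int main(void)
{
    char buf[256];
    ssize_t n = readlink("/proc/self/ns/net", buf, sizeof buf - 1);
    if (n == -1) {
        perror("readlink");
        return 1;
    }
    buf[n] = '\0';    /* readlink does not NUL-terminate */
    printf("%s\n", buf);
    return 0;
}
```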
Syscalls Three syscalls can directly manipulate namespaces: clone creates a new process, with flags specifying which kinds of new namespace the child should be created in; unshare allows a process (or thread) to disassociate parts of its execution context that are currently being shared with other processes (or threads); setns enters the namespace specified by a file descriptor.
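A hedged C sketch of setns, joining the network namespace of a process named on the command line (this requires sufficient privileges in the owning user namespace):

```c
#define _GNU_SOURCE
#include <fcntl.h>    /* open, O_RDONLY */
#include <sched.h>    /* setns, CLONE_NEWNET */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Open the target process's network-namespace file. */
    char path[64];
    snprintf(path, sizeof path, "/proc/%s/ns/net", argv[1]);

    int fd = open(path, O_RDONLY);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    /* CLONE_NEWNET asserts that fd refers to a network namespace;
     * passing 0 would accept any namespace kind. */
    if (setns(fd, CLONE_NEWNET) == -1) {
        perror("setns");
        return EXIT_FAILURE;
    }

    puts("now inside the target network namespace");
    return EXIT_SUCCESS;
}
```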
Destruction If a namespace is no longer referenced, it is deleted; the handling of the contained resources depends on the namespace kind. Namespaces can be referenced in three ways: by a process belonging to the namespace; by an open file descriptor to the namespace's file (/proc/<pid>/ns/<ns-kind>); or by a bind mount of the namespace's file (/proc/<pid>/ns/<ns-kind>).
Adoption:
Various container software use Linux namespaces in combination with cgroups to isolate their processes, including Docker and LXC.
Other applications, such as Google Chrome, make use of namespaces to isolate their own processes that are at risk of attack from the internet. There is also an unshare wrapper in util-linux, which exposes the same functionality as a command-line tool for running a program in freshly created namespaces. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Urodynamic testing**
Urodynamic testing:
Urodynamic testing or urodynamics is a study that assesses how the bladder and urethra are performing their job of storing and releasing urine. Urodynamic tests can help explain symptoms such as: incontinence; frequent urination; sudden, strong urges to urinate but nothing comes out; problems starting a urine stream; painful urination; problems emptying the bladder completely (vesical tenesmus, detrusor failure); and recurrent urinary tract infections. Urodynamic tests are usually performed in urology, gynecology, OB/GYN, internal medicine, and primary care offices. Urodynamics will provide the physician with the information necessary to diagnose the cause and nature of a patient's incontinence, thus giving the best treatment options available. Urodynamics is typically conducted by urologists or urogynecologists.
Purpose of testing:
The tests are most often arranged for men with enlarged prostate glands, and for women with incontinence that has either failed conservative treatment or requires surgery.
Purpose of testing:
Probably the most important group in whom these tests are performed are those with a neuropathy such as spinal injury. In some of these patients (dependent on the level of the lesion), the micturition reflex can be essentially out of control and the detrusor pressures generated can be life-threatening. Symptoms reported by the patient are an unreliable guide to the underlying dysfunction of the lower urinary tract. The purpose of urodynamics is to provide objective confirmation of the pathology that a patient's symptoms would suggest. For example, a patient complaining of urinary urgency (or rushing to the toilet), with increased frequency of urination can have overactive bladder syndrome. The cause of this might be detrusor overactivity, in which the bladder muscle (the detrusor) contracts unexpectedly during bladder filling. Urodynamics can be used to confirm the presence of detrusor overactivity, which may help guide treatment. An overactive detrusor can be associated with urge incontinence. The American Urogynecologic Society does not recommend that urodynamics are part of initial diagnosis for uncomplicated overactive bladder.
Specific tests:
These tests may be as simple as urinating behind a curtain while a doctor listens, but are usually more extensive in western medicine. A typical urodynamic test takes about 30 minutes to perform. It involves the use of a small catheter used to fill the bladder and record measurements. What is done depends on the presenting problem, but some of the common tests conducted are: Post-void residual volume: Most tests begin with the insertion of a urinary catheter/transducer following complete bladder emptying by the patient. The urine volume is measured (this shows how efficiently the bladder empties). High volumes (greater than 180 ml) may be associated with urinary tract infections. A volume of greater than 50 ml in children has been described as constituting post-void residual urine. High levels can be associated with overflow incontinence.
Specific tests:
The urine is often sent for microscopy and culture to check for infection.
Uroflowmetry: Free uroflowmetry measures how fast the patient can empty his/her bladder. Pressure uroflowmetry again measures the rate of voiding, but with simultaneous assessment of bladder and rectal pressures. It helps demonstrate the reasons for difficulty in voiding, for example bladder muscle weakness or obstruction of the bladder outflow.
Multichannel cystometry: measures the pressure in the rectum and in the bladder, using two pressure catheters, to deduce the presence of contractions of the bladder wall, during bladder filling, or during other provocative maneuvers. The strength of the urethra can also be tested during this phase, using a cough or Valsalva maneuver, to confirm genuine stress incontinence.
Urethral pressure profilometry: measures strength of sphincter contraction.
Electromyography (EMG): measurement of electrical activity in the bladder neck.
Assessing the "tightness" along the length of the urethra.
Fluoroscopy (moving video x-rays) of the bladder and bladder neck during voiding.
Standardization:
Men with benign prostatic hyperplasia are influenced by urination position: sitting improves three measures, namely the maximum urinary flow rate (Qmax), voiding time (TQ) and post-void residual volume (PVR). Qmax, in particular, improves by an amount similar to that achievable with four alpha-1 blockers, medicines commonly prescribed for BPH. This information offers a non-pharmaceutical way of managing the condition, and shows that urodynamic measurements should use a standardized position to avoid misleading results. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shuf**
Shuf:
shuf is a command-line utility included in the textutils package of GNU Core Utilities for creating a standard output consisting of random permutations of the input.
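Conceptually, shuf performs a uniform random permutation of its input lines; the following C sketch of the Fisher–Yates shuffle illustrates the idea on a fixed string array. This is not coreutils' actual implementation, and the rand()/time() seeding is demo-quality only.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Fisher–Yates shuffle: each of the n! permutations is equally likely,
 * assuming an unbiased random index (rand() % k is slightly biased). */
static void shuffle(const char **items, size_t n)
{
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);   /* 0 <= j <= i */
        const char *tmp = items[i];
        items[i] = items[j];
        items[j] = tmp;
    }
}

int main(void)
{
    const char *lines[] = { "alpha", "bravo", "charlie", "delta" };
    size_t n = sizeof lines / sizeof lines[0];

    srand((unsigned)time(NULL));   /* demo-quality seeding only */
    shuffle(lines, n);

    for (size_t i = 0; i < n; i++)
        puts(lines[i]);            /* one permuted "input line" per row */
    return 0;
}
```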
The version of shuf bundled in GNU coreutils was written by Paul Eggert. It is not a part of POSIX. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Cortisol awakening response**
Cortisol awakening response:
The cortisol awakening response (CAR) is an increase between 38% and 75% in cortisol levels peaking 30–45 minutes after awakening in the morning in some people. This rise is superimposed upon the late-night rise in cortisol which occurs before awakening. While its purpose is uncertain, it may be linked to the hippocampus' preparation of the hypothalamic-pituitary-adrenal axis (HPA) in order to face anticipated stress.
Description:
Shortly after awakening, a sharp 38–75% (average 50%) increase occurs in the blood level of cortisol in about 77% of healthy people of all ages. The average level of salivary cortisol upon waking is roughly 15 nmol/L; 30 minutes later it may be 23 nmol/L, though there are wide variations. The cortisol awakening response reaches a maximum approximately 30 minutes after awakening though it may still be heightened by 34% an hour after waking. The pattern of this response to waking is relatively stable for any individual. Twin studies show its pattern is largely genetically determined since there is a heritability of 0.40 for the mean cortisol increase after awakening and 0.48 for the area under the cortisol rise curve. Normally, the highest cortisol secretion happens in the second half of the night with peak cortisol production occurring in the early morning. Following this, cortisol levels decline throughout the day with lowest levels during the first half of the night. Cortisol awakening response is independent of this circadian variation in HPA axis activity; it is superimposed upon the daily rhythm of HPA axis activity, and it seems to be linked specifically to the event of awakening. Cortisol awakening response provides an easy measure of the reactivity capacity of the HPA axis.
Sleep factors:
Waking up earlier in the morning increases the response.
Sleep factors:
Shift work: nurses working on morning shifts with very early awakening (between 4:00–5:30 a.m.) had a greater and prolonged cortisol awakening response than those on the late day shift (between 6:00–9:00 a.m.) or the night shift (between 11:00 a.m.–2:00 p.m.). However another study found that this greater response could be attributed to increased stress and impaired sleep quality before an early work shift ("when these factors were taken into account, the difference in CAR related to experimental condition was no longer significant").
Sleep factors:
Naps: students taking a nap of one to two hours in the early evening hours (between 6:45–8:30 p.m.) had no cortisol awakening response, suggesting cortisol awakening response only occurs after night sleep.
Waking up in the light: cortisol awakening response is larger when people wake up in light rather than darkness.
Noise: there is no cortisol rise after nights with traffic-like low-frequency noise.
Alarm clock vs. spontaneous waking: there is no difference on days when people woke up spontaneously or used the alarm clock.
Aspirin has been found to reduce the response, probably through an action upon ACTH.
Individual factors:
Morning types show a larger cortisol awakening response than evening types.
Those with fatigue show a low rise and flat plateau.
Those in pain: the response is reduced the more people are in pain.
The lower a person's socioeconomic status, the higher their response. This might link to the material hardship that occurs with low socioeconomic status.
Stress:
Cortisol awakening response is larger for those: Waking up to a working day compared to a work-free weekend day.
Experiencing chronic stress and worry.
Overloaded with work.
In acute stress. People taking part in a competitive ballroom dance tournament had an increased cortisol awakening response on the morning of their competition day but not their non-competition one.
Worn down by burnout: some studies find an increased response, though other researchers find a decreased or normal response.
Neurology:
Cortisol is released from the adrenal glands following activation by ACTH release from the pituitary. The ACTH release creating the cortisol awakening response is strongly inhibited after intake of low-dose dexamethasone, a synthetic glucocorticoid; this inhibition allows detection of the negative feedback from circulating cortisol that controls the ACTH-secreting cells of the pituitary.
Neurology:
In the hypothalamic-pituitary-adrenal axis, the pituitary release of ACTH is regulated by the hypothalamus. This occurs through the hypothalamus's production of the hypophysiotropic hormone corticotropin-releasing hormone, the production of which is subject to circadian influence and the day/night cycle. In the cortisol awakening response, the hypothalamic-pituitary-adrenal axis is controlled by the hippocampus. For example, cortisol awakening response is absent in those with bilateral or unilateral hippocampal damage and hippocampal atrophy. Those with severe amnesia, and thus with presumed damage to the temporal lobe, also do not have it. Those with a larger hippocampus have a greater response. It is also plausible that the suprachiasmatic nucleus, the light-sensitive biological clock, plays a role in cortisol awakening response regulation.
Function:
The function of cortisol awakening response is unknown but it has been suggested to link with a stress-related preparation in regard to the upcoming day by the hippocampus. One hypothesis is: "that the cortisol rise after awakening may accompany an activation of prospective memory representations at awakening enabling individual's orientation about the self in time and space as well as anticipation of demands of the upcoming day... it is tempting to speculate that for the CAR, anticipation of these upcoming demands may be essential in regulating the CAR magnitude for the particular day. The hippocampus is, besides its established role in long-term memory consolidation, involved in the formation of a cohesive construct and representation of the outside world within the central nervous system processing information about space, time and relationships of environmental cues. This puts the hippocampus in a pivotal position for the regulation of the CAR." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Unimodality**
Unimodality:
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object.
Unimodal probability distribution:
In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more than one mode it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". Figure 1 illustrates normal distributions, which are unimodal. Other examples of unimodal distributions include the Cauchy distribution, Student's t-distribution, chi-squared distribution and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability.
Unimodal probability distribution:
Figure 2 and Figure 3 illustrate bimodal distributions.
Other definitions Other definitions of unimodality in distribution functions also exist.
Unimodal probability distribution:
In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, as well as any other distribution in which the maximum value is achieved over a range of values, e.g. the trapezoidal distribution. Usually this definition allows for a discontinuity at the mode; usually in a continuous distribution the probability of any single value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode.
Unimodal probability distribution:
Criteria for unimodality can also be defined through the characteristic function of the distribution or through its Laplace–Stieltjes transform. Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of differences of the probabilities. A discrete distribution with a probability mass function, {p_n : n = …, −1, 0, 1, …}, is called unimodal if the sequence …, p_{−2} − p_{−1}, p_{−1} − p_0, p_0 − p_1, p_1 − p_2, … has exactly one sign change (when zeroes don't count).
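The sign-change criterion is straightforward to check mechanically; a small C sketch, assuming the mass function is given as a finite array (zero differences are skipped, as the definition requires):

```c
#include <stdio.h>

/* Returns 1 if the finite pmf p[0..n-1] is unimodal under the
 * sign-change criterion: the differences p[k+1] - p[k] change sign
 * exactly once, with zero differences not counted. */
static int is_unimodal(const double *p, int n)
{
    int changes = 0;
    int prev = 0;   /* sign of the last nonzero difference seen */
    for (int k = 0; k + 1 < n; k++) {
        double d = p[k + 1] - p[k];
        int s = (d > 0) - (d < 0);   /* -1, 0, or +1 */
        if (s != 0) {
            if (prev != 0 && s != prev)
                changes++;
            prev = s;
        }
    }
    return changes == 1;
}

int main(void)
{
    /* Binomial(4, 1/2): rises then falls, so exactly one sign change. */
    double binom[] = { 0.0625, 0.25, 0.375, 0.25, 0.0625 };
    printf("unimodal: %d\n", is_unimodal(binom, 5));
    return 0;
}
```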
Unimodal probability distribution:
Uses and results One reason for the importance of distribution unimodality is that it allows for several important results. Several inequalities are given below which are only valid for unimodal distributions. Thus, it is important to assess whether or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on multimodal distribution.
Unimodal probability distribution:
Inequalities Gauss's inequality A first important result is Gauss's inequality. Gauss's inequality gives an upper bound on the probability that a value lies more than any given distance from its mode. This inequality depends on unimodality.
Vysochanskiï–Petunin inequality A second is the Vysochanskiï–Petunin inequality, a refinement of the Chebyshev inequality. The Chebyshev inequality guarantees that in any probability distribution, "nearly all" the values are "close to" the mean value. The Vysochanskiï–Petunin inequality refines this to even nearer values, provided that the distribution function is continuous and unimodal. Further results were shown by Sellke and Sellke.
Mode, median and mean Gauss also showed in 1823 that for a unimodal distribution σ ≤ ω ≤ 2σ and |ν − μ| ≤ √(3/4) ω, where the median is ν, the mean is μ and ω is the root mean square deviation from the mode.
It can be shown for a unimodal distribution that the median ν and the mean μ lie within (3/5)^(1/2) ≈ 0.7746 standard deviations of each other. In symbols, |ν − μ|/σ ≤ √(3/5), where |·| is the absolute value.
In 2020, Bernard, Kazzi, and Vanduffel generalized the previous inequality by deriving the maximum distance between the symmetric quantile average (q_α + q_{1−α})/2 and the mean; the bound takes three different forms depending on the range of the probability level α, with the final case covering α ∈ (0, 1/6].
Unimodal probability distribution:
It is worth noting that the maximum distance is minimized at α = 0.5 (i.e., when the symmetric quantile average is equal to q_{0.5} = ν, the median), which indeed motivates the common choice of the median as a robust estimator for the mean. Moreover, when α = 0.5, the bound is equal to √(3/5), which is the maximum distance between the median and the mean of a unimodal distribution.
Unimodal probability distribution:
A similar relation holds between the median and the mode θ: they lie within 3^(1/2) ≈ 1.732 standard deviations of each other: |ν − θ|/σ ≤ √3.
It can also be shown that the mean and the mode lie within √3 standard deviations of each other: |μ − θ|/σ ≤ √3.
Unimodal probability distribution:
Skewness and kurtosis Rohatgi and Szekely claimed that the skewness and kurtosis of a unimodal distribution are related by the inequality γ² − κ ≤ 6/5 = 1.2, where κ is the kurtosis and γ is the skewness. Klaassen, Mokveld, and van Es showed that this only applies in certain settings, such as the set of unimodal distributions where the mode and mean coincide. They derived a weaker inequality which applies to all unimodal distributions: γ² − κ ≤ 186/125 = 1.488. This bound is sharp, as it is reached by the equal-weights mixture of the uniform distribution on [0,1] and the discrete distribution at {0}.
Unimodal function:
As the term "modal" applies to data sets and probability distribution, and not in general to functions, the definitions above do not apply. The definition of "unimodal" was extended to functions of real numbers as well. A common definition is as follows: a function f(x) is a unimodal function if for some value m, it is monotonically increasing for x ≤ m and monotonically decreasing for x ≥ m. In that case, the maximum value of f(x) is f(m) and there are no other local maxima.
Unimodal function:
Proving unimodality is often hard. One way consists in using the definition of that property, but it turns out to be suitable for simple functions only. A general method based on derivatives exists, but it does not succeed for every function despite its simplicity.
Examples of unimodal functions include quadratic polynomial functions with a negative quadratic coefficient, tent map functions, and more.
Unimodal function:
The above is sometimes referred to as strong unimodality, from the fact that the monotonicity implied is strong monotonicity. A function f(x) is a weakly unimodal function if there exists a value m for which it is weakly monotonically increasing for x ≤ m and weakly monotonically decreasing for x ≥ m. In that case, the maximum value f(m) can be reached for a continuous range of values of x. An example of a weakly unimodal function which is not strongly unimodal is every other row in Pascal's triangle.
Unimodal function:
Depending on context, unimodal function may also refer to a function that has only one local minimum, rather than maximum. For example, local unimodal sampling, a method for doing numerical optimization, is often demonstrated with such a function. It can be said that a unimodal function under this extension is a function with a single local extremum.
One important property of unimodal functions is that the extremum can be found using search algorithms such as golden section search, ternary search or successive parabolic interpolation.
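As an illustration, a C sketch of ternary search for the maximum of a unimodal function (the parabola f and the tolerance 1e-9 are arbitrary choices for the example):

```c
#include <stdio.h>

/* Example unimodal function: a downward parabola with maximum at x = 2. */
static double f(double x) { return -(x - 2.0) * (x - 2.0) + 5.0; }

/* Ternary search: on a unimodal f, comparing f at two interior points
 * lets us discard a third of [lo, hi] on each iteration. */
static double ternary_max(double lo, double hi, double eps)
{
    while (hi - lo > eps) {
        double m1 = lo + (hi - lo) / 3.0;
        double m2 = hi - (hi - lo) / 3.0;
        if (f(m1) < f(m2))
            lo = m1;    /* the maximum cannot lie in [lo, m1) */
        else
            hi = m2;    /* the maximum cannot lie in (m2, hi] */
    }
    return (lo + hi) / 2.0;
}

int main(void)
{
    double x = ternary_max(-10.0, 10.0, 1e-9);
    printf("argmax ~= %.6f, max ~= %.6f\n", x, f(x));
    return 0;
}
```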
Other extensions:
A function f(x) is "S-unimodal" (often referred to as an "S-unimodal map") if its Schwarzian derivative is negative for all x ≠ c, where c is the critical point. In computational geometry, if a function is unimodal it permits the design of efficient algorithms for finding the extrema of the function. A more general definition, applicable to a function f(X) of a vector variable X, is that f is unimodal if there is a one-to-one differentiable mapping X = G(Z) such that f(G(Z)) is convex. Usually one would want G(Z) to be continuously differentiable with nonsingular Jacobian matrix.
Other extensions:
Quasiconvex functions and quasiconcave functions extend the concept of unimodality to functions whose arguments belong to higher-dimensional Euclidean spaces. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**HaloTag**
HaloTag:
HaloTag is a self-labeling protein tag. It is a 297 residue protein (33 kDa) derived from a bacterial enzyme, designed to covalently bind to a synthetic ligand. The bacterial enzyme can be fused to various proteins of interest. The synthetic ligand is chosen from a number of available ligands in accordance with the type of experiments to be performed. This bacterial enzyme is a haloalkane dehalogenase, which acts as a hydrolase and is designed to facilitate visualization of the subcellular localization of a protein of interest, immobilization of a protein of interest, or capture of the binding partners of a protein of interest within its biochemical environment. The HaloTag is composed of two covalently bound segments including a haloalkane dehalogenase and a synthetic ligand of choice. These synthetic ligands consist of a reactive chloroalkane linker bound to a functional group. Functional groups can either be biotin (can be used as an affinity tag) or can be chosen from five available fluorescent dyes including Coumarin, Oregon Green, Alexa Fluor 488, diAcFAM, and TMR. These fluorescent dyes can be used in the visualization of either living or chemically fixed cells.
Mechanism:
The HaloTag is a hydrolase with a genetically modified active site that specifically binds the reactive chloroalkane linker with an increased rate of ligand binding. The reaction that forms the bond between the protein tag and the chloroalkane linker is fast and essentially irreversible under physiological conditions due to the terminal chlorine of the linker portion. In this reaction, nucleophilic attack on the chloroalkane reactive linker causes displacement of the halogen by an amino acid residue, which results in the formation of a covalent alkyl-enzyme intermediate. In the wild-type hydrolase, this intermediate would then be hydrolyzed by an amino acid residue, regenerating the enzyme after the reaction. However, in the modified haloalkane dehalogenase (HaloTag), the reaction intermediate cannot proceed through the subsequent reaction because it cannot be hydrolyzed, due to the mutation in the enzyme. This causes the intermediate to persist as a stable covalent adduct with no associated back reaction.
Uses:
HaloTagged fusion proteins can be expressed using standard recombinant protein expression techniques. Furthermore, there are several commercial vectors available that just require insertion of a gene of interest. Since bacterial dehalogenases are relatively small and the reactions described above are foreign to mammalian cells, there is no interference by endogenous mammalian metabolic reactions. Once the fusion protein has been expressed, there is a wide range of potential areas of experimentation including enzymatic assays, cellular imaging, protein arrays, determination of sub-cellular localization, and many additional possibilities. Recently, HaloTag has been engineered to create hybrid protein + small molecule biosensors of neuronal activity. These sensors undergo a conformational change in response to calcium concentration spikes during neuronal firing; this conformational change modulates the conformation of a HaloTag-bound dye molecule. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Curse of knowledge**
Curse of knowledge:
The curse of knowledge is a cognitive bias that occurs when an individual, communicating with other individuals, assumes that those other individuals have the background and depth of knowledge needed to understand. This bias is also called by some authors the curse of expertise. For example, in a classroom setting, teachers may have difficulty if they cannot put themselves in the position of the student. A knowledgeable professor might no longer remember the difficulties that a young student encounters when learning a new subject for the first time. This curse of knowledge also explains the danger behind thinking about student learning based on what appears best to faculty members, as opposed to what has been verified with students.
History of concept:
The term "curse of knowledge" was coined in a 1989 Journal of Political Economy article by economists Colin Camerer, George Loewenstein, and Martin Weber. The aim of their research was to counter the "conventional assumptions in such (economic) analyses of asymmetric information in that better-informed agents can accurately anticipate the judgement of less-informed agents".Such research drew from Baruch Fischhoff's work in 1975 surrounding hindsight bias, a cognitive bias that knowing the outcome of a certain event makes it seem more predictable than may actually be true. Research conducted by Fischhoff revealed that participants did not know that their outcome knowledge affected their responses, and, if they did know, they could still not ignore or defeat the effects of the bias. Study participants could not accurately reconstruct their previous, less knowledgeable states of mind, which directly relates to the curse of knowledge. This poor reconstruction was theorized by Fischhoff to be because the participant was "anchored in the hindsightful state of mind created by receipt of knowledge". This receipt of knowledge returns to the idea of the curse proposed by Camerer, Loewenstein, and Weber: a knowledgeable person cannot accurately reconstruct what a person, be it themselves or someone else, without the knowledge would think, or how they would act. In his paper, Fischhoff questions the failure to empathize with ourselves in less knowledgeable states, and notes that how well people manage to reconstruct perceptions of lesser informed others is a crucial question for historians and "all human understanding".This research led the economists Camerer, Loewenstein, and Weber to focus on the economic implications of the concept and question whether the curse harms the allocation of resources in an economic setting. The idea that better-informed parties may suffer losses in a deal or exchange was seen as something important to bring to the sphere of economic theory. Most theoretical analyses of situations where one party knew less than the other focused on how the lesser-informed party attempted to learn more information to minimize information asymmetry. However, in these analyses, there is an assumption that better-informed parties can optimally exploit their information asymmetry when they, in fact, cannot. People cannot utilize their additional, better information, even when they should in a bargaining situation.For example, two people are bargaining over dividing money or provisions. One party may know the size of the amount being divided while the other does not. However, to fully exploit their advantage, the informed party should make the same offer regardless of the amount of material to be divided. But informed parties actually offer more when the amount to be divided is larger. Informed parties are unable to ignore their better information, even when they should.
Experimental evidence:
A 1990 experiment by a Stanford University graduate student, Elizabeth Newton, illustrated the curse of knowledge in the results of a simple task. A group of subjects were asked to "tap" out well known songs with their fingers, while another group tried to name the melodies. When the "tappers" were asked to predict how many of the "tapped" songs would be recognized by listeners, they would always overestimate. The curse of knowledge is demonstrated here as the "tappers" are so familiar with what they were tapping that they assumed listeners would easily recognize the tune. A study by Susan Birch and Paul Bloom involving Yale University undergraduate students used the curse of knowledge concept to explain the idea that the ability of people to reason about another person's actions is compromised by the knowledge of the outcome of an event. The perception the participant had of the plausibility of an event also mediated the extent of the bias. If the event was less plausible, knowledge was not as much of a "curse" as when there was a potential explanation for the way the other person could act. However, a recent replication study found that this finding was not reliably reproducible across seven experiments with large sample sizes, and the true effect size of this phenomenon was less than half of that reported in the original findings. Therefore, it is suggested that "the influence of plausibility on the curse of knowledge in adults appears to be small enough that its impact on real-life perspective-taking may need to be reevaluated." Other researchers have linked the curse of knowledge bias with false-belief reasoning in both children and adults, as well as theory of mind development difficulties in children.
Experimental evidence:
Related to this finding is the phenomenon experienced by players of charades: the actor may find it frustratingly hard to believe that their teammates keep failing to guess the secret phrase, known only to the actor, conveyed by pantomime.
Implications:
In the Camerer, Loewenstein and Weber article, it is mentioned that the setting closest in structure to the market experiments done would be underwriting, a task in which well-informed experts price goods that are sold to a less-informed public.
Implications:
Investment bankers value securities, experts taste cheese, store buyers observe jewelry being modeled, and theater owners see movies before they are released. They then sell those goods to a less-informed public. If they suffer from the curse of knowledge, high-quality goods will be overpriced and low-quality goods underpriced relative to optimal, profit-maximizing prices; prices will reflect characteristics (e.g., quality) that are unobservable to uninformed buyers ("you get what you pay for"). The curse of knowledge has a paradoxical effect in these settings. By making better-informed agents think that their knowledge is shared by others, the curse helps alleviate the inefficiencies that result from information asymmetries (a better informed party having an advantage in a bargaining situation), bringing outcomes closer to complete information. In such settings, the curse on individuals may actually improve social welfare.
Applications:
Marketing Economists Camerer, Loewenstein, and Weber first applied the curse of knowledge phenomenon to economics, in order to explain why and how the assumption that better-informed agents can accurately anticipate the judgments of lesser-informed agents is not inherently true. They also sought to support the finding that sales agents who are better informed about their products may, in fact, be at a disadvantage against other, less-informed agents when selling their products. The reason is said to be that better-informed agents fail to ignore the privileged knowledge that they possess and are thus "cursed" and unable to sell their products at a value that more naïve agents would deem acceptable.
Applications:
Education It has also been suggested that the curse of knowledge could contribute to the difficulty of teaching. The curse of knowledge means that it could be potentially ineffective, if not harmful, to think about how students are viewing and learning material by asking the perspective of the teacher as opposed to what has been verified by students. The teacher already has the knowledge that they are trying to impart, but the way that knowledge is conveyed may not be the best for those who do not already possess the knowledge.
Applications:
The curse of expertise may be counterproductive for learners acquiring new skills. This is important because the predictions of experts can influence educational equity and training as well as the personal development of young people, not to mention the allocation of time and resources to scientific research and crucial design decisions. Effective teachers must predict the issues and misconceptions that people will face when learning a complex new skill or understanding an unfamiliar concept. This should also encompass the teachers’ recognizing their own or each other's bias blind spots.
Applications:
Steven Pinker, a Canadian-born American cognitive scientist and psychologist, listed several problems with the ways English is used in academic settings: abstract language unrelated to reality; clumsy transitions between related topics; inept interpretations of external sources; using clichés and catchphrases whose true meaning is obscure; creating "zombie nouns" from verbs or adjectives (e.g. "verb+ization"); compulsive "hedging", through overuse of expressions such as "somewhat", "comparatively", and "to a certain degree". Quality assurance (QA) is a way of circumventing the curse of experience by applying comprehensive quality management techniques. Professionals by definition get paid for technically well-defined work, so quality control procedures may be required which encompass the processes employed, the training of the expert and the ethos of the trade or profession of the expert. Some experts (lawyers, physicians, etc.) require a licence, which may include a requirement to undertake ongoing professional development (i.e. obtain OPD credits issued by collegiate universities or professional associations – see also normative safety). Decoding the Disciplines is another way of coping with the curse of knowledge in educational settings. It intends to increase student learning by narrowing the gap between expert and novice thinking resulting from the curse of knowledge. The process seeks to make explicit the tacit knowledge of experts and to help students master the mental actions they need for success in particular disciplines.
Applications:
Academics Academics are usually employed in research and development activities that are less well understood than those of professionals, and therefore submit themselves to peer review assessment by other appropriately qualified individuals.
Applications:
Computer programming It can also show up in computer programming, where a programmer fails to produce understandable code (for example, fails to comment it) because it seems obvious at the time of writing. But a few months later they themselves may have no idea why the code exists. The design of user interfaces is another example from the software industry, whereby software engineers (who have a deep understanding of the domain the software is written for) create user interfaces that they themselves can understand and use, but end users, who do not possess the same level of knowledge, find the user interfaces difficult to use and navigate. This problem has become so widespread in software design that the mantra "You are not the user" has become ubiquitous in the user experience industry to remind practitioners that their knowledge and intuitions do not always match those of the end users they are designing for.
Applications:
To-do lists Another example is writing a to-do list and viewing it at a future time but no longer understanding the entries, because the knowledge present at the time of writing has since been lost.
Popular culture:
The difficulty experienced people may encounter is exemplified fictionally by Dr. Watson in discourses with the insightful detective Sherlock Holmes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Deposit slip**
Deposit slip:
A deposit slip is a form supplied by a bank for a depositor to fill out, designed to document in categories the items included in the deposit transaction. The categories include the type of item and, if it is a cheque, where it is from, such as a local bank or a state if the bank is not local. The teller keeps the deposit slip along with the deposit (cash and cheques), and provides the depositor with a receipt. Slips can be filled in away from the bank, for example in a store, which makes paying in more convenient, and they accompany the money being deposited. Pay-in slips encourage the sorting of cash and coins, are filled in and signed by the person who deposited the money, and some tear off from a record that is also filled in by the depositor. Deposit slips are also called deposit tickets and come in a variety of designs. They are signed by the depositor if the depositor is cashing some of the accompanying check and depositing the rest.
Cash received:
On a deposit slip, "cash received" means that part of the amount on a cheque that is to be withdrawn as cash. The remainder is deposited into the person's account.
Completion of slips:
The description column on deposit slips has been used for over 100 years in the U.S. to notate where the bank should send the check to reclaim the money; this was done at first by notating in words the name of the bank or its location. The bank's transit number, also called the bank number, later began to be used instead of words. The bank number was written as the upper line of a fraction, with the bottom number referring to the central bank branch. Some people wrote just the top of the fraction, others tried writing the entire fraction. After the introduction of automated sorting of checks, many people wrote nothing at all in the deposit slip's description column. Some people put the check writers' names in the description column. There was a tendency in the early teens of the 21st century to write in the number of the check being deposited without mentioning who the check was from. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GPD1L**
GPD1L:
GPD1L is a human gene. The protein encoded by this gene contains a glycerol-3-phosphate dehydrogenase (NAD+) motif and shares 72% sequence identity with GPD1.
Structure:
GPD1L contains the following domains: an N-terminal NAD+ consensus binding site; a site homologous to the cardiac sodium channel SCN5A; and a C-terminal lysine-206 residue.
Tissue distribution:
Northern blot analysis detected a single GPD1L transcript in all tissues examined except liver. Highest expression was in heart and skeletal muscle.
Disease linkage:
Mutations in the GPD1L gene are associated with the Brugada syndrome and sudden infant death syndrome. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Packet Design**
Packet Design:
Packet Design is an Austin, Texas-based network performance management software company credited with pioneering route analytics technology. This network monitoring technology analyzes routing protocols and structures in meshed IP networks by participating as a peer in the network to passively “listen” to Layer 3 routing protocol exchanges between routers for the purpose of network discovery, mapping, real-time monitoring and routing diagnostics. The company maintains offices in San Jose, CA, Austin, TX, Dubai, UAE, and Pune, India. In June 2018, Ciena announced it would acquire Packet Design LLC and the transaction closed July 2, 2018.
History:
Packet Design, Inc. was co-founded in 2003 by Judy Estrin and Bill Carrico, network computing executives who had started seven companies together during their careers. Estrin served as chief technology officer of Cisco Systems from 1998 to 2000, immediately prior to founding the company. Packet Design was acquired by Lone Rock Technology Group, the private equity firm of former NetQoS CEO Joel Trammell, in March 2013. With the acquisition, Packet Design, Inc. became Packet Design, LLC and Scott Sherwood was named CEO. For a number of years, Hewlett Packard offered OEM-licensed versions of Packet Design's products, integrated alongside their HP Network Node Manager, called HP Route Analytics Management Software (RAMS).
History:
Packet Design terminated the agreement and RAMS customers now receive support from Packet Design. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dive start**
Dive start:
A dive start is the action begun at the start of a swimming race. In most strokes, the swimmer jumps off the diving blocks after hearing the starting signal. However, in a backstroke event, the swimmers start in the water. All dive starts are followed by a streamline, just as after a turn. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PANTHER**
PANTHER:
In bioinformatics, the PANTHER (protein analysis through evolutionary relationships) classification system is a large curated biological database of gene/protein families and their functionally related subfamilies that can be used to classify and identify the function of gene products. PANTHER is part of the Gene Ontology Reference Genome Project designed to classify proteins and their genes for high-throughput analysis.
PANTHER:
The project consists of both manual curation and bioinformatics algorithms. Proteins are classified according to family (and subfamily), molecular function, biological process and pathway. It is one of the databases feeding into the European Bioinformatics Institute's InterPro database. Application of PANTHER: The most important application of PANTHER is to accurately infer the function of uncharacterized genes from any organism based on their evolutionary relationships to genes with known functions. By combining gene function, ontology, pathways and statistical analysis tools, PANTHER enables biologists to analyze large-scale, genome-wide data obtained from current advanced technologies, including sequencing, proteomics and gene expression experiments.
PANTHER:
In short, using the data and tools on PANTHER, users are able to: Obtain information about a particular gene of interest.
Discover protein families and subfamilies, pathways, biological processes, molecular functions and cellular components.
Create lists of genes related to a particular protein family/subfamily, molecular function, biological process or pathway.
Analyze lists of genes, proteins or transcripts.
PANTHER history:
1998: Project was launched at Molecular Application Group.
1999: Acquired by Celera Genomics.
2000: PANTHER 1 released in Celera Discovery Systems (CDS).
2001: PANTHER 2 released; it was used in the annotation of the first published human genome, by Celera.
2002: PANTHER 3 released. PANTHER annotations are integrated in FlyBase. Moved to ABI.
2003: PANTHER 4 released with the public release of PANTHER Classification System.
2005: PANTHER 5 released with PANTHER Pathway and analysis tools. Established collaboration with InterPro.
2006: PANTHER 6 released. Moved to SRI.
2010: PANTHER 7 released.
2011: Moved to USC.
2012: PANTHER 8 released.
2014: PANTHER 9 released.
2015: PANTHER 10 released.
2016: PANTHER 11 released.
Phylogenetic tree:
In PANTHER there is a phylogenetic tree for each of the protein families. The annotation of the tree is done based on the following criteria: Each node is annotated with gene attributes including "subfamily membership", "protein class" and "gene function". These attributes are heritable. Swiss-Prot protein names are usually used to name subfamilies. Since PANTHER is part of the GO reference genome project, Gene Ontology (GO) terms are used for gene function. PANTHER/X ontology terms are used for protein class.
Phylogenetic tree:
Each internal node is annotated with evolutionary events such as "speciation", "gene duplication" and "horizontal gene transfer". To generate phylogenetic trees, PANTHER uses the GIGA algorithm. GIGA uses a species tree to guide tree construction; on every iteration it attempts to reconcile the gene tree with the species tree in terms of speciation and gene duplication events.
PANTHER library data generation process:
The process for data generation is divided into three steps: family clustering, phylogenetic tree building, and annotation of tree nodes. Family clustering Sequence set PANTHER trees depict gene family evolution across a broad selection of genomes which are fully sequenced. PANTHER keeps one sequence per gene so that the tree can represent the events that occurred over the course of evolution, i.e. duplication and speciation.
PANTHER library data generation process:
The PANTHER genome set is selected based on the following criteria: The set should include the major experimental model organisms; this will assist in transferring functional information to organisms which are less studied.
The set should include a broad taxonomic range of other genomes, preferably fully sequenced and annotated; this will assist in relating them to the experimental model organisms.
Family clusters The following are the requirements for a family cluster in PANTHER: The family must contain at least five members, among which at least one gene has to be from a GO reference genome.
In order to support phylogenetic inference, the family must contain a high quality sequence alignment.
The multiple sequence alignment is assessed by its length: at least 30 sites must be aligned across 75% or more of family members.
PANTHER library data generation process:
Phylogenetic tree building For each family, the multiple sequences are aligned using the default settings of MAFFT, and any column in which fewer than 75% of the sequences are aligned is removed. This data is then used as input for the GIGA program. The output trees from GIGA are labelled: each internal node is labelled according to whether the divergence event was a speciation or a gene duplication.
PANTHER library data generation process:
Annotation of tree nodes Each node in a PANTHER tree is annotated with heritable attributes. Heritable attributes can be of three types: subfamily membership, gene function, and protein class membership. These node annotations apply to the primary sequences that were used to construct the tree. In applying these annotations, a simple evolutionary principle is used: each node's annotation is propagated to its descendant nodes.
PANTHER components:
PANTHER/LIB (PANTHER library): The library consists of a collection of books, each of which represents a protein family. There are a hidden Markov model (HMM), a multiple sequence alignment (MSA) and a family tree for each protein family in the library. PANTHER/X (PANTHER index): The index contains an abbreviated ontology which assists in summarizing and navigating molecular functions and biological processes. Although the PANTHER/X ontology has a hierarchical organization, it is a directed acyclic graph, so, when it is biologically justified, child categories appear under more than one parent. PANTHER/X has been mapped to GO and arranged in a different way to facilitate large-scale analysis of proteins.
PANTHER Pathways:
PANTHER includes 176 pathways drawn using the CellDesigner tool. PANTHER pathways can be downloaded in the following file formats.
Systems Biology Markup Language (SBML); Systems Biology Graphical Notation (SBGN-ML); BioPAX
Recent versions of PANTHER and their statistics and updates:
Version 6.0 Version 6 uses UniProt sequences as training sequences. There are 19,132 UniProt training sequences directly associated with pathway components. This version has ~1,500 reactions in 130 pathways, and the number of pathways associated with subfamilies was expanded. PANTHER became a member of the InterPro Consortium, and the availability of PANTHER data was improved (the HMMs can be downloaded by FTP). The PANTHER/LIB version 6.1 contains 221,609 UniProt sequences from 53 organisms, grouped into 5,546 families and 24,561 subfamilies. (2006) Version 7.0 In this version the phylogenetic trees represent speciation and gene duplication events, making identification of gene orthologs possible. There is more support for alternative database identifiers for genes, proteins and microarray probes. PANTHER version 7 uses the SBGN standard to depict biological pathways. It includes gene sets from 48 genomes. To define the new families, and in collaboration with the European Bioinformatics Institute's InterPro group, approximately 1,000 families from non-animal genomes were added in this version. The sources of gene sets included model organism databases, Ensembl genome annotation and Entrez Gene. Since this version, a stable identifier is assigned to each node in the tree; this identifier is a nine-digit number with the prefix PTN (standing for PANTHER Tree Node). (2009) Version 8.0 (2012) The reference proteome set maintained by the UniProt resource is used in this version of PANTHER, and so the source of gene sets is UniProt. It includes 82 genomes (approximately double compared with version 7) and 991,985 protein-coding genes, of which 642,319 genes (64.75%) have been used for family clusters. The PANTHER website was redesigned to facilitate common user workflows.
Recent versions of PANTHER and their statistics and updates:
Version 9.0 (2014) This version contains 7,180 protein families, divided into 52,768 functionally distinct protein subfamilies, and includes the genomes of 85 organisms.
Version 11.1 (2016) This version contains 78442 subfamilies and 1,064,054 genes annotated.
PANTHER website:
The home page of the PANTHER website shows several folder tabs for major workflows, including gene list analysis, browse, sequence search, cSNP scoring, and keyword search. The details about each of these workflows are provided below.
Gene list analysis This tab is selected by default because it is the most frequently used option. You can enter valid IDs in the box or upload a file, then select the list type, choose the organism of interest and select the type of analysis.
PANTHER website:
A practical example: Let's try this workflow using an example of a small gene list containing three genes: AKT1, AKT2 and AKT3. We first type these gene names in the box, separated by commas (or spaces). We select "ID list" as the list type, "Homo sapiens" (human) as the organism, and "Functional classification viewed in gene list" as the type of operation; then click submit. This returns the following information for all three genes: Gene IDs from Ensembl and protein IDs from UniProt: for this example, you should see "ENSG00000142208" and "P31749".
PANTHER website:
Mapped IDs: these are simply the names of the genes which have been mapped to your query (AKT1, AKT2 and AKT3). Gene names, gene symbols, and the orthologs: the orthologs are clickable, and by clicking on them you can see the list of other organisms and their IDs as well as the type of ortholog ("LDO" for least diverged ortholog, "O" for other, more diverged orthologs, and "P" for paralogs).
PANTHER website:
PANTHER family and subfamily: This will give you the name of family and subfamily for your genes. There are some links, e.g. a link to the family tree, which is clickable. Finally you will have the genes from different species assigned to that subfamily. In this example you have the PANTHER subfamily "PTHR24352:SF30" for AKT1.
GO molecular function: This tells you what the functions of your query gene are; e.g. AKT1 has protein kinase activity and can selectively and non-covalently interact with calcium ions, calmodulin, and phospholipids.
GO biological process: By looking at this column, you will understand what biological processes the gene is involved in; e.g. AKT1 has a role in gamete generation, apoptosis, the cell cycle, etc.
GO cellular component: It tells you where in the cell you can find your query protein. In our example, the information is not available, but if you try another example (such as the gene p53), you will see some cellular components such as "nucleus", "cytoplasm", "chromosomes", etc.
PANTHER protein class: This gives you the names and IDs of the PANTHER protein class for each gene; e.g. AKT1 is under the PANTHER protein class "non-receptor serine/threonine protein kinase" with class ID "PC00167". You can also see its parent and child lineage.
Pathways: A list of clickable names of the pathways that contain your query gene will be shown; e.g. AKT1 is involved in several pathways such as "Hypoxia response via HIF", "Apoptosis signaling pathway", "PI3 kinase pathway", etc.
Species: This is the name of the species you have chosen; in this case we chose "Homo sapiens".
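The gene list workflow above can also be driven from a script rather than the web form. The following Python sketch is illustrative only: `PANTHER_ENDPOINT` is a hypothetical placeholder, not a documented PANTHER URL, and the payload field names simply mirror the choices made on the web form.

```python
# A minimal sketch of scripting the gene-list workflow under the stated
# assumptions; check the current PANTHER documentation for any real
# programmatic interface before relying on this.
import requests

GENES = ["AKT1", "AKT2", "AKT3"]   # the example query list from above
ORGANISM = "Homo sapiens"          # organism selected on the site

# Hypothetical placeholder, not a documented PANTHER URL.
PANTHER_ENDPOINT = "https://example.org/panther/genelist"

payload = {
    "ids": ",".join(GENES),        # IDs separated by commas, as on the site
    "listType": "ID list",
    "organism": ORGANISM,
    "operation": "Functional classification viewed in gene list",
}

response = requests.post(PANTHER_ENDPOINT, data=payload, timeout=30)
response.raise_for_status()
print(response.text)               # e.g. rows with ENSG00000142208 / P31749
```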
Browse Using this folder tab and selecting the ontology you are interested in, you can browse the different classifications. It is also possible to select more than one ontology; in this case, the results will meet the criteria from all the selections. You can also see the associations between ontology terms and PANTHER families, subfamilies and training sequences.
PANTHER website:
Sequence search By pasting a protein sequence into the Sequence Search box, PANTHER will search it against a library of family and subfamily HMMs and return the subfamily that best matches the sequence. Clicking on the subfamily name gives more detail, e.g. the genes related to that subfamily and the ability to view the subfamily within the larger family tree. By downloading the PANTHER scoring tool from the download page, you can score many sequences against the PANTHER HMMs.
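Batch scoring can be approximated locally with HMMER's hmmscan, under the assumption that the downloaded PANTHER library is in HMMER format (older releases used a different HMM toolkit, so check the release notes). The official PANTHER scoring tool adds its own post-processing, so treat this sketch only as a rough stand-in for its first step; the file names are assumptions.

```python
# Hedged sketch: score query proteins against a local PANTHER HMM library
# with hmmscan, then keep the best-scoring hit per query, mimicking the
# website's "best matching subfamily" behaviour.
import subprocess

HMM_LIBRARY = "panther.hmm"    # assumed path to the downloaded HMM library
QUERY_FASTA = "queries.fasta"  # protein sequences to classify
TABLE_OUT = "hits.tbl"         # per-target tabular summary from hmmscan

subprocess.run(
    ["hmmscan", "--tblout", TABLE_OUT, HMM_LIBRARY, QUERY_FASTA],
    check=True,
)

# hmmscan's --tblout columns: target name, target accession, query name,
# query accession, full-sequence E-value, full-sequence bit score, ...
best = {}
with open(TABLE_OUT) as fh:
    for line in fh:
        if line.startswith("#"):
            continue
        fields = line.split()
        target, query, score = fields[0], fields[2], float(fields[5])
        if query not in best or score > best[query][1]:
            best[query] = (target, score)

for query, (target, score) in best.items():
    print(f"{query}\t{target}\tbit score {score:.1f}")
```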
PANTHER website:
cSNP scoring Using this folder tab, you are able to do evolution analysis of coding SNPs. You must enter a protein sequence in the first box and the substitutions relative to this protein sequence in the second box; this substitutions should be entered in the standard amino acid substitution format, e.g. L46P. PANTHER will use an alignment of evolutionarily related proteins, calculate the substitution position-specific evolutionary conservation (subPSEC) and estimate the likelihood of this nonsynonymous coding SNP to lead a functional effect on the protein. This tool uses data from PANTHER version 6.1 for technical reasons. One of the new features of PANTHER is that if you want to analyze a lot of SNPs, you can go to the download page and download the PANTHER Coding Snp Analysis tool.
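Because the substitutions must line up with the sequence pasted in the first box, it can save a round trip to validate them locally first. This plain-Python sketch involves no PANTHER code; it assumes the 1-based residue numbering that the L46P format implies.

```python
# Minimal validator for L46P-style substitution strings: the reference
# residue must match the protein sequence at the stated 1-based position.
import re

# The 20 standard one-letter amino acid codes.
SUB_PATTERN = re.compile(r"^([ACDEFGHIKLMNPQRSTVWY])(\d+)([ACDEFGHIKLMNPQRSTVWY])$")

def validate_substitution(protein: str, substitution: str) -> bool:
    """Return True if e.g. 'L46P' names a real residue change in `protein`."""
    match = SUB_PATTERN.match(substitution)
    if not match:
        return False
    ref, pos, alt = match.group(1), int(match.group(2)), match.group(3)
    # Position is 1-based relative to the sequence entered in the first box.
    return 1 <= pos <= len(protein) and protein[pos - 1] == ref and ref != alt

# Toy sequence used purely for illustration; residue 46 is 'L'.
seq = "MKTLLLTLVVVTIVCLDLGYT" + "L" * 30
print(validate_substitution(seq, "L46P"))   # True
print(validate_substitution(seq, "A46P"))   # False: reference mismatch
```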
PANTHER website:
Keyword search Entering a search term in the keyword search box, PANTHER returns the number of records matching your keyword for genes, families, pathways and ontology terms. You can filter the results by specifying the species of interest or by refining the search using other criteria. To view the details of a gene, click on its gene identifier. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Homocapsaicin**
Homocapsaicin:
Homocapsaicin is a capsaicinoid and an analog and congener of capsaicin in chili peppers (Capsicum). Like capsaicin it is an irritant. Homocapsaicin accounts for about 1% of the total capsaicinoid mixture and has about half the pungency of capsaicin. Pure homocapsaicin is a lipophilic, colorless, odorless, crystalline-to-waxy compound. On the Scoville scale it has 8,600,000 SHU (Scoville heat units). Homocapsaicin isolated from chili pepper has been found in two isomeric forms, both with a carbon-carbon double bond at the 6 position (numbered from the amide carbon) on the 10-carbon acyl chain. One isomer has an additional carbon, a methyl group, at the 8 position and the other has a methyl group at the 9 position. Homocapsaicin (6-ene-8-methyl) is the more abundant isomer. Homocapsaicin with the double bond at the 7 position has never been found in nature, though its structure is widely reported on the Internet and in the scientific literature. Details of this misidentification have been published. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Associative magic square**
Associative magic square:
An associative magic square is a magic square for which each pair of numbers symmetrically opposite to the center sums to the same value. For an n × n square filled with the numbers from 1 to n², this common sum must equal n² + 1. These squares are also called associated magic squares, regular magic squares, regmagic squares, or symmetric magic squares.
Examples:
For instance, the Lo Shu Square – the unique 3 × 3 magic square – is associative: each pair of opposite cells forms a line of the square together with the center cell, so the sum of the two opposite cells equals the sum of a line minus the value of the center, regardless of which pair is chosen. The 4 × 4 magic square from Albrecht Dürer's 1514 engraving Melencolia I – also found in a 1765 letter of Benjamin Franklin – is likewise associative, with each pair of opposite numbers summing to 17.
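The defining property is easy to verify mechanically: flatten the square and pair each cell with its point reflection through the center. A short Python check, using the Lo Shu square as input:

```python
# Check the associative property: every pair of cells symmetric about the
# center must sum to n**2 + 1 (for a square filled with 1..n**2).
def is_associative(square):
    n = len(square)
    flat = [x for row in square for x in row]
    target = n * n + 1
    # Cell i and cell (n*n - 1 - i) are point reflections through the center.
    return all(flat[i] + flat[n * n - 1 - i] == target for i in range(n * n))

# The Lo Shu square: lines sum to 15, opposite pairs to 3**2 + 1 = 10.
lo_shu = [
    [4, 9, 2],
    [3, 5, 7],
    [8, 1, 6],
]
print(is_associative(lo_shu))  # True
```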
Existence and enumeration:
The numbers of possible associative n × n magic squares for n = 3, 4, 5, ..., counting two squares as the same whenever they differ only by a rotation or reflection, are 1, 48, 48544, 0, 1125154039419854784, ... (sequence A081262 in the OEIS). The number zero for n = 6 is an example of a more general phenomenon: associative magic squares do not exist for values of n that are singly even (equal to 2 modulo 4). Every associative magic square of even order forms a singular matrix, but associative magic squares of odd order can be singular or nonsingular. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Holoscenes**
Holoscenes:
Holoscenes is a multi-format work of installation art by Los Angeles artist Lars Jan.
Description:
Holoscenes features a single totemic, aquarium-like sculpture sited in public space, standing thirteen feet tall and viewable from 360 degrees. In an allegory of the rising sea levels produced by climate change, the aquarium is animated by a powerful custom hydraulic system that pumps up to 15 tons of water in and out in less than a minute, creating a series of mini-floods to which the performers must adapt. Over the course of several hours, a series of performers play variously the guitar, sell fruit, don an abaya, uncoil a garden hose, and perform other familiar tasks as the water rises and falls around each in turn.
Exhibitions:
The piece has been shown or performed at the Commonwealth Games Festival, Australia (2018); Times Square Arts & World Science Festival, New York (2017); Art Abu Dhabi, NYU Abu Dhabi, UAE (2016); the London's Burning Festival (2016); Art Basel, Miami Beach (2015); the John and Mable Ringling Museum, Sarasota (2015); Nuit Blanche, Toronto (2014); the Pasadena Museum of California Art (2015); and the Carrefour international de théâtre (2022). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |