**Christopher Burge** Christopher Burge: Christopher Boyce Burge is Professor of Biology and Biological Engineering at the Massachusetts Institute of Technology. Education: Burge completed his Bachelor of Science at Stanford University in 1990 and continued graduate studies in computational biology at Stanford University, gaining his PhD in 1997 under the supervision of Samuel Karlin. During his time at Stanford he developed the algorithms behind GENSCAN, a program used in gene prediction, for example in the initial analysis of the Human Genome Project. His PhD thesis was titled Identification of genes in human genomic DNA. Research: From 1997 to 1999 Burge worked as a postdoc in the laboratory of Phillip Allen Sharp, working in the fields of RNA splicing and molecular evolution. Burge joined the Massachusetts Institute of Technology in 1999 as a Bioinformatics Fellow. He became Assistant Professor in 2002, Associate Professor in 2004, was tenured in 2006, and was promoted to full Professor in 2010. He has been an Associate Member of the Broad Institute since 2004. His current research interests include genomics, RNA splicing and microRNA regulation. Burge has also served on the editorial boards of the academic journals RNA, PLOS Computational Biology, BMC Bioinformatics and BMC Genomics. Awards: In 2001 he was awarded the Overton Prize for Computational Biology by the International Society for Computational Biology. He was awarded a Searle Scholar Award in 2003 for his research in the computational biology of gene expression. In 2007 he was awarded the Schering-Plough Research Institute Award (now known as the ASBMB Young Investigator Award) by the American Society for Biochemistry and Molecular Biology for his outstanding research contributions to biochemistry and molecular biology.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Median (geometry)** Median (geometry): In geometry, a median of a triangle is a line segment joining a vertex to the midpoint of the opposite side, thus bisecting that side. Every triangle has exactly three medians, one from each vertex, and they all intersect each other at the triangle's centroid. In the case of isosceles and equilateral triangles, a median bisects any angle at a vertex whose two adjacent sides are equal in length. Median (geometry): The concept of a median extends to tetrahedra. Relation to center of mass: Each median of a triangle passes through the triangle's centroid, which is the center of mass of an infinitely thin object of uniform density coinciding with the triangle. Thus the object would balance on the intersection point of the medians. The centroid is twice as close along any median to the side that the median intersects as it is to the vertex it emanates from. Equal-area division: Each median divides the area of the triangle in half; hence the name, and hence a triangular object of uniform density would balance on any median. (Any other lines which divide the area of the triangle into two equal parts do not pass through the centroid.) The three medians divide the triangle into six smaller triangles of equal area. Proof of equal-area property: Consider a triangle ABC. Let D be the midpoint of AB, E be the midpoint of BC, F be the midpoint of AC, and O be the centroid (most commonly denoted G). By definition, AD = DB, AF = FC, BE = EC. Thus [ADO] = [DBO], [AFO] = [CFO], [BEO] = [CEO], and [ABE] = [ACE], where [ABC] represents the area of triangle ABC; these hold because in each case the two triangles have bases of equal length and share a common altitude from the (extended) base, and a triangle's area equals one-half its base times its height. Equal-area division: We have [ABO] = [ABE] - [BEO] and [ACO] = [ACE] - [CEO]. Thus [ABO] = [ACO]. Also, since [ADO] = [DBO] and [ADO] + [DBO] = [ABO], we get [ADO] = ½[ABO]. Similarly, since [AFO] = [FCO], we have [AFO] = ½[ACO] = ½[ABO] = [ADO]; therefore [AFO] = [FCO] = [DBO] = [ADO]. Using the same method, one can show that [AFO] = [FCO] = [DBO] = [ADO] = [BEO] = [CEO]. Three congruent triangles: In 2014 Lee Sallows discovered the following theorem: The medians of any triangle dissect it into six equal area smaller triangles as in the figure above where three adjacent pairs of triangles meet at the midpoints D, E and F. If the two triangles in each such pair are rotated about their common midpoint until they meet so as to share a common side, then the three new triangles formed by the union of each pair are congruent. Formulas involving the medians' lengths: The lengths of the medians can be obtained from Apollonius' theorem as m_a = ½√(2b² + 2c² - a²), m_b = ½√(2a² + 2c² - b²), and m_c = ½√(2a² + 2b² - c²), where a, b, and c are the sides of the triangle with respective medians m_a, m_b, and m_c drawn to their midpoints. These formulas imply the relationships a = ⅔√(2m_b² + 2m_c² - m_a²), b = ⅔√(2m_a² + 2m_c² - m_b²), and c = ⅔√(2m_a² + 2m_b² - m_c²). Other properties: Let ABC be a triangle, let G be its centroid, and let D, E, and F be the midpoints of BC, CA, and AB, respectively. For any point P in the plane of ABC then The centroid divides each median into parts in the ratio 2:1, with the centroid being twice as close to the midpoint of a side as it is to the opposite vertex. Other properties: For any triangle with sides a, b, c and medians m_a, m_b, m_c: the medians from the sides of lengths a and b are perpendicular if and only if a² + b² = 5c²; the medians of a right triangle with hypotenuse c satisfy m_a² + m_b² = 5m_c². Any triangle's area T can be expressed in terms of its medians m_a, m_b, and m_c as follows.
If their semi-sum (m_a + m_b + m_c)/2 is denoted by σ, then T = (4/3)√(σ(σ - m_a)(σ - m_b)(σ - m_c)). Tetrahedron: A tetrahedron is a three-dimensional object having four triangular faces. A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median of the tetrahedron. There are four medians, and they are all concurrent at the centroid of the tetrahedron. As in the two-dimensional case, the centroid of the tetrahedron is the center of mass. However, contrary to the two-dimensional case, the centroid divides the medians not in a 2:1 ratio but in a 3:1 ratio (Commandino's theorem).
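As a quick numerical check of the formulas above, the following Python sketch measures the medians of a concrete triangle directly from vertex coordinates and compares them with Apollonius' theorem and the median-based area formula. The 3-4-5 right triangle and its coordinates are an arbitrary worked example, not taken from the source.

```python
import math

# Vertices of a 3-4-5 right triangle (arbitrary example).
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# Side lengths: a is opposite A, b is opposite B, c is opposite C.
a, b, c = dist(B, C), dist(A, C), dist(A, B)

# Medians measured directly: from each vertex to the midpoint of the opposite side.
m_a = dist(A, midpoint(B, C))
m_b = dist(B, midpoint(A, C))
m_c = dist(C, midpoint(A, B))

# Apollonius' theorem: m_a = (1/2) * sqrt(2b^2 + 2c^2 - a^2), and cyclically.
m_a_ap = 0.5 * math.sqrt(2*b**2 + 2*c**2 - a**2)
m_b_ap = 0.5 * math.sqrt(2*a**2 + 2*c**2 - b**2)
m_c_ap = 0.5 * math.sqrt(2*a**2 + 2*b**2 - c**2)
assert all(math.isclose(x, y) for x, y in [(m_a, m_a_ap), (m_b, m_b_ap), (m_c, m_c_ap)])

# Area from the medians: T = (4/3) * sqrt(sigma(sigma-m_a)(sigma-m_b)(sigma-m_c)),
# where sigma is the semi-sum of the medians.
sigma = (m_a + m_b + m_c) / 2
T_medians = (4/3) * math.sqrt(sigma * (sigma - m_a) * (sigma - m_b) * (sigma - m_c))
print(T_medians)  # 6.0, the area of the 3-4-5 triangle
```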
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Abandoned mine** Abandoned mine: An abandoned mine refers to a former mining or quarrying operation that is no longer in use and has no responsible entity to finance the cost of remediation and/or restoration of the mine feature or site. Such mines are typically left unattended and may pose safety hazards or cause environmental damage without proper maintenance. The term incorporates all types of old mines, including underground shaft mines and drift mines, and surface mines, including quarries and placer mining. Typically, the cost of addressing the mine's hazards is borne by the public/taxpayers/the government. An abandoned mine may be a hazard to health, safety or the environment. Hazards: Abandoned mines contain many hazards, including: subsidence, or collapsing ground; blasting caps and other unexploded explosives; blackdamp, which accumulates in old mines and can cause suffocation; hidden mine shafts, often concealed beneath bushes, grasses, and other vegetation that has grown up around the mine entrance; and unstable roofs and passageways, which are prone to cave-ins. Abandoned mines in the United States: Definitions Department of the Interior – Bureau of Land Management – Abandoned mines are those mines that were abandoned before January 1, 1981, the effective date of the Bureau of Land Management's Surface Management regulations issued under the authority of the Federal Land Policy and Management Act of 1976, as amended (43 U.S.C. 1701 et seq.) Environmental Protection Agency – Abandoned mine lands (AMLs) are those lands, waters, and surrounding watersheds where extraction, beneficiation, or processing of ores and minerals has occurred. In the United States, there are thousands of abandoned mines. The precise number of abandoned mines in the United States remains unknown, ranging "from the National Park Service's tally of 2,500 on its lands to the Mineral Policy Center's assessment of 560,000 abandoned mines on public and privately owned lands." Many of these abandoned mines are associated with abandoned neighboring towns, often referred to as ghost towns. Experts strongly warn against entering or exploring old or abandoned mines. In California, Nevada, Colorado, New Mexico, and Arkansas alone, there are over 6,500 abandoned mines, according to one infographic. Abandoned mines in the United States: In the U.S., it is estimated that approximately 80% of abandoned mine land (AML) sites pose physical safety hazards and require further work to determine how to make these lands safe. Every year, dozens of people are injured or killed in recreational accidents on mine property. While exploring abandoned mines can be dangerous, the majority of deaths on mine property are actually unrelated to mine exploration. The leading causes of accidental deaths on abandoned mine properties are drownings in open quarries and ATV accidents. These types of accidents often occur when people engage in recreational activities on abandoned mine sites without taking proper precautions or following safety guidelines. It is important for individuals to recognize the risks associated with these activities and to take steps to ensure their safety. Property owners and managers also have a role to play in preventing accidents by implementing safety measures and providing adequate warning signs and barriers. The U.S. Department of Labor notes that since 1999, "more than 200 people have died in recreational accidents at the surface and underground active and abandoned operations across the country."
Due to these circumstances, the Mine Safety and Health Administration launched the "Stay Out – Stay Alive" campaign, a national public awareness campaign aimed at warning and educating children and adults about the dangers of exploring and playing on active and abandoned mine sites. Abandoned mines in the United States: In the U.S., the Abandoned Mine Land Initiative, launched by the Western Governors' Association and the National Mining Association, is an effort focused on reporting the number of high-priority AML sites. The initiative identifies, measures, and reports on the progress of current reclamation cleanup programs on an annual basis. In the Americas region, the United Nations Environment Programme (UNEP) and the Chilean Copper Commission (COCHILCO) co-hosted a workshop to address the problem of abandoned or "orphaned" mines. In addition to a representative from the UN, ten countries from North, Central, and South America were represented, with Japan as an eleventh participant. Abandoned mines in the United States: Legislation Surface Mining Control and Reclamation Act It can be hazardous and detrimental to reside close to an abandoned coal mining site. The Surface Mining Control and Reclamation Act (SMCRA) was passed in 1977 in two parts: one to control the effects of active mines, and one to regulate abandoned mines. SMCRA also established an abandoned mine land fund, under which a fee is charged for each ton of coal produced. This revenue was distributed in part to the United Mine Workers of America (UMWA) towards retirement funds, as well as to the Office of Surface Mining Reclamation and Enforcement (OSMRE) to continue operations. Around $2 billion in these funds remains undistributed. Abandoned mines in Canada: Definitions National Orphaned/Abandoned Mines Initiative – Orphaned or abandoned mines are those mines for which the owner cannot be found or the owner is financially unable or unwilling to carry out clean-up. They pose environmental, health, safety, and economic problems to communities, the mining industry, and governments in many countries, including Canada. The Ontario Mining Act describes "abandoned mines" as lands previously used for coal mining that are now unused due to hazardous environmental and health effects. There are approximately 10,139 abandoned mines currently in Canada. Research is being done to utilize geothermal systems in these abandoned mines as a renewable heating source, and this has been shown to be quite cost-efficient. Reuse of abandoned mines: Abandoned mines may be reused for other purposes, such as pumped-storage hydropower.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nude calendar** Nude calendar: Nude calendars are a type of wall calendar that feature nude models in a variety of scenes and locations. Predominantly in the United Kingdom, nude calendars are produced to raise money for charity. Types: Calendars featuring pin-up models Commercial advertising on calendars started in the late 19th century and has often been linked to pictures or photographs of pin-up models. The products being advertised may be incorporated via product placement in the pictures themselves, or separately via logos and corporate in-house style. Calendars featuring female nudes became a common feature in workplaces which were predominantly male (e.g. garages, car dealerships, etc.), although many employers have banned or restricted their display, considering them a form of sex discrimination. Types: An example is the Pirelli Calendar. Sports nude calendars Some sports teams have produced nude calendars, often to raise their profile or increase funding. Examples include the Australian women's football team prior to the 2000 Summer Olympics in Sydney, the Canadian cross-country ski team in 2001 and 2002, and a group of Canadian women biathletes in 2008. Types: Charity nude calendars The first nude charity calendar was made by a group of middle-aged Englishwomen, members of a local branch of the Women's Institute, who posed nude to raise funds for Leukaemia Research. The calendar was released in 1999, became an international sensation, and inspired the movie Calendar Girls. Following this lead, charity nude calendars proliferated in the 2000s. Proceeds usually go to various health or social causes. Participants may include artists, celebrities, sportsmen and sportswomen, firefighters, military forces, the police, or members of a group, such as farmers or Women's Institute members, who wish to raise funds for a chosen charity. Types: The women's rugby match between Oxford and Cambridge, which was played at Twickenham in 2015, was publicised by the Oxford team making a nude calendar. Successful charity nude calendars include: Rylstone Women's Institute 2000 Alternative WI Calendar, the first ever nude charity calendar; Dieux du Stade (France); Men of the Long Tom Grange (United States) in aid of Junction City, Oregon public schools (2004-2006); League of Their Own (Australia) in aid of the Koori Kids Foundation (2006); Naked Rugby League (Australia) in aid of the National Breast Cancer Foundation of Australia (2007/2008); Naked For A Cause (Australia) in aid of breast cancer research (2008); Gods of Football (Australia) featuring Australian Football League and Australian Rugby League players in aid of the McGrath Foundation (2009); University of Warwick Boat Club (United Kingdom) in aid of Macmillan Cancer Support (2013); EastEnders (United Kingdom) featuring actors and actresses from the BBC TV show in aid of Children in Need (2015); and The Magnet Tavern, in Lincolnshire (United Kingdom), in aid of the Air Ambulance (2016).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vruk** Vruk: The Vruk is a proprietary bass drum pedal design produced by Vruk Corporation. The term vruk also refers to playing techniques associated with this design, and related accessories produced by the corporation for attachment to other brands of pedal. Proponents claim that the technique gives greater control and in particular allows greater speed. The name VRUK (capitalised) is also used by Vineyard Records UK.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Self-brand** Self-brand: Throughout the long history of consumer research, there has been much interest regarding how consumers choose which brand to buy and why they continue to purchase these brands. Self-branding describes the process in which consumers match their own self-concept with the images of a certain brand. Self-brand: People engaged in consumption do not merely buy certain products to satisfy basic needs. In fact, consumer buying habits operate at a much deeper level. Owning a certain brand can help consumers to express and build their own self-concept. Specifically, consumers will often only purchase certain brands when they find a match between the brand image (communicated through advertisement, design of the retail shop, or even package design) and their own self-concept. Thereby, the value of a brand also depends on its ability to help consumers build and express a self-concept. Formation of connections: Based on self-congruity theory The above explanation for self-branding can be summarized by Sirgy's self-congruity theory. It proposes that consumer behavior is partially determined by the congruity between a consumer's self-concept and the brand-user image. This self-congruity affects consumption behavior through motives such as the need for self-consistency (e.g. "I am a good student because I work hard to prepare for examinations and I always get good grades") and self-esteem. High self-congruity occurs when consumers find an appropriate match between their own self-image and the brand image. Only high self-congruity helps consumers maintain and enhance the self in a positive direction. Following from the above notions, high self-congruity leads to positive attitudes towards the brand and repeated purchase. Formation of connections: Brand evaluation Besides assisting consumers in choosing which product and brand to buy, the matching process between self-concept and the image of a brand and product also determines how consumers evaluate the brand and product. When we say that a brand has a positive brand image, it means that the brand has established some strong, favorable and unique associations with the consumer's self-image (e.g. iPods have a strong and explicit image of being trendy, fashionable and high-tech, a combination of brand image that is unique and valued by young people). These strong, favorable and unique associations can be divided into two main parts: the image of users, and the psychological benefits experienced by users in buying this particular brand or product. Firstly, image of users means that when consumers evaluate the brand they will imagine the typical user of this particular brand and see whether they are similar to that typical user. The demographic and psychological profile of the typical user is usually a good source of information for consumers to make these comparisons (e.g. if someone perceives themselves as a trendy youngster and values advanced technology, the chance that they will buy an iPod for their own use is very high). Secondly, psychological benefits experienced by consumers include increased recognition by the peer group (i.e. social approval) and expression of how one would like other people to see and think of oneself (i.e. personal expression). Constructing a self-concept: When the set of brand associations are linked or connected to the self, these associations can help consumers achieve certain goals.
These goals include what they might become, what they would like to become, and what they are afraid of becoming. People are motivated to create a favorable and consistent self-identity based on self-enhancement (i.e. people over-emphasize favorable evaluations and minimize critical assessment of themselves) and self-verification (i.e. people want to be known and understood by others according to their firmly held beliefs and feelings about themselves). Constructing a self-concept: Self-enhancement In self-enhancement, the impressions individuals hold about themselves are often biased in a positive direction. Therefore, they over-emphasize favorable evaluations and minimize critical assessment of the self. People use brands to present favorable self-images to others or to themselves. Constructing a self-concept: The first aspect of self-enhancement is the need to maintain and enhance self-esteem. Another aspect concerns social interaction (e.g. staff meetings). In terms of impression management, people actively manage their presentation (e.g. the brand of garment they wear) in front of other people so as to maximize the opportunity to gain positive feedback. People are also motivated to create a good impression (e.g. wearing a big-brand watch) in order to gain social approval and intrinsic satisfaction. This is especially true when the person has very high self-esteem. Constructing a self-concept: Self-verification Self-verification refers to seeking accurate information about the self. In general, people seek out and interpret situations and adopt behavioral strategies that match their present self-conceptions. In contrast, they avoid situations and behaviors that yield contradictory information. Self-verification can be achieved by two primary strategies. The first strategy is seeing more self-confirmatory evidence than actually exists. The second strategy is striving to affect the reactions of other people by developing a self-confirmatory environment, which includes displaying identity cues such as driving a certain brand of automobile. It has been found that people choose products and brands by imagining the prototypical user for each item in the choice set and choosing the items that maximize their similarity to a desired prototypical user. Constructing a self-concept: Compatibility of self-enhancement and self-verification It seems incompatible to seek feedback that is favorable (self-enhancement) and at the same time seek accurate feedback regardless of favorability (self-verification). Social psychology shows that there are factors affecting the relative degree to which each motive is satisfied, e.g. cognitive resources, stable versus malleable aspects of personality, intuitive-experiential versus analytical-rational modes of thought, or cognitive versus affective processes. More specifically, it has been found that people with high self-esteem, high self-monitors (i.e. those who regulate their own behavior in order to "look good"), narcissists (i.e. self-love), and Type B personalities (i.e. patient, relaxed, and easy-going) are more likely than their counterparts to be influenced by self-enhancement motives as opposed to self-verification motives. Use of YouTube to promote a brand: YouTube has become an increasingly popular platform for self-branding. As self-branding involves the strategic presentation of oneself through a media outlet, it is commonplace for many individuals to post videos, clips, tutorials, and other visual aids on their channels.
YouTube especially gives individuals the opportunity to upload and control the information that is distributed about them, as they are the ones creating, editing, and uploading the content. By controlling the information displayed on their channels, they can promote and market themselves on a wider scale, as millions of people visit YouTube daily. In self-branding, developing one's self is not the only aspect of this self-marketing tactic. There is also the aspect of "authenticity" to validate their specialties, and a "business-targeted self presentation". This can be done by citing the proper credentials or the years of practice they have accumulated in the topic or talent they present. Use of YouTube to promote a brand: The concept of self-branding a product can be seen particularly in the case of physical exercise gurus, beauty gurus, health gurus, food experts, and other gurus as well. This concept can be seen in the successful Michelle Phan, who is a beauty guru on YouTube. Michelle Phan has a record of revealing intimate facts about her life through interviews, blogs, and YouTube videos. On her website michellephan.com, she has created an "about me" section to further her self-branding, in an effort to give a brief introduction about herself and her passion for beauty related topics, tips, and advice. She states, "I'm passionate about being a makeup artist and teaching others how to look and feel fabulous in their own skin". She also claims to help women raise their self-esteem and confidence levels. This is all done through "a safe space where makeup enthusiasts, fashion lovers, trendsetters, and beauty aficionados alike, can find inspiration, how-to advice, style news, easy DIY ideas, and tips". YouTube gurus like Michelle Phan carry an image and a created identity. In an interview with fashionista.com, Michelle Phan stated that success in the blogging and video industry has to do with sending a message, vision, and brand identity. She says she is cautious about what she is affiliated with, as it can affect the relationship she has formed with her followers. Through presenting an identity on her YouTube channel, she and other YouTube gurus alike have launched their own products. For instance, Michelle Phan has created a line of cosmetics that includes eye shadows, lipsticks, eyeliners, foundation, contour sticks, concealer, and other forms of makeup that are related to the content she chooses to upload on her personal YouTube channel. Michelle Phan's ability to self-brand through media such as YouTube gives her a competitive advantage. YouTube reports that "more than 1 billion unique users visit YouTube each month". It also states that "over 6 billion hours of video are watched each month on YouTube". Michelle Phan's channel is one of the channels YouTube claims is among the "thousands of channels that are making six figures a year". Development of concepts: In the process of consumer socialization, self-brand connections develop throughout childhood as a result of developmental changes. Major changes occur in the representation of self-concepts between early childhood and adolescence. As children grow older, they conceptualize the self in less concrete and more abstract terms. For example, a concrete thinker can recognize that John likes clothes; a more abstract thinker can reflect on emotions, like affection.
Self-concepts become more complex as children mature, with a greater variety of self-constructs used to describe the self. In the Dixon and Street (1975) study, possessions were not part of self-concept descriptions for 6- to 8-year-olds but surfaced and increased in importance from 8 to 16 years of age. Development of concepts: Children recognize brands at an early age, as young as 3 or 4 years of age. John and Sujan (1990) found that children 4–7 years of age used perceptual cues (shape, package color), whereas older children (8–10 years) used non-observable, conceptual cues (taste) as a basis for classifying products. Children in middle childhood (7–8 years of age) can name multiple branded products and request products by brand name. Their comparisons of the self-concept with brands take place on a concrete level, so self-brand connections are straightforward in nature. For example, self-brand connections might be made on the basis of simply being familiar with or owning a brand. Development of concepts: In late childhood (10–12 years of age), a heightened appreciation for the subtle meanings embedded in brand images converges with a trend toward defining the self in more abstract and complex terms. Brands gain recognition as useful devices for characterizing the self in terms of personality traits, user characteristics, and reference groups. Development of concepts: As children move into adolescence, they form deeper self-brand connections because they think about brands in a very specific way, as having personalities and symbolizing group membership, which provides a natural link to their self-concepts. A greater understanding of the self, combined with social pressures to "fit in" and signal group membership, leads adolescents to be more vigilant about the social implications of owning certain brands. As a result, adolescents possess an even larger number of self-brand connections, which may be even more complex in nature. Reference group: As stated in the social comparison theory proposed by social psychologist Leon Festinger in 1954, humans have a drive to evaluate themselves by examining their opinions and abilities in comparison to others. Consumers often use the images of other brands' users as a source of information for evaluating their own beliefs and perceptions about their own and others' social identities. They also actively construct a self-concept using brand associations that arise through reference groups. Reference group: In much consumer research, the reference group is a key concept for demonstrating the congruency between group membership and brand usage. It refers to the social groups that are important to a consumer and against which consumers compare themselves. With different personal goals, individuals draw on different types of reference groups. For example, if someone would like to verify their current social identity, they tend to compare themselves with a 'member group', to which they believe they belong. For example, if a person considers himself to be an intellectual and his member group of intellectuals tends to drive Volvos, he may choose to drive a Volvo too. Similarly, an 'aspiration group' is another type of reference group, one to which an individual aspires to belong. If a consumer wishes to be more hip, and he sees hip people wearing Versace clothing, he may choose to wear Versace clothing in an attempt to appropriate the hip associations of that brand.
Use: On the marketing level, companies gain an enduring competitive advantage by utilizing the association between brand and self-concept. This type of association is difficult for competitors to imitate. For example, in a sport consumption context, when fans identify with a team (i.e., a branded organization) and rally together in expectation of victory, the team image is reinforced. On the individual level, brand symbolism moderates in-group and out-group associations. For in-groups, a symbolic brand has a stronger communicating effect than a non-symbolic brand; for out-groups, only a symbolic brand serves to differentiate one from the out-group.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Id Tech 7** Id Tech 7: id Tech 7 is a multiplatform proprietary game engine developed by id Software. As part of the id Tech series of game engines, it is the successor to id Tech 6. The software was first demonstrated at QuakeCon 2018 as part of the id Software announcement of Doom Eternal. Technology: id Tech 7 features ten times the geometric detail and higher texture fidelity than id Tech 6. Moreover, the engine includes a new system called "Destructible Demons", in which enemies' bodies become progressively destroyed and deteriorated in combat as they suffer damage. On PC, id Tech 7 supports Vulkan rendering only. Ray tracing and DLSS were added in June 2021. According to engine developer Axel Gneiting, the engine doesn't have a "main thread"; everything is implemented as jobs. Technology: Improvements in comparison to id Tech 6 1 million fewer lines of code, due in part to the removal of the OpenGL render engine Unified HDR lighting and shadowing Full HDR support on PS4, PS4 Pro, PS5, Xbox One S, Xbox One X, Xbox Series S, Xbox Series X, PC and Stadia Multi PBR material compositing, blending, and painting Increased texture fidelity and geometric detail due to removal of the MegaTexture pipeline, used since id Tech 4 Enhanced global illumination quality Greatly improved particle system, with more particles running on the GPU, allowing for bigger explosions, more atmospheric volumetrics and more vibrant particle effects The framerate limit has been increased to 1000 FPS, up from 250 FPS in id Tech 6. Technology: Rewritten jobs system to use all available CPU cores more efficiently Improved post-processing effects, including more detailed anti-aliasing and enhanced motion blur Support for gameplay areas twice the size of those in id Tech 6 Improved image streaming Expanded decal system Improved LOD system New GPU triangle, light and occlusion culling system so that off-screen geometry is not rendered Dramatically improved compression Improved level loading times, including after death screens DLSS 2.3.0 Ray-traced reflections on PlayStation 5, Xbox Series X (not available on Xbox Series S) and PCs with hardware-accelerated ray tracing Variable rate shading on Xbox Series X and Xbox Series S Games using id Tech 7: Doom Eternal (2020) – id Software
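The idea of an engine with no dedicated "main thread", where all work is expressed as jobs, can be illustrated with a small, hedged sketch. This is not id Software code and the task names are invented for illustration; it only shows the general shape of a job system, where independent tasks are submitted to a worker pool and dependent tasks wait on earlier jobs' results.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-frame tasks; names are illustrative, not taken from id Tech 7.
def animate(entity):
    return f"animated:{entity}"

def cull(entities):
    # Pretend only half the entities remain visible after culling.
    return entities[: len(entities) // 2]

def build_draw_calls(visible):
    return [f"draw({v})" for v in visible]

def run_frame(entities):
    # Every piece of work is a job submitted to the pool; no single serial
    # "main thread" does the heavy lifting itself.
    with ThreadPoolExecutor() as pool:
        anim_jobs = [pool.submit(animate, e) for e in entities]   # independent jobs
        animated = [j.result() for j in anim_jobs]                # join point
        visible = pool.submit(cull, animated).result()            # dependent job
        return pool.submit(build_draw_calls, visible).result()

if __name__ == "__main__":
    print(run_frame([f"entity{i}" for i in range(8)]))
```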
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TASB (psychedelics)** TASB (psychedelics): TASB, or thioasymbescaline, is a series of lesser-known psychedelic drugs similar in structure to asymbescaline and to mescaline. They were first synthesized by Alexander Shulgin and written up in his book PiHKAL (Phenethylamines I Have Known and Loved). Very little is known about their dangers or toxicity. TASB compounds: 3-TASB Dosage: 160 mg or greater Duration: 10–18 hours Effects: Mild stimulative effects 4-TASB Dosage: 60–100 mg Duration: 10–15 hours Effects: Negative effects 5-TASB Dosage: 160 mg or greater Duration: 8 hours Effects: Warmth at extremities, diarrhea
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sequence assembly** Sequence assembly: In bioinformatics, sequence assembly refers to aligning and merging fragments from a longer DNA sequence in order to reconstruct the original sequence. This is needed as DNA sequencing technology might not be able to 'read' whole genomes in one go, but rather reads small pieces of between 20 and 30,000 bases, depending on the technology used. Typically, the short fragments (reads) result from shotgun sequencing of genomic DNA or gene transcripts (ESTs). Sequence assembly: The problem of sequence assembly can be compared to taking many copies of a book, passing each of them through a shredder with a different cutter, and piecing the text of the book back together just by looking at the shredded pieces. Besides the obvious difficulty of this task, there are some extra practical issues: the original may have many repeated paragraphs, and some shreds may be modified during shredding to have typos. Excerpts from another book may also be added in, and some shreds may be completely unrecognizable. Genome assemblers: The first sequence assemblers began to appear in the late 1980s and early 1990s as variants of simpler sequence alignment programs used to piece together vast quantities of fragments generated by automated sequencing instruments called DNA sequencers. As the sequenced organisms grew in size and complexity (from small viruses and plasmids to bacteria and finally eukaryotes), the assembly programs used in these genome projects needed increasingly sophisticated strategies to handle: terabytes of sequencing data which need processing on computing clusters; identical and nearly identical sequences (known as repeats) which can, in the worst case, increase the time and space complexity of algorithms quadratically; and DNA read errors in the fragments from the sequencing instruments, which can confound assembly. Faced with the challenge of assembling the first larger eukaryotic genomes (the fruit fly Drosophila melanogaster in 2000 and the human genome just a year later), scientists developed assemblers like Celera Assembler and Arachne able to handle genomes of 130 million (e.g., the fruit fly D. melanogaster) to 3 billion (e.g., the human genome) base pairs. Subsequent to these efforts, several other groups, mostly at the major genome sequencing centers, built large-scale assemblers, and an open source effort known as AMOS was launched to bring together all the innovations in genome assembly technology under the open source framework. EST assemblers: Expressed sequence tag or EST assembly was an early strategy, dating from the mid-1990s to the mid-2000s, to assemble individual genes rather than whole genomes. The problem differs from genome assembly in several ways. The input sequences for EST assembly are fragments of the transcribed mRNA of a cell and represent only a subset of the whole genome. A number of algorithmic problems differ between genome and EST assembly. For instance, genomes often have large amounts of repetitive sequences, concentrated in the intergenic regions. Transcribed genes contain many fewer repeats, making assembly somewhat easier. On the other hand, some genes are expressed (transcribed) in very high numbers (e.g., housekeeping genes), which means that unlike whole-genome shotgun sequencing, the reads are not uniformly sampled across the genome. EST assemblers: EST assembly is made much more complicated by features like (cis-) alternative splicing, trans-splicing, single-nucleotide polymorphism, and post-transcriptional modification.
Beginning in 2008, when RNA-Seq was invented, EST sequencing was replaced by this far more efficient technology, described under de novo transcriptome assembly. Types of sequence assembly: There are three approaches to assembling sequencing data: De-novo: assembling sequencing reads to create full-length (sometimes novel) sequences, without using a template (see de novo sequence assemblers, de novo transcriptome assembly) Mapping/Aligning: assembling reads by aligning reads against a template (AKA reference). The assembled consensus may not be identical to the template. Types of sequence assembly: Reference-guided: grouping of reads by similarity to the most similar region within the reference (step-wise mapping). Reads within each group are then shortened to mimic short-read quality. A typical method for doing so is the k-mer approach. Reference-guided assembly is most useful for long reads. Reference-guided assembly is a combination of the other types. This type is applied to long reads to mimic the advantages of short reads (i.e. call quality). The logic behind it is to group the reads by smaller windows within the reference. Reads in each group are then reduced in size using the k-mer approach to select the highest-quality and most probable contiguous sequence (contig). Contigs are then joined together to create a scaffold. The final consensus is made by closing any gaps in the scaffold. De-novo vs. mapping assembly: In terms of complexity and time requirements, de-novo assemblies are orders of magnitude slower and more memory intensive than mapping assemblies. This is mostly due to the fact that the assembly algorithm needs to compare every read with every other read (an operation that has a naive time complexity of O(n²)). Current de-novo genome assemblers may use different types of graph-based algorithms, such as the: Overlap/Layout/Consensus (OLC) approach, which was typical of the Sanger-data assemblers and relies on an overlap graph. De-novo vs. mapping assembly: de Bruijn Graph (DBG) approach, which is most widely applied to the short reads from the Solexa and SOLiD platforms. It relies on k-mer graphs, which perform well with vast quantities of short reads. De-novo vs. mapping assembly: Greedy graph-based approach, which may also use one of the OLC or DBG approaches. With greedy graph-based algorithms, the contigs grow by greedy extension, always taking on the read that is found by following the highest-scoring overlap. Referring to the comparison drawn to shredded books in the introduction: while for mapping assemblies one would have a very similar book as a template (perhaps with the names of the main characters and a few locations changed), de-novo assemblies present a more daunting challenge in that one would not know beforehand whether this would become a science book, a novel, a catalogue, or even several books. Also, every shred would be compared with every other shred. De-novo vs. mapping assembly: Handling repeats in de-novo assembly requires the construction of a graph representing neighboring repeats. Such information can be derived from reading a long fragment covering the repeats in full or only its two ends. On the other hand, in a mapping assembly, parts with multiple or no matches are usually left for another assembling technique to look into.
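To make the DBG approach described above concrete, the sketch below builds a tiny k-mer de Bruijn graph from a handful of reads and walks it to recover the underlying sequence. The reads, the value of k, and the walking strategy are illustrative assumptions for a repeat-free toy case; a real assembler would find an Eulerian path and handle branching, repeats, and sequencing errors.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])  # prefix node -> suffix node
    return graph

def walk(graph, start):
    """Follow edges from `start` until a dead end; each edge adds one base.
    Sufficient for this repeat-free toy example only."""
    edges = {node: list(successors) for node, successors in graph.items()}
    sequence, node = start, start
    while edges.get(node):
        node = edges[node].pop(0)
        sequence += node[-1]
    return sequence

# Toy reads shredded from the (made-up) sequence "ACGTTGCA".
reads = ["ACGTT", "CGTTG", "GTTGC", "TTGCA"]
graph = de_bruijn_graph(reads, k=4)
print(walk(graph, "ACG"))  # reconstructs "ACGTTGCA"
```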
Sequence assembly pipeline (bioinformatics): In general, there are three steps in assembling sequencing reads into a scaffold: 1) Pre-assembly: this step is essential to ensure the integrity of downstream analysis such as variant calling or the final scaffold sequence. This step consists of two sequential parts: A) Quality check: Depending on the type of sequencing technology, different errors might arise that would lead to a false base call. For example, a true sequence "NAAAAAAAAAAAAN", which includes 12 adenines, might be wrongly called as "NAAAAAAAAAAAN", with 11 adenines instead; sequencing a highly repetitive segment of the target DNA/RNA might result in a call that is one base short or one base too long. Read quality is typically measured by the Phred score, which encodes the quality of each nucleotide within a read's sequence. Some sequencing technologies, such as PacBio, do not have a scoring method for their sequenced reads. A common tool used in this step is FastQC. B) Filtering of reads: Reads that fail the quality check should be removed from the FastQ file to get the best assembly contigs. 2) Assembly: during this step, read alignment is used with different criteria to map each read to its most likely location. The predicted position of a read is based on either how much of its sequence aligns with other reads or with a reference. Different alignment algorithms are used for reads from different sequencing technologies. Some of the commonly used approaches in assembly are the de Bruijn graph and overlapping. Read length, coverage, quality, and the sequencing technique used play a major role in choosing the best alignment algorithm in the case of Next Generation Sequencing. On the other hand, algorithms aligning third-generation sequencing reads require advanced approaches to account for the high error rate associated with them. 3) Post-assembly: This step focuses on extracting valuable information from the assembled sequence. Comparative genomics and population analysis are examples of post-assembly analysis. Influence of technological changes: The complexity of sequence assembly is driven by two major factors: the number of fragments and their lengths. While more and longer fragments allow better identification of sequence overlaps, they also pose problems as the underlying algorithms show quadratic or even exponential complexity behaviour in both the number of fragments and their length. And while shorter sequences are faster to align, they also complicate the layout phase of an assembly as shorter reads are more difficult to use with repeats or near-identical repeats. Influence of technological changes: In the earliest days of DNA sequencing, scientists could only gain a few sequences of short length (some dozen bases) after weeks of work in laboratories. Hence, these sequences could be aligned in a few minutes by hand. Influence of technological changes: In 1975, the dideoxy termination method (AKA Sanger sequencing) was invented and, until shortly after 2000, the technology was improved up to a point where fully automated machines could churn out sequences in a highly parallelised mode 24 hours a day.
Large genome centers around the world housed complete farms of these sequencing machines, which in turn led to the necessity of optimising assemblers for sequences from whole-genome shotgun sequencing projects, where the reads are about 800–900 bases long, contain sequencing artifacts like sequencing and cloning vectors, and have error rates between 0.5 and 10%. With the Sanger technology, bacterial projects with 20,000 to 200,000 reads could easily be assembled on one computer. Larger projects, like the human genome with approximately 35 million reads, needed large computing farms and distributed computing. Influence of technological changes: By 2004/2005, pyrosequencing had been brought to commercial viability by 454 Life Sciences. This new sequencing method generated reads much shorter than those of Sanger sequencing: initially about 100 bases, later 400–500 bases. Its much higher throughput and lower cost (compared to Sanger sequencing) pushed the adoption of this technology by genome centers, which in turn pushed development of sequence assemblers that could efficiently handle the read sets. The sheer amount of data coupled with technology-specific error patterns in the reads delayed development of assemblers; at the beginning, in 2004, only the Newbler assembler from 454 was available. Released in mid-2007, the hybrid version of the MIRA assembler by Chevreux et al. was the first freely available assembler that could assemble 454 reads as well as mixtures of 454 reads and Sanger reads. Assembling sequences from different sequencing technologies was subsequently coined hybrid assembly. Influence of technological changes: From 2006, the Illumina (previously Solexa) technology has been available and can generate about 100 million reads per run on a single sequencing machine. Compare this to the 35 million reads of the human genome project, which needed several years to be produced on hundreds of sequencing machines. Illumina was initially limited to a length of only 36 bases, making it less suitable for de novo assembly (such as de novo transcriptome assembly), but newer iterations of the technology achieve read lengths above 100 bases from both ends of a 300–400 bp clone. Announced at the end of 2007, the SHARCGS assembler by Dohm et al. was the first published assembler that was used for an assembly with Solexa reads. It was quickly followed by a number of others. Influence of technological changes: Later, new technologies like SOLiD from Applied Biosystems, Ion Torrent and SMRT were released, and new technologies (e.g. Nanopore sequencing) continue to emerge. Despite the higher error rates of these technologies, they are important for assembly because their longer read length helps to address the repeat problem. It is impossible to assemble through a perfect repeat that is longer than the maximum read length; however, as reads become longer, the chance of a perfect repeat that large becomes smaller. This gives longer sequencing reads an advantage in assembling repeats even if they have low accuracy (~85%). Assembly algorithms: Different organisms have distinct regions of higher complexity within their genome. Hence, different computational approaches are needed. Some of the commonly used algorithms are: Graph Assembly: based on graph theory in computer science. The de Bruijn graph is an example of this approach and utilizes k-mers to assemble a contiguous sequence (contig) from reads.
Greedy Graph Assembly: this approach scores each read added to the assembly and selects the read with the highest-scoring overlap. Given a set of sequence fragments, the objective is to find a longer sequence that contains all the fragments (see figure under Types of Sequence Assembly): 1. Calculate pairwise alignments of all fragments. 2. Choose two fragments with the largest overlap. 3. Merge the chosen fragments. 4. Repeat steps 2 and 3 until only one fragment is left. The result might not be an optimal solution to the problem; a minimal sketch of this procedure is shown below. Programs: For a list of de-novo assemblers, see De novo sequence assemblers. For a list of mapping aligners, see List of sequence alignment software § Short-read sequence alignment. Some of the common tools used in different assembly steps are listed in the following table:
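The enumerated greedy procedure above can be made concrete with a short sketch. This is a naive illustration of greedy overlap-merge assembly under simplifying assumptions (exact overlaps only, no sequencing errors, made-up "reads"), not any particular published assembler.

```python
def overlap(a, b):
    """Length of the longest suffix of a that exactly matches a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # Steps 1-2: compute pairwise overlaps and pick the pair with the largest one.
        best = max(
            ((overlap(x, y), i, j) for i, x in enumerate(frags)
             for j, y in enumerate(frags) if i != j),
            key=lambda t: t[0],
        )
        olen, i, j = best
        if olen == 0:
            break  # no remaining overlaps; leave the rest as separate contigs
        # Step 3: merge the chosen pair into one fragment.
        merged = frags[i] + frags[j][olen:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    # Step 4 is the while loop: repeat until one fragment (or no overlaps) remains.
    return frags

# Toy "reads" shredded from an invented string, for illustration only.
reads = ["sequence_ass", "ce_assembly_is", "mbly_is_fun"]
print(greedy_assemble(reads))  # ['sequence_assembly_is_fun']
```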
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nolisting** Nolisting: Nolisting is the name given to a technique to defend electronic mail domain names against e-mail spam. Each domain name on the internet has a series of one or more MX records specifying mail servers responsible for accepting email messages on behalf of that domain, each with a preference value. Nolisting is simply the addition of an MX record pointing to a non-existent server as the "primary" (i.e. the one with the lowest preference value), which means that an initial mail contact will always fail. Many spam sources don't retry on failure, so the spammer will move on to the next victim, while legitimate email servers should retry the next higher-numbered MX, and normal email will be delivered with only a small delay. Implementation: A simple example of MX records that demonstrates the technique: MX 10 dummy.example.com. MX 20 real-primary-mail-server.example.com. This defeats spam programs that only connect to the highest priority (lowest numbered) MX and do not follow the standard error-handling of retrying the next priority MX. Drawbacks: The technique relies on spammers using simple software that doesn't retry the next priority MX, and so becomes ineffective if or when spammers begin using more sophisticated software. Drawbacks: Some legitimate SMTP applications are also very simple and only send to the lowest numbered MX record. This might be the case with simple devices such as printers or data loggers, or with older legacy software. Mail from them will also fail unless there is some mechanism to allow a "whitelist" of IPs access to the mailserver via the lowest numbered MX record. Drawbacks: It is important that the highest priority (lowest numbered) MX should be completely unresponsive on port 25. If it is open and responds with a 4xx error (i.e. "retry later"), then email from some MTAs (such as qmail) may be lost if they do not step to the next MX record, but instead wait and continually retry the first one. Similar techniques: There are alternate techniques that suggest "sandwiching" the valid MX records between non-responsive ones. Some variants also suggest configuring the highest-numbered hosts to always return 4xx errors (i.e. "retry later"). A simple example of MX records that demonstrates the technique: MX 10 dummy1.example.com. MX 20 real-primary-mail-server.example.com. MX 30 dummy2.example.com. Greylisting also relies on the fact that spammers often use custom software which will not persevere to deliver a message in the correct RFC-compliant way.
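The behavioural difference the technique relies on can be sketched in a few lines of Python. The hostnames come from the article's example; the reachability flags and the two "sender" functions are illustrative assumptions, not a real MTA implementation: a well-behaved sender walks the MX records in order of preference until one accepts the connection, while a naive bulk mailer tries only the best-preference record and gives up.

```python
# Illustrative MX set for the nolisting technique: the lowest-preference
# host intentionally does not exist / never answers on port 25.
MX_RECORDS = [
    (10, "dummy.example.com", False),                        # (preference, host, reachable?)
    (20, "real-primary-mail-server.example.com", True),
]

def compliant_mta_deliver(mx_records):
    """Well-behaved sender: try each MX in order of preference, fall back on failure."""
    for _, host, reachable in sorted(mx_records):
        if reachable:
            return f"delivered via {host}"
    return "delivery failed"

def naive_spam_deliver(mx_records):
    """Simplistic bulk mailer: try only the best-preference MX, never retry."""
    _, host, reachable = min(mx_records)
    return f"delivered via {host}" if reachable else "gave up"

print(compliant_mta_deliver(MX_RECORDS))  # delivered via real-primary-mail-server.example.com
print(naive_spam_deliver(MX_RECORDS))     # gave up
```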
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mir-744 microRNA precursor family** Mir-744 microRNA precursor family: In molecular biology, mir-744 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. miR-744 and cancer in mice: miR-744 plays a role in tumour development and growth in mouse cell lines. Its expression induces cyclin B1 expression, whilst its knockdown results in decreased levels of mouse cyclin B1, encoded by the Ccnb1 gene. Short-term overexpression of miR-744 in mouse cell lines has been seen to enhance cell proliferation, whilst chromosomal instability and in vivo suppression accompany prolonged expression. TGF-β1 repression: Multiple miR-744 binding sites have been identified in the proximal 3' untranslated region of transforming growth factor beta 1 (TGF-β1). Direct targeting of TGF-β1 by miR-744 has been identified, and transfection is seen to inhibit endogenous TGF-β1 synthesis by directing post-transcriptional regulation. EEF1A2 repression: miR-744 directly targets the translation elongation factor and known proto-oncogene EEF1A2. miR-744 is also upregulated during resveratrol treatment of MCF7 breast cancer cells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Degrees of freedom problem** Degrees of freedom problem: In neuroscience and motor control, the degrees of freedom problem or motor equivalence problem states that there are multiple ways for humans or animals to perform a movement in order to achieve the same goal. In other words, under normal circumstances, no simple one-to-one correspondence exists between a motor problem (or task) and a motor solution to the problem. The motor equivalence problem was first formulated by the Russian neurophysiologist Nikolai Bernstein: "It is clear that the basic difficulties for co-ordination consist precisely in the extreme abundance of degrees of freedom, with which the [nervous] centre is not at first in a position to deal." Although the question of how the nervous system selects which particular degrees of freedom (DOFs) to use in a movement may be a problem to scientists, the abundance of DOFs is almost certainly an advantage to the mammalian and the invertebrate nervous systems. The human body has redundant anatomical DOFs (at muscles and joints), redundant kinematic DOFs (movements can have different trajectories, velocities, and accelerations and yet achieve the same goal), and redundant neurophysiological DOFs (multiple motoneurons synapsing on the same muscle, and vice versa). How the nervous system "chooses" a subset of these near-infinite DOFs is an overarching difficulty in understanding motor control and motor learning. History: The study of motor control historically breaks down into two broad areas: "Western" neurophysiological studies, and "Bernsteinian" functional analysis of movement. The latter has become predominant in motor control, as Bernstein's theories have held up well and are considered founding principles of the field as it exists today. History: Pre-Bernstein In the late 19th and early 20th centuries, many scientists believed that all motor control came from the spinal cord, as experiments with stimulation in frogs displayed patterned movement ("motor primitives"), and spinalized cats were shown to be able to walk. This tradition was closely tied to the strict nervous-system localizationism advocated during that period; since stimulation of the frog spinal cord in different places produced different movements, it was thought that all motor impulses were localized in the spinal cord. However, fixed structure and localizationism were slowly broken down as the central dogma of neuroscience. It is now known that the primary motor cortex and premotor cortex at the highest level are responsible for most voluntary movements. Animal models, though, remain relevant in motor control, and spinal cord reflexes and central pattern generators are still a topic of study. History: Bernstein Although Lashley (1933) first formulated the motor equivalence problem, it was Bernstein who articulated the DOF problem in its current form. In Bernstein's formulation, the problem results from infinite redundancy, yet flexibility between movements; thus, the nervous system apparently must choose a particular motor solution every time it acts. In Bernstein's formulation, a single muscle never acts in isolation. Rather, large numbers of "nervous centres" cooperate in order to make a whole movement possible. Nervous impulses from different parts of the CNS may converge on the periphery in combination to produce a movement; however, scientists have great difficulty in understanding and coordinating the facts linking impulses to a movement.
Bernstein's rational understanding of movement and prediction of motor learning via what we now call "plasticity" was revolutionary for his time. In Bernstein's view, movements must always reflect what is contained in the "central impulse", in one way or another. However, he recognized that effectors (feed-forward) were not the only important component of movement; feedback was also necessary. Thus, Bernstein was one of the first to understand movement as a closed circle of interaction between the nervous system and the sensory environment, rather than a simple arc toward a goal. He defined motor coordination as a means for overcoming indeterminacy due to redundant peripheral DOFs. With increasing DOFs, it is increasingly necessary for the nervous system to have a more complex, delicate organizational control. Because humans are adapted to survive, the "most important" movements tend to be reflexes: pain or defensive reflexes needed to be carried out on very short time scales in order for ancient humans to survive their harsh environment. Most of our movements, though, are voluntary; voluntary control had historically been under-emphasized or even disregarded altogether. Bernstein saw voluntary movements as structured around a "motor problem" where the nervous system needed two factors to act: a full and complete perception of reality, as accomplished by multisensory integration, and objectivity of perception through constant and correct recognition of signals by the nervous system. Only with both may the nervous system choose an appropriate motor solution. Difficulties: The DOF problem is still a topic of study because of the complexity of the neuromuscular system of the human body. Not only is the problem itself exceedingly difficult to tackle, but the vastness of the field of study makes synthesis of theories a challenge. Difficulties: Counting degrees of freedom One of the largest difficulties in motor control is quantifying the exact number of DOFs in the complex neuromuscular system of the human body. In addition to having redundant muscles and joints, muscles may span multiple joints, further complicating the system. Properties of muscle change as the muscle length itself changes, making mechanical models difficult to create and understand. Individual muscles are innervated by multiple nerve fibers (motor units), and the manner in which these units are recruited is similarly complex. While each joint is commonly understood as having an agonist-antagonist pair, not all joint movement is controlled locally. Finally, movement kinematics are not identical even when performing the same motion repeatedly; natural variation in the position, velocity, and acceleration of the limb occurs even during seemingly identical movements. Difficulties: Types of studies Another difficulty in motor control is unifying the different ways to study movements. Three distinct areas in studying motor control have emerged: limb mechanics, neurophysiology, and motor behavior. Difficulties: Limb mechanics Studies of limb mechanics focus on the peripheral motor system as a filter which converts patterns of muscle activation into purposeful movement. In this paradigm, the building block is a motor unit (a neuron and all the muscle fibers it innervates), and complex models are built to understand the multitude of biological factors influencing motion. These models become increasingly complicated when multiple joints or environmental factors such as ground reaction forces are introduced.
Difficulties: Neurophysiology In neurophysiological studies, the motor system is modeled as a distributed, often hierarchical system with the spinal cord controlling the "most automatic" of movements such as stretch reflexes, and the cortex controlling the "most voluntary" actions such as reaching for an object, with the brainstem performing a function somewhere in between the two. Such studies seek to investigate how the primary motor cortex (M1) controls planning and execution of motor tasks. Traditionally, neurophysiological studies have used animal models with electrophysiological recordings and stimulation to better understand human motor control. Difficulties: Motor behavior Studies of motor behavior focus on the adaptive and feedback properties of the nervous system in motor control. The motor system has been shown to adapt to changes in its mechanical environment on relatively short timescales while simultaneously producing smooth movements; these studies investigate how this remarkable feedback takes place. Such studies investigate which variables the nervous system controls, which variables are less tightly controlled, and how this control is implemented. Common paradigms of study include voluntary reaching tasks and perturbations of standing balance in humans. Difficulties: Abundance or redundancy Finally, the very nature of the DOF problem poses questions. For example, does the nervous system really have difficulty in choosing from DOFs, or is the abundance of DOFs necessary for evolutionary survival? In very extreme movements, humans may exhaust the limits of their DOFs—in these cases, the nervous system only has one choice. Therefore, DOFs are not always infinite. Bernstein has suggested that our vast number of DOFs allows motor learning to take place, wherein the nervous system "explores" the set of possible motor solutions before settling on an optimal solution (learning to walk and ride a bike, for example). Finally, additional DOFs allow patients with brain or spinal cord injury to often retain movement while relying on a reduced set of biomechanical DOFs. Therefore, the "degrees of freedom problem" may be a misnomer and is better understood as the "motor equivalence problem" with redundant DOFs offering an evolutionary solution to this problem. Hypotheses and proposed solutions: There have been many attempts to offer solutions or conceptual models that explain the DOF problem. One of the first hypotheses was Fitts' Law, which states that a trade-off must occur between movement speed and movement accuracy in a reaching task. Since then, many other theories have been offered. Hypotheses and proposed solutions: Optimal control hypothesis A general paradigm for understanding motor control, optimal control has been defined as "optimizing motor control for a given aspect of task performance," or as a way to minimize a certain "cost" associated with a movement. This "cost function" may be different depending on the task-goal; for example, minimum energy expenditure might be a task-variable associated with locomotion, while precise trajectory and positional control could be a task-variable associated with reaching for an object. Furthermore, the cost function may be quite complex (for instance, it may be a functional instead of function) and be also related to the representations in the internal space. 
For example, the speech produced by biomechanical tongue models (BTM), controlled by the internal model which minimizes the length of the path traveled in the internal space under the constraints related to the executed task (e.g., quality of speech, stiffness of tongue), was found to be quite realistic. In essence, the goal of optimal control is to "reduce degrees of freedom in a principled way." Two key components of all optimal control systems are: a "state estimator" which tells the nervous system about what it is doing, including afferent sensory feedback and an efferent copy of the motor command; and adjustable feedback gains based on task goals. A component of these adjustable gains might be a "minimum intervention principle" where the nervous system only performs selective error correction rather than heavily modulating the entirety of a movement. Hypotheses and proposed solutions: Open and closed-loop models Both open-loop and closed-loop models of optimal control have been studied; the former generally ignores the role of sensory feedback, while the latter attempts to incorporate sensory feedback, which includes delays and uncertainty associated with the sensory systems involved in movement. Open-loop models are simpler but have severe limitations—they model a movement as prerecorded in the nervous system, ignoring sensory feedback, and also fail to model variability between movements with the same task-goal. In both models, the primary difficulty is identifying the cost associated with a movement. A mix of cost variables such as minimum energy expenditure and a "smoothness" function is the most likely choice for a common performance criterion. Hypotheses and proposed solutions: Learning and optimal control Bernstein suggested that as humans learn a movement, we first reduce our DOFs by stiffening the musculature in order to have tight control, then gradually "loosen up" and explore the available DOFs as the task becomes more comfortable, and from there find an optimal solution. In terms of optimal control, it has been postulated that the nervous system can learn to find task-specific variables through an optimal control search strategy. It has been shown that adaptation in a visuomotor reaching task becomes optimally tuned so that the cost of movement trajectories decreases over trials. These results suggest that the nervous system is capable of both nonadaptive and adaptive processes of optimal control. Furthermore, these and other results suggest that rather than being a control variable, consistent movement trajectories and velocity profiles are the natural outcome of an adaptive optimal control process. Hypotheses and proposed solutions: Limits of optimal control Optimal control is a way of understanding motor control and the motor equivalence problem, but as with most mathematical theories about the nervous system, it has limitations. The theory must have certain information provided before it can make a behavioral prediction: what the costs and rewards of a movement are, what the constraints on the task are, and how state estimation takes place. In essence, the difficulty with optimal control lies in understanding how the nervous system precisely executes a control strategy. Multiple operational time-scales complicate the process, including sensory delays, muscle fatigue, changing of the external environment, and cost-learning. 
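To make the idea that optimal control can "reduce degrees of freedom in a principled way" concrete, the sketch below resolves the redundancy of a planar three-joint arm reaching a two-dimensional target: infinitely many joint configurations solve the task, and a simple effort-like cost selects one of them. This is an illustration only; the link lengths, target, starting posture and cost function are assumptions chosen for the example, not values from the motor-control literature.

```python
# Minimal sketch: resolving kinematic redundancy with a cost function.
# A planar 3-joint arm (3 DOFs) must place its hand on a 2-D target
# (2 task dimensions), so infinitely many joint configurations work.
# Minimising a simple "effort" cost picks one of them, illustrating how
# an optimal-control criterion reduces redundant DOFs in a principled way.
# All numbers below (link lengths, target, cost) are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.30, 0.25, 0.20])   # link lengths in metres (assumed)
TARGET = np.array([0.45, 0.30])        # hand target in metres (assumed)

def hand_position(q):
    """Forward kinematics: joint angles (rad) -> hand (x, y)."""
    angles = np.cumsum(q)              # absolute orientation of each link
    x = np.sum(LINKS * np.cos(angles))
    y = np.sum(LINKS * np.sin(angles))
    return np.array([x, y])

def effort(q):
    """Illustrative cost: squared deviation of the joints from a neutral posture."""
    return float(np.sum(q ** 2))

# Task constraint: the hand must end up exactly on the target.
constraint = {"type": "eq", "fun": lambda q: hand_position(q) - TARGET}

result = minimize(effort, x0=np.array([0.5, 0.5, 0.5]),
                  constraints=[constraint], method="SLSQP")

print("joint angles (rad):", np.round(result.x, 3))
print("hand position    :", np.round(hand_position(result.x), 3))
print("effort cost      :", round(effort(result.x), 4))
```

Choosing a different cost (for example, penalising distance from a comfortable posture rather than from zero) selects a different but equally task-valid configuration, which is the sense in which the cost function, rather than the task alone, determines the movement.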
Hypotheses and proposed solutions: Muscle synergy hypothesis In order to reduce the number of musculoskeletal DOFs upon which the nervous system must operate, it has been proposed that the nervous system controls muscle synergies, or groups of co-activated muscles, rather than individual muscles. Specifically, a muscle synergy has been defined as "a vector specifying a pattern of relative muscle activation; absolute activation of each synergy is thought to be modulated by a single neural command signal." Multiple muscles are contained within each synergy at fixed ratios of co-activation, and multiple synergies can contain the same muscle. It has been proposed that muscle synergies emerge from an interaction between constraints and properties of the nervous and musculoskeletal systems. This organization may require less computational effort for the nervous system than individual muscle control because fewer synergies are needed to explain a behavior than individual muscles. Furthermore, it has been proposed that synergies themselves may change as behaviors are learned and/or optimized. However, synergies may also be innate to some degree, as suggested by postural responses of humans at very young ages.A key point of the muscle synergy hypothesis is that synergies are low-dimensional and thus just a few synergies may account for a complex movement. Evidence for this structure comes from electromyographical (EMG) data in frogs, cats, and humans, where various mathematical methods such as principal components analysis and non-negative matrix factorization are used to "extract" synergies from muscle activation patterns. Similarities have been observed in synergy structure even across different tasks such as kicking, jumping, swimming and walking in frogs. Further evidence comes from stroke patients, who have been observed to use fewer synergies in certain tasks; some stroke patients used a comparable number of synergies as healthy subjects, but with reduced motor performance. These data suggest that a synergy formulation is robust and may lie at the lowest level of a hierarchical neural controller. Hypotheses and proposed solutions: Equilibrium point hypothesis and threshold control In the Equilibrium Point hypothesis, all movements are generated by the nervous system through a gradual transition of equilibrium points along a desired trajectory. "Equilibrium point" in this sense is taken to mean a state where a field has zero force, meaning opposing muscles are in a state of balance with each other, like two rubber bands pulling the joint to a stable position. Equilibrium point control is also called "threshold control" because signals sent from the CNS to the periphery are thought to modulate the threshold length of each muscle. In this theory, motor neurons send commands to muscles, which changes the force–length relation within a muscle, resulting in a shift of the system's equilibrium point. The nervous system would not need to directly estimate limb dynamics, but rather muscles and spinal reflexes would provide all the necessary information about the system's state. The equilibrium-point hypothesis is also reported to be well suited for the design of biomechanical robots controlled by appropriated internal models. Hypotheses and proposed solutions: Force control and internal models The force control hypothesis states that the nervous system uses calculation and direct specification of forces to determine movement trajectories and reduce DOFs. 
In this theory, the nervous system must form internal models—a representation of the body's dynamics in terms of the surrounding environment. A nervous system which controls force must generate torques based on predicted kinematics, a process called inverse dynamics. Both feed-forward (predictive) and feedback models of motion in the nervous system may play a role in this process. Hypotheses and proposed solutions: Uncontrolled manifold (UCM) hypothesis It has been noted that the nervous system controls particular variables relevant to performance of a task, while leaving other variables free to vary; this is called the uncontrolled manifold hypothesis (UCM). The uncontrolled manifold is defined as the set of variables not affecting task performance; variables perpendicular to this set in Jacobian space are considered controlled variables (CM). For example, during a sit-to-stand task, head and center-of-mass position in the horizontal plane are more tightly controlled than other variables such as hand motion. Another study indicates that the quality of tongue's movements produced by bio-robots, which are controlled by a specially designed internal model, is practically uncorrelated with the stiffness of the tongue; in other words, during the speech production the relevant parameter is the quality of speech, while the stiffness is rather irrelevant. At the same time, the strict prescription of the stiffness' level to the tongue's body affects the speech production and creates some variability, which is however, not significant for the quality of speech (at least, in the reasonable range of stiffness' levels). UCM theory makes sense in terms of Bernstein's original theory because it constrains the nervous system to only controlling variables relevant to task performance, rather than controlling individual muscles or joints. Unifying theories: Not all theories about the selection of movement are mutually exclusive. Necessarily, they all involve reduction or elimination of redundant DOFs. Optimal feedback control is related to UCM theory in the sense that the optimal control law may not act along certain dimensions (the UCM) of lesser importance to the nervous system. Furthermore, this lack of control in certain directions implies that controlled variables will be more tightly correlated; this correlation is seen in the low-dimensionality of muscle synergies. Furthermore, most of these theories incorporate some sort of feedback and feed-forward models that the nervous system must utilize. Most of these theories also incorporate some sort of hierarchical neural control scheme, usually with cortical areas at the top and peripheral outputs at the lowest level. However, none of the theories is perfect; the DOF problem will continue to be relevant as long as the nervous system is imperfectly understood.
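As a concrete illustration of the synergy-extraction methods named in the muscle synergy section above, the sketch below builds synthetic "EMG" from two known non-negative synergies and factorizes it with non-negative matrix factorization. The number of muscles, the synthetic data and the use of scikit-learn's NMF are assumptions for illustration; this does not reproduce the analysis pipeline of any of the cited studies.

```python
# Minimal sketch: extracting muscle synergies from (synthetic) EMG with
# non-negative matrix factorization (NMF), one of the methods named above.
# Real studies apply this to recorded EMG; here the "recordings" are
# simulated from two known synergies so that recovery can be checked.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_samples = 8, 400

# Two assumed synergies: fixed ratios of co-activation across 8 muscles.
synergies = np.array([
    [1.0, 0.8, 0.6, 0.0, 0.0, 0.2, 0.0, 0.4],
    [0.0, 0.1, 0.0, 1.0, 0.7, 0.5, 0.9, 0.0],
])

# Time-varying, non-negative activation of each synergy (the "neural commands").
activations = np.abs(rng.normal(size=(n_samples, 2)))

# Synthetic EMG = activations x synergies, plus a little non-negative noise.
emg = activations @ synergies + 0.02 * rng.random((n_samples, n_muscles))

# Factorize the EMG back into 2 synergies and their activations.
model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
extracted_activations = model.fit_transform(emg)   # (samples x synergies)
extracted_synergies = model.components_            # (synergies x muscles)

reconstruction = extracted_activations @ extracted_synergies
variance_explained = 1 - np.sum((emg - reconstruction) ** 2) / np.sum(emg ** 2)
print("variance accounted for by 2 synergies:", round(float(variance_explained), 3))
```

In real studies the same factorization is applied to recorded EMG, and the number of synergies is typically chosen by how much variance a low-dimensional reconstruction accounts for.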
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**President (video game)** President (video game): President is a 1987 game released by Kevin Toms for the Amstrad CPC, Commodore 64 and ZX Spectrum. Gameplay: Following on from Toms' Football Manager and Software Star games, President is a game where the player takes control of a small country, and decides whether to be a dictator, or a hero. The player has to balance the wants and needs of their virtual citizens, while also balancing the books and trying to build up an army and search for oil. Reception: Your Sinclair gave the game a positive review, awarding it 7/10. Similarly, Sinclair User gave the game 4/5. However, Crash were less favourable, only awarding it 29%.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Travel guitar** Travel guitar: Travel guitars are small guitars with a full or nearly full scale-length. In contrast, a reduced scale-length is typical for guitars intended for children, which have scale-lengths of one-quarter (ukulele guitar, or guitalele), one-half, and three-quarter. Examples: Examples of travel guitars include the C. F. Martin Backpacker, a very small guitar with a body shaped like an elongated triangle, similar in shape to certain types of psaltery, and designed to be very portable and inexpensive while still being constructed of quality woods. The Backpacker is famous for having originally been designed by Robert McAnally before Martin took over the design, and was the first guitar to be taken into space; it has also been taken up Mount Everest. Other examples include the C. F. Martin Little Martin and the Taylor Baby Taylor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pot (poker)** Pot (poker): The pot in poker refers to the sum of money that players wager during a single hand or game, according to the betting rules of the variant being played. It is likely that the word pot is related to or derived from the word jackpot. Pot (poker): At the conclusion of a hand, either by all but one player folding, or by showdown, the pot is won or shared by the player or players holding the winning cards. Sometimes a pot can be split between many players. This is particularly true in high-low games where not only the highest hand can win, but under appropriate conditions, the lowest hand will win a share of the pot. Pot (poker): See "all in" for more information about side pots.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Equivalent narcotic depth** Equivalent narcotic depth: Equivalent narcotic depth (END) (historically also equivalent nitrogen depth) is used in technical diving as a way of estimating the narcotic effect of a breathing gas mixture, such as nitrox, heliox or trimix. The method is used, for a given breathing gas mix and dive depth, to calculate the equivalent depth which would produce about the same narcotic effect when breathing air.The equivalent narcotic depth of a breathing gas mix at a particular depth is calculated by finding the depth at which breathing air would have the same total partial pressure of narcotic components as the breathing gas in question.Since air is composed of approximately 21% oxygen and 79% nitrogen, it makes a difference whether oxygen is considered narcotic, and how narcotic it is considered relative to nitrogen. If oxygen is considered to be equally narcotic to nitrogen, the narcotic gases make up 100% of the mix, or equivalently the fraction of the total gases which are narcotic is 1.0. Oxygen is assumed equivalent in narcotic effect to nitrogen for this purpose by some authorities and certification agencies. In contrast, other authorities and agencies consider oxygen to be non-narcotic, and group it with helium and other potential non-narcotic components, or less narcotic, and group it with gases like hydrogen, which has a narcotic effect estimated at about 55% of nitrogen based on lipid solubility.Research continues into the nature and mechanism of inert gas narcosis, and for objective methods of measurement for comparison of the severity at different depths and different gas compositions. Oxygen narcosis: Although oxygen has greater lipid solubility than nitrogen and therefore should be more narcotic according to the Meyer-Overton correlation, it is likely that some of the oxygen is metabolised, thus reducing its effect to a level similar to that of nitrogen or less.There are also known exceptions to the Meyer-Overton correlation. Some gases that should be very narcotic based on their high solubility in oil, are much less narcotic than predicted. Anesthetic research has shown that for a gas to be narcotic, its molecule must bind to receptors on the neurons, and some molecules have a shape that is not conducive to such binding. It is unknown if and how oxygen binds to neuronal receptors, so the measurable fact that oxygen is more oil-soluble than nitrogen, does not necessarily mean it is more narcotic than nitrogen.Since there is some evidence that oxygen plays a part in the narcotic effects of a gas mixture, some organisations prefer assuming that it is narcotic to the previous method of considering only the nitrogen component as narcotic, since this assumption is more conservative, and the NOAA diving manual recommends treating oxygen and nitrogen as equally narcotic as a way to simplify calculations, given that no measured value is available.The situation is further complicated by the effects of inert gas narcosis being significantly variable between divers using the same gas mixture, and between occasions for the same diver on the same gas and dive profile. Oxygen narcosis: Objective testing has failed to demonstrate oxygen narcosis, and research continues. There has been difficulty in identifying a reliable method of objectively measuring gas narcosis, but quantitative electroencephalography (EEG) has produced interesting results. Quantification of the more subtle effects of inert gas narcosis is difficult. 
Psychometric tests can be variable and affected by learning effects, and participant motivation. In principle, objective neurophysiological measurements like quantitative electroencephalogram (qEEG) analysis and the critical flicker fusion frequency (CFFF) could be used to get objective measurements.Some studies have shown a decrease in CFFF during air-breathing dives at 4 bar (30 msw), but have not detected a change with partial pressure of pure oxygen within the breathable range. The results with CFFF for nitrogen do not scale well with partial pressure at greater depths.Hyperbaric inert gas narcosis is associated with depressed brain activity when measured with an EEG. A functional connectivity metric based on the so-called mutual information analysis has been developed, and summarized using the global efficiency network measure. This method has successfully differentiated between breathing air at the surface and air at 50 m, and even showed an effect at 18 m on air, but did not show a difference associated with pressure for heliox exposures. The lack of change with heliox suggests that the effect of hyperbaric nitrogen is measured, and not a direct pressure effect.The EEG functional connectivity metric did not change while breathing hyperbaric oxygen within the safe range for testing, which indicates that oxygen does not produce the same changes in brain electrical activity associated with high partial pressures of nitrogen, which suggests that oxygen is not narcotic in the same way as nitrogen. Carbon dioxide narcosis: Although carbon dioxide (CO2) is known to be more narcotic than nitrogen – a rise in end-tidal alveolar partial pressure of CO2 of 10 millimetres of mercury (13 mbar) caused an impairment of both mental and psychomotor functions of approximately 10% – the effects of carbon dioxide retention are not considered in these calculations, as the concentration of CO2 in the supplied breathing gas is normally low, and the alveolar concentration is mostly affected by diver exertion and ventilation issues, and indirectly by work of breathing due to equipment and gas density effects.The driving mechanism of CO2 narcosis in divers is acute hypercapnia. The potential causes can be split into four groups: insufficient ventilation, excessive dead space, increased metabolic carbon dioxide production, and high carbon dioxide content of the breathing gas, usually only a problem with rebreathers. Other components of the breathing gas mixture: It is generally accepted as of 2023, that helium has no known narcotic effect at any depth at which gas can be breathed, and can be disregarded as a contributor to inert gas narcosis. Other gases which may be considered include hydrogen and neon. Standards: The standards recommended by the recreational certification agencies are basically arbitrary, as the actual effects of breathing gas narcosis are poorly understood, and the effects quite variable between individual divers. Some standards are more conservative than others, and in almost all cases it is the responsibility of the individual diver to make the choice and accept the consequence of their decision, except during training programs where standards can be enforced if the agency chooses to do so. One agency, GUE, prescribes the gas mixtures their members are allowed to use, but even that requirement and membership of the organisation is ultimately the choice of the diver. 
Professional divers may be legally obliged to comply with the codes of practice under which they work, and contractually obliged to follow the requirements of the operations manual of their employer, in terms of occupational health and safety legislation. Standards: Some training agencies, such as CMAS, GUE, and PADI, include oxygen as equivalent to nitrogen in their equivalent narcotic depth (END) calculations. PSAI considers oxygen narcotic but less so than nitrogen. Others, like BSAC, IANTD, NAUI and TDI, do not consider oxygen narcotic. Calculations: In diving calculations it is assumed, unless otherwise stipulated, that the atmospheric pressure is 1 bar or 1 atm and that the diving medium is water. The ambient pressure at depth is the sum of the hydrostatic pressure due to depth and the atmospheric pressure on the surface. Some early (1978) experimental results suggest that, at raised partial pressures, nitrogen, oxygen and carbon dioxide have narcotic properties, and that the mechanism of CO2 narcosis differs fundamentally from that of N2 and O2 narcosis; more recent work suggests a significant difference between the N2 and O2 mechanisms. Other components of breathing gases for diving may include hydrogen, neon, and argon, all of which are known or thought to be narcotic to some extent. The formula can be extended to include these gases if desired. The argon normally found in air at about 1% by volume is assumed to be present in the nitrogen component in the same ratio to nitrogen as in air, which simplifies calculation. Calculations: Since, in the absence of conclusive evidence, oxygen may or may not be considered narcotic, there are two ways to calculate END depending on which opinion is followed. Calculations: Oxygen considered narcotic Since for these calculations oxygen is usually assumed to be equally narcotic to nitrogen, the ratio considered is that of the sum of nitrogen and oxygen in the breathing gas to the sum in air, where air is approximated as consisting entirely of narcotic gas. In this system all nitrox mixtures are assumed to be narcotically indistinguishable from air. The other common calculation assumes that oxygen is not narcotic, so the oxygen fraction is multiplied by a relative narcotic value of 0 on both sides of the equation. Calculations: Metres The partial pressure, in bar, of a component gas in a mixture at a particular depth in metres is given by:
partial pressure = fraction of gas × (depth/10 + 1)
So the equivalent narcotic depth can be calculated as follows:
partial pressure of narcotic gases in air at the END = partial pressure of narcotic gases in the trimix at the given depth
or
(fraction of O2 × relative narcotic strength + fraction of N2 × 1) in air × (END/10 + 1) = (fraction of O2 × relative narcotic strength + fraction of N2 × 1) in trimix × (depth/10 + 1)
which gives, for oxygen deemed equal in narcotic strength to nitrogen:
1.0 × (END/10 + 1) = (fraction of O2 + fraction of N2) in trimix × (depth/10 + 1)
resulting in:
END = (depth + 10) × (fraction of O2 + fraction of N2) in trimix − 10
Since (fraction of O2 + fraction of N2) in a trimix = (1 − fraction of helium), the following formula is equivalent:
END = (depth + 10) × (1 − fraction of helium) − 10
As an example, for a gas mix containing 40% helium being used at 60 metres, the END is:
END = (60 + 10) × (1 − 0.4) − 10 = 70 × 0.6 − 10 = 42 − 10 = 32 metres
So at 60 metres on this mix, the diver would feel approximately the same narcotic effect as a dive on air to 32 metres.
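As a sanity check on the metres formula and the worked example above, the following short function computes END under the convention that oxygen is treated as equally narcotic to nitrogen (so only the helium fraction matters). It is an illustrative helper only, not a dive-planning tool.

```python
# Minimal sketch of the metres formula above, with oxygen treated as
# equally narcotic to nitrogen. Illustration only -- not dive planning.
def end_metres(depth_m, fraction_helium):
    """Equivalent narcotic depth in metres for a given helium fraction."""
    return (depth_m + 10.0) * (1.0 - fraction_helium) - 10.0

# Worked example from the text: trimix with 40% helium at 60 m.
print(end_metres(60, 0.40))   # -> 32.0 metres
```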
Calculations: Feet The partial pressure of a gas in a mixture at a particular depth in feet is given by:
partial pressure = fraction of gas × (depth/33 + 1)
So the equivalent narcotic depth can be calculated as follows:
partial pressure of narcotic gases in air at the END = partial pressure of narcotic gases in the trimix at the given depth
or
(fraction of O2 + fraction of N2) in air × (END/33 + 1) = (fraction of O2 + fraction of N2) in trimix × (depth/33 + 1)
which gives:
1.0 × (END/33 + 1) = (fraction of O2 + fraction of N2) in trimix × (depth/33 + 1)
resulting in:
END = (depth + 33) × (fraction of O2 + fraction of N2) in trimix − 33
Since (fraction of O2 + fraction of N2) in a trimix = (1 − fraction of helium), the following formula is equivalent:
END = (depth + 33) × (1 − fraction of helium) − 33
As an example, for a gas mix containing 40% helium being used at 200 feet, the END is:
END = (200 + 33) × (1 − 0.4) − 33 = 233 × 0.6 − 33 = 140 − 33 = 107 feet
So at 200 feet on this mix, the diver would feel the same narcotic effect as a dive on air to 107 feet. Calculations: Oxygen not considered equally narcotic to nitrogen The ratio of nitrogen between the gas mixture and air is considered. Oxygen may be factored in at a narcotic ratio chosen by the user, or assumed to be negligible. In this system nitrox mixtures are not considered equivalent to air.
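The alternative conventions described above (oxygen not narcotic, or narcotic with some chosen relative strength) follow from the same balance of narcotic partial pressures, with a relative narcotic factor applied to the oxygen fraction on both sides of the equation. The sketch below is an assumed generalisation for illustration, taking air as 21% oxygen and 79% nitrogen; it is not a dive-planning tool.

```python
# Sketch of the same END calculation with a configurable relative narcotic
# factor for oxygen (1.0 = as narcotic as nitrogen, 0.0 = not narcotic).
# Air is taken here as 21% oxygen / 79% nitrogen. Illustration only.
def end_metres(depth_m, f_o2, f_n2, o2_factor=1.0):
    """Equivalent narcotic depth in metres for a trimix."""
    mix_narcotic = o2_factor * f_o2 + f_n2
    air_narcotic = o2_factor * 0.21 + 0.79
    return (mix_narcotic / air_narcotic) * (depth_m + 10.0) - 10.0

# Trimix 20/40 (20% O2, 40% He, 40% N2) at 60 m under the two conventions:
print(round(end_metres(60, 0.20, 0.40, o2_factor=1.0), 1))  # oxygen counted as narcotic -> 32.0
print(round(end_metres(60, 0.20, 0.40, o2_factor=0.0), 1))  # oxygen not counted -> 25.4
```

With the factor set to 1.0 the result reduces to the formula in the metres section above; with the factor set to 0.0 it reduces to the nitrogen-only convention, giving a shallower equivalent depth for the same mix.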
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Decoy (chess)** Decoy (chess): In chess, a decoy is a tactic that lures an enemy man off its square and away from its defensive role. Typically this means away from a square on which it defends another piece or threat. The tactic is also called a deflection. Usually the piece is decoyed to a particular square via the sacrifice of a piece on that square. A piece so sacrificed is called a decoy. When the piece decoyed or deflected is the king, the tactic is known as attraction. In general in the middlegame, the sacrifice of a decoy piece is called a diversionary sacrifice. Examples: The game Honfi–Barczay, Kecskemet 1977, with Black to play, illustrates two separate decoys. First, the white queen is set up on c4 for a knight fork: 1... Rxc4! 2. Qxc4Next, the fork is executed by removing the sole defender of the a3-square: 2... Qxb2!+ 3. Rxb2 Na3+ 4. Kc1Finally, a zwischenzug decoys (attracts) the king to b2: 4... Bxb2+After either 5.Kxb2 Nxc4+ 6.Kc3 Rxe4, or 5.Kd1 Nxc4, Black is two pawns ahead and should win comfortably. Examples: In this position, after the moves 1.Rf8+ Kxf8 (forced) 2.Nd7+ Ke7 3.Nxb6, White wins the queen and the game. A similar, but more complex position is described by Huczek. Examples: In the diagrammed position from Vidmar–Euwe, Carlsbad 1929, Black had just played 33...Qf4, threatening mate on h2. White now uncorks the elegant combination 34.Re8+ Bf8 (forced) 35.Rxf8+ (attraction) Kxf8 (forced) 36.Nf5+ (discovered check) Kg8 (36...Ke8 37.Qe7#) 37.Qf8+ (attraction) 1–0 Black resigns. (If 37...Kxf8 then 38.Rd8#. If 37...Kh7 then 38.Qg7#.) The combination after 33...Qf4 features two separate examples of the attraction motif. Examples: This example shows a position from the game Dementiev–Dzindzichashvili, URS 1972. White had just played 61.g6 (with the threat 62.Qh7+ Kf8 63.Rxf5+). However, Black continued with the crushing 61...Rh1+ (attraction) 62. Kxh1 (best) Nxg3+ (the white rook is pinned) 63.Kh2 Nxh5 and White has dropped his queen to the knight fork. In the game, White resigned after 61...Rh1+. Examples: Perhaps the most celebrated game featuring a decoy theme is Petrosian–Pachman, Bled 1961, which also involved a queen sacrifice. Pachman resigned after 19.Qxf6+ (attraction) Kxf6 20.Be5+ Kg5 21.Bg7! setting a mating net. In the game Menchik–Graf, Semmering 1937, Graf resigned after 21.Rd7, deflecting Black's queen. (If 21...Qxd7, then 22.Qxh5 with mate to follow; 21.Qxh5 immediately wins only a pawn after 21...Qxh2+.) Often a wing pawn serves as a decoy in endgames. In the game Ivkov–Taimanov, Belgrade 1956, Black resigned in the position shown because White has an easy win by using his passed a2-pawn as a decoy to lure Black's king away from the center and to the queenside, allowing easy promotion of the h6-pawn.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Western Disturbance** Western Disturbance: A western disturbance is an extratropical storm originating in the Mediterranean region that brings sudden winter rain to the northern parts of the Indian subcontinent, which extends as east as up to northern parts of Bangladesh and South eastern Nepal. It is a non-monsoonal precipitation pattern driven by the westerlies. The moisture in these storms usually originates over the Mediterranean Sea, the Caspian Sea and the Black Sea. Extratropical storms are a global phenomena with moisture usually carried in the upper atmosphere, unlike their tropical counterparts where the moisture is carried in the lower atmosphere. In the case of the Indian subcontinent, moisture is sometimes shed as rain when the storm system encounters the Himalayas. Western disturbances are more frequent and stronger in the winter season.Western disturbances are important for the development of the Rabi crop, which includes the locally important staple wheat. Formation: Western disturbances originate in the Mediterranean region. A high-pressure area over Ukraine and neighbourhood consolidates, causing the intrusion of cold air from polar regions towards an area of relatively warmer air with high moisture. This generates favorable conditions for cyclogenesis in the upper atmosphere, which promotes the formation of an eastward-moving extratropical depression. Traveling at speeds up to 12 m/s (43 km/h; 27 mph), the disturbance moves towards the Indian subcontinent until the Himalayas inhibits its development, upon which the depression rapidly weakens. The western disturbances are embedded in the mid-latitude subtropical westerly jet stream. Significance and impact: Western disturbances, specifically the ones in winter, bring moderate to heavy rain in low-lying areas and heavy snow to mountainous areas of the Indian Subcontinent. They are the cause of most winter and post-monsoon season rainfall across northwest India. Precipitation during the winter season has great importance in agriculture, particularly for the rabi crops. Wheat among them is one of the most important crops, which helps to meet India's food security. An average of four to five western disturbances form during the winter season. The rainfall distribution and amount varies with every western disturbance. Significance and impact: Western disturbances are usually associated with cloudy sky, higher night temperatures and unusual rain. Excessive precipitation due to western disturbances can cause crop damage, landslides, floods and avalanches. Over the Indo-Gangetic plains, they occasionally bring cold wave conditions and dense fog. These conditions remain stable until disturbed by another western disturbance. When western disturbances move across northwest India before the onset of monsoon, a temporary advancement of monsoon current appears over the region. Significance and impact: The strongest western disturbances usually occur in the northern parts of Pakistan, where flooding is reported number of times during the winter season. Effects on monsoon: Western disturbances start declining in numbers after winter. During the summer months of April and May, they move across north India. The southwest monsoon current generally progresses from east to west in the northern Himalayan region, unlike western disturbances which follow a west to east trend in north India with consequent rise in pressure carrying cold pool of air. This helps in the activation of monsoon in certain parts of northwest India. 
It also causes pre-monsoon rainfall, especially in northern India. The interaction of the monsoon trough with western disturbances may occasionally cause dense clouding and heavy precipitation. The 2013 North India floods, which killed more than 5000 people in a span of 3 days, are said to have been the result of one such interaction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Relcovaptan** Relcovaptan: Relcovaptan (SR-49059) is a non-peptide vasopressin receptor antagonist, selective for the V1a subtype. It has shown positive initial results for the treatment of Raynaud's disease and dysmenorrhoea, and as a tocolytic, although it is not yet approved for clinical use.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NGC 89** NGC 89: NGC 89 is a barred spiral or lenticular galaxy, part of Robert's Quartet, a group of four interacting galaxies. This member has a Seyfert 2 nucleus with extra-planar features emitting H-alpha radiation. There are filamentary features on each side of the disk, including a jet-like structure extending about 4 kpc in the NE direction. It may have lost its neutral hydrogen (H I) gas due to interactions with the other members of the group, most likely NGC 92.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Banach measure** Banach measure: In the mathematical discipline of measure theory, a Banach measure is a certain type of content used to formalize geometric area in problems where, because of the axiom of choice, a countably additive measure defined on all sets is unavailable. Traditionally, intuitive notions of area are formalized as a classical, countably additive measure. This has the unfortunate effect of leaving some sets with no well-defined area; a related consequence is that sets can be cut into finitely many pieces and reassembled, using only rigid motions, into sets of different size, which is the substance of the Banach–Tarski paradox. A Banach measure is a type of generalized measure introduced to sidestep this problem. A Banach measure on a set Ω is a finite, finitely additive measure μ ≠ 0, defined on all of ℘(Ω) (that is, for every subset of Ω), and whose value is 0 on finite subsets. Banach measure: A Banach measure on Ω which takes values in {0, 1} is called an Ulam measure on Ω. As Vitali's construction of a non-measurable set shows, Banach measures cannot be strengthened to countably additive ones. Banach measure: Stefan Banach showed that it is possible to define a Banach measure for the Euclidean plane, consistent with the usual Lebesgue measure. This means that every Lebesgue-measurable subset of R2 is also Banach-measurable, and the two measures agree on every set where the Lebesgue measure is defined. The existence of this measure proves the impossibility of a Banach–Tarski paradox in two dimensions: it is not possible to decompose a two-dimensional set of finite Lebesgue measure into finitely many sets that can be reassembled into a set with a different measure, because this would violate the properties of the Banach measure that extends the Lebesgue measure.
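The planar result referred to above is commonly stated as follows; this LaTeX fragment is a standard formulation added here for precision and is not quoted from the original article.

```latex
% Commonly stated form of Banach's planar result (added formulation):
% a finitely additive "area" defined on all subsets of the plane,
% agreeing with Lebesgue measure \lambda wherever the latter is defined,
% and invariant under isometries.
There exists $\mu : \mathcal{P}(\mathbb{R}^2) \to [0,\infty]$ such that
\begin{itemize}
  \item $\mu(A \cup B) = \mu(A) + \mu(B)$ whenever $A \cap B = \varnothing$ (finite additivity);
  \item $\mu(A) = \lambda(A)$ for every Lebesgue-measurable $A \subseteq \mathbb{R}^2$;
  \item $\mu(g(A)) = \mu(A)$ for every isometry $g$ of the plane and every $A \subseteq \mathbb{R}^2$.
\end{itemize}
```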
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acefylline** Acefylline: Acefylline (INN), also known as 7-theophyllineacetic acid, is a stimulant drug of the xanthine chemical class. It acts as an adenosine receptor antagonist. It is combined with diphenhydramine in the pharmaceutical preparation etanautine to help offset diphenhydramine induced drowsiness.A silanol–mannuronic acid conjugate of acefylline, acefylline methylsilanol mannuronate (INCI; trade name Xantalgosil C) is marketed as a lipolytic phosphodiesterase inhibitor. It is used as an ingredient in cosmeceuticals for the treatment of cellulite and as a skin conditioner.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Differential algebraic group** Differential algebraic group: In mathematics, a differential algebraic group is a differential algebraic variety with a compatible group structure. Differential algebraic groups were introduced by Cassidy (1972).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biotextile** Biotextile: Biotextiles are structures composed of textile fibers designed for use in specific biological environments, where their performance depends on biocompatibility and biostability with cells and biological fluids. Biotextiles include implantable devices such as surgical sutures, hernia repair fabrics, arterial grafts, artificial skin and parts of artificial hearts. Biotextile: They were first created 30 years ago by Dr. Martin W. King, a professor in North Carolina State University's College of Textiles. Medical textiles are a broader group which also includes bandages, wound dressings, hospital linen, preventive clothing and the like. Antiseptic biotextiles are textiles used to fight cutaneous bacterial proliferation; zeolite and triclosan are currently the most widely used agents. This antiseptic property helps inhibit the development of odours and bacterial proliferation, for example on the diabetic foot. New developments: In the new paradigm of tissue engineering, researchers are trying to develop textiles around which the body can form new tissue, so that the implant does not rely solely on synthetic foreign material. Graduate student Jessica Gluck has demonstrated that viable and functioning liver cells can be grown on textile scaffolds.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DIY Kindle Scanner** DIY Kindle Scanner: The DIY Kindle Scanner, or Do It Yourself Kindle Scanner, is a robotic device made from Lego Mindstorms which was designed and built by Peter Purgathofer from 2012 to 2013. The robot interfaces with Purgathofer's personal computer and a Kindle to make a copy of the Kindle e-book. This robot in effect bypasses the digital rights management system set in place to protect Kindle e-books. Background: Peter Purgathofer is an associate professor at the Vienna University of Technology in Austria. Background: When he released a video on Vimeo documenting the operation of the device, Purgathofer wrote that the project was meant to be an artistic reflection connecting the ideas of “book scanning, copyright, and digital rights management.” In a reply to an email, Purgathofer stated that the project was not meant to be a negative reaction against Kindle e-books, but rather a way to use both Lego Mindstorms and the Kindle in a way that neither was usually intended to be used. Operation: The robot is first set up so that it can operate the computer as well as hold the Kindle. The image capture software must already be running on the computer and the Kindle must be open to the first page of the book to be scanned into the computer. The robot then runs through a loop where it hits the spacebar to activate the camera on the computer and then uses finger-like robotic appendages to turn to the next page on the Kindle. This loop is then repeated until all pages have been scanned into the computer. Optical character recognition (OCR) software is then used to convert the scanned images into a duplicate of the original Kindle e-book in a plain text file. Reaction: Several critics have recognized that more direct means of bypassing digital rights management are available. In this context, the DIY Kindle Scanner has been labeled as a type of Rube Goldberg machine.Additionally, Cory Doctorow made the claim that the project was in fact a legal means of bypassing digital rights management. This claim has been supported with the argument that the DIY Kindle Scanner simply exploits the analog hole which is applicable to all digital rights management systems.In light of the question of the legality of this project, Purgathofer has scanned only one e-book with this method and he explains that he has not shared the copy with anyone because he is worried that "It would get me in deep trouble."Furthermore, Purgathofer states that this project should not be associated with his academic work. In explanation, he said, "It’s a private project." General References: "DIY Kindle Scanner", Post-Digital Publishing Archive. Retrieved October 6, 2015. Hoffelder, Nate. "Kindle Plus Legos Plus Mac Equals DIY Scanner (video)", The Digital Reader. Retrieved October 27, 2015. Love, Dylan. "This Lego Robot Can Outwit Amazon's Kindle And Make Copies Of Your E-Books", Business Insider. Retrieved October 6, 2015.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Monosaccharide** Monosaccharide: Monosaccharides (from Greek monos: single, sacchar: sugar), also called simple sugars, are the simplest forms of sugar and the most basic units (monomers) from which all carbohydrates are built.They are usually colorless, water-soluble, and crystalline solids. Contrary to their name (sugars), only some monosaccharides have a sweet taste. Most monosaccharides have the formula (CH2O) (though not all molecules with this formula are monosaccharides). Monosaccharide: Examples of monosaccharides include glucose (dextrose), fructose (levulose), and galactose. Monosaccharides are the building blocks of disaccharides (such as sucrose and lactose) and polysaccharides (such as cellulose and starch). The table sugar used in everyday vernacular is itself a disaccharide sucrose comprising one molecule of each of the two monosaccharides D-glucose and D-fructose.Each carbon atom that supports a hydroxyl group is chiral, except those at the end of the chain. This gives rise to a number of isomeric forms, all with the same chemical formula. For instance, galactose and glucose are both aldohexoses, but have different physical structures and chemical properties. Monosaccharide: The monosaccharide glucose plays a pivotal role in metabolism, where the chemical energy is extracted through glycolysis and the citric acid cycle to provide energy to living organisms. Structure and nomenclature: With few exceptions (e.g., deoxyribose), monosaccharides have this chemical formula: (CH2O)x, where conventionally x ≥ 3. Monosaccharides can be classified by the number x of carbon atoms they contain: triose (3), tetrose (4), pentose (5), hexose (6), heptose (7), and so on. Structure and nomenclature: Glucose, used as an energy source and for the synthesis of starch, glycogen and cellulose, is a hexose. Ribose and deoxyribose (in RNA and DNA, respectively) are pentose sugars. Examples of heptoses include the ketoses, mannoheptulose and sedoheptulose. Monosaccharides with eight or more carbons are rarely observed as they are quite unstable. In aqueous solutions monosaccharides exist as rings if they have more than four carbons. Structure and nomenclature: Linear-chain monosaccharides Simple monosaccharides have a linear and unbranched carbon skeleton with one carbonyl (C=O) functional group, and one hydroxyl (OH) group on each of the remaining carbon atoms. Therefore, the molecular structure of a simple monosaccharide can be written as H(CHOH)n(C=O)(CHOH)mH, where n + 1 + m = x; so that its elemental formula is CxH2xOx. By convention, the carbon atoms are numbered from 1 to x along the backbone, starting from the end that is closest to the C=O group. Monosaccharides are the simplest units of carbohydrates and the simplest form of sugar. Structure and nomenclature: If the carbonyl is at position 1 (that is, n or m is zero), the molecule begins with a formyl group H(C=O)− and is technically an aldehyde. In that case, the compound is termed an aldose. Otherwise, the molecule has a ketone group, a carbonyl −(C=O)− between two carbons; then it is formally a ketone, and is termed a ketose. Ketoses of biological interest usually have the carbonyl at position 2. Structure and nomenclature: The various classifications above can be combined, resulting in names such as "aldohexose" and "ketotriose". Structure and nomenclature: A more general nomenclature for open-chain monosaccharides combines a Greek prefix to indicate the number of carbons (tri-, tetr-, pent-, hex-, etc.) 
with the suffixes "-ose" for aldoses and "-ulose" for ketoses. In the latter case, if the carbonyl is not at position 2, its position is then indicated by a numeric infix. So, for example, H(C=O)(CHOH)4H is pentose, H(CHOH)(C=O)(CHOH)3H is pentulose, and H(CHOH)2(C=O)(CHOH)2H is pent-3-ulose. Structure and nomenclature: Open-chain stereoisomers Two monosaccharides with equivalent molecular graphs (same chain length and same carbonyl position) may still be distinct stereoisomers, whose molecules differ in spatial orientation. This happens only if the molecule contains a stereogenic center, specifically a carbon atom that is chiral (connected to four distinct molecular sub-structures). Those four bonds can have any of two configurations in space distinguished by their handedness. In a simple open-chain monosaccharide, every carbon is chiral except the first and the last atoms of the chain, and (in ketoses) the carbon with the keto group. Structure and nomenclature: For example, the triketose H(CHOH)(C=O)(CHOH)H (glycerone, dihydroxyacetone) has no stereogenic center, and therefore exists as a single stereoisomer. The other triose, the aldose H(C=O)(CHOH)2H (glyceraldehyde), has one chiral carbon—the central one, number 2—which is bonded to groups −H, −OH, −C(OH)H2, and −(C=O)H. Therefore, it exists as two stereoisomers whose molecules are mirror images of each other (like a left and a right glove). Monosaccharides with four or more carbons may contain multiple chiral carbons, so they typically have more than two stereoisomers. The number of distinct stereoisomers with the same diagram is bounded by 2c, where c is the total number of chiral carbons. Structure and nomenclature: The Fischer projection is a systematic way of drawing the skeletal formula of an acyclic monosaccharide so that the handedness of each chiral carbon is well specified. Each stereoisomer of a simple open-chain monosaccharide can be identified by the positions (right or left) in the Fischer diagram of the chiral hydroxyls (the hydroxyls attached to the chiral carbons). Most stereoisomers are themselves chiral (distinct from their mirror images). In the Fischer projection, two mirror-image isomers differ by having the positions of all chiral hydroxyls reversed right-to-left. Mirror-image isomers are chemically identical in non-chiral environments, but usually have very different biochemical properties and occurrences in nature. Structure and nomenclature: While most stereoisomers can be arranged in pairs of mirror-image forms, there are some non-chiral stereoisomers that are identical to their mirror images, in spite of having chiral centers. This happens whenever the molecular graph is symmetrical, as in the 3-ketopentoses H(CHOH)2(CO)(CHOH)2H, and the two halves are mirror images of each other. In that case, mirroring is equivalent to a half-turn rotation. For this reason, there are only three distinct 3-ketopentose stereoisomers, even though the molecule has two chiral carbons. Structure and nomenclature: Distinct stereoisomers that are not mirror-images of each other usually have different chemical properties, even in non-chiral environments. Therefore, each mirror pair and each non-chiral stereoisomer may be given a specific monosaccharide name. For example, there are 16 distinct aldohexose stereoisomers, but the name "glucose" means a specific pair of mirror-image aldohexoses. 
In the Fischer projection, one of the two glucose isomers has the hydroxyl at left on C3, and at right on C4 and C5; while the other isomer has the reversed pattern. These specific monosaccharide names have conventional three-letter abbreviations, like "Glc" for glucose and "Thr" for threose. Structure and nomenclature: Generally, a monosaccharide with n asymmetrical carbons has 2^n stereoisomers. The number of open-chain stereoisomers for an aldose monosaccharide is larger by one than that of a ketose monosaccharide of the same length. Every ketose will have 2^(n−3) stereoisomers, where n > 2 is the number of carbons. Every aldose will have 2^(n−2) stereoisomers, where n > 2 is the number of carbons. Structure and nomenclature: Stereoisomers that differ in the arrangement of the −OH and −H groups at only one of the asymmetric or chiral carbon atoms are referred to as epimers (this does not apply to the carbons bearing the carbonyl functional group). Structure and nomenclature: Configuration of monosaccharides Like many chiral molecules, the two stereoisomers of glyceraldehyde will gradually rotate the polarization direction of linearly polarized light as it passes through them, even in solution. The two stereoisomers are identified with the prefixes D- and L-, according to the sense of rotation: D-glyceraldehyde is dextrorotatory (rotates the polarization axis clockwise), while L-glyceraldehyde is levorotatory (rotates it counterclockwise). Structure and nomenclature: The D- and L- prefixes are also used with other monosaccharides, to distinguish two particular stereoisomers that are mirror-images of each other. For this purpose, one considers the chiral carbon that is furthest removed from the C=O group. Its four bonds must connect to −H, −OH, −C(OH)H2, and the rest of the molecule. If the molecule can be rotated in space so that the directions of those four groups match those of the analog groups in D-glyceraldehyde's C2, then the isomer receives the D- prefix. Otherwise, it receives the L- prefix. Structure and nomenclature: In the Fischer projection, the D- and L- prefixes specify the configuration at the carbon atom that is second from the bottom: D- if the hydroxyl is on the right side, and L- if it is on the left side. Note that the D- and L- prefixes do not indicate the direction of rotation of polarized light, which is a combined effect of the arrangement at all chiral centers. However, the two enantiomers will always rotate the light in opposite directions, by the same amount. See also D/L system. Structure and nomenclature: Cyclisation of monosaccharides (hemiacetal formation) A monosaccharide often switches from the acyclic (open-chain) form to a cyclic form, through a nucleophilic addition reaction between the carbonyl group and one of the hydroxyl groups of the same molecule. The reaction creates a ring of carbon atoms closed by one bridging oxygen atom. The resulting molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. The reaction is easily reversed, yielding the original open-chain form. Structure and nomenclature: In these cyclic forms, the ring usually has five or six atoms. These forms are called furanoses and pyranoses, respectively, by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the double bonds of these two molecules).
For example, the aldohexose glucose may form a hemiacetal linkage between the aldehyde group on carbon 1 and the hydroxyl on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a seven-atom ring (the same ring as that of oxepane), rarely encountered, are called septanoses. Structure and nomenclature: For many monosaccharides (including glucose), the cyclic forms predominate, in the solid state and in solution, and therefore the same name is commonly used for the open- and closed-chain isomers. Thus, for example, the term "glucose" may signify glucofuranose, glucopyranose, the open-chain form, or a mixture of the three. Structure and nomenclature: Cyclization creates a new stereogenic center at the carbonyl-bearing carbon. The −OH group that replaces the carbonyl's oxygen may end up in two distinct positions relative to the ring's midplane. Thus each open-chain monosaccharide yields two cyclic isomers (anomers), denoted by the prefixes α- and β-. The molecule can change between these two forms by a process called mutarotation, which consists of a reversal of the ring-forming reaction followed by another ring formation. Structure and nomenclature: Haworth projection The stereochemical structure of a cyclic monosaccharide can be represented in a Haworth projection. In this diagram, the α-isomer for the pyranose form of a D-aldohexose has the −OH of the anomeric carbon below the plane of the carbon atoms, while the β-isomer has the −OH of the anomeric carbon above the plane. Pyranoses typically adopt a chair conformation, similar to that of cyclohexane. In this conformation, the α-isomer has the −OH of the anomeric carbon in an axial position, whereas the β-isomer has the −OH of the anomeric carbon in an equatorial position (considering D-aldohexose sugars). Derivatives: A large number of biologically important modified monosaccharides exist, including amino sugars such as galactosamine, glucosamine, sialic acid and N-acetylglucosamine; sulfosugars such as sulfoquinovose; and others such as ascorbic acid, mannitol and glucuronic acid.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Boron porphyrins** Boron porphyrins: Boron porphyrins are a variety of porphyrin, a common macrocycle used for photosensitization and metal trapping applications, that incorporate boron. The central four nitrogen atoms in a porphyrin macrocycle form a unique molecular pocket which is known to accommodate transition metals of various sizes and oxidation states. Due to the diversity of binding modes available to porphyrin, there is a growing interest in introducing other elements (i.e. main group elements) into this pocket. Boron porphyrins: Boron in particular has been shown to prefer binding to porphyrin in a 2:1 stoichiometry, primarily due to its small atomic radius, but the Group XIII element will bind in a 1:1 ratio with corrole, a macromolecule with a structure similar to porphyrin but with a smaller N4 pocket. Boron porphyrins are of interest because of the unique geometric environment to which both boron and porphyrin are subjected upon B-N(pyrrole) bond formation. These new geometric motifs lead to novel reactivity, one of the most surprising examples being sterically-induced reductive coupling. Possible applications for boron porphyrins include BNCT delivery agents and OLED devices. Also of interest are molecules containing both boron and porphyrin moieties, but without B-N(pyrrole) bonds. Examples include diketonate-porphyrin compounds and dyads (two-component molecules) containing the classic BODIPY dye. Synthesis: Boron porphyrins first appeared in the literature during the 1960's and 1970's, in initially available literature the complex was never well characterized. The Boron porphyrin compounds can be synthesized either from the free base porphyrin or from a lithium porphyrin complex as starting material. Two representative examples are shown here. The first is the porphyrin free base reacted with BX3 in the presence of water. Synthesis: The second is Li2(ttp) reacted with BX3. The (BX2)2(por) can undergo reduction to form a B-B bond and eliminate X2, giving (BX)2(por). From here, the halides can be replaced with BuLi to give (B-Bu)2(por), reacted with alcohols to give (B-OR)2(por), or even undergo halogen abstraction via weakly-coordinating anions to give [(B-B)(por)]2+. Geometry: One of the major differences between p-block-element-centered porphyrins and transition-metal-centered porphyrins is the far smaller size of the interstitial atom, especially in the case of the first-row p-block. Other than protons, the next smallest atom known to bind to the central N4 pocket is lithium. The first two isolated lithium porphyrin complexes each reported a 2:1 metal to base ratio, and XRD suggested both lithium atoms reside out of the porphyrin plane.Boron has a covalent radius of 85 pm, significantly smaller than lithium's 133 pm. This suggests the porphyrin pocket is more likely to accommodate two boron atoms rather than one. Indeed, each boron porphyrin synthesized thus far has adopted a ratio of 2:1, with a range of orientations relative to the N4 plane. The boron atoms can exist in the same plane as the porphyrin (both with and without additional out-of-plane B-X bonds), or out of N4 plane in either a cisoid or transoid geometry. Geometry: This coordination motif is interesting because it introduces both boron and porphyrin to geometries they do not regularly adopt. Porphyrin readily binds to transition metals, which are capable of octahedral or square planar geometries. 
Boron, without available d-orbitals, typically adopts a trigonal planar or tetrahedral local bonding environment. Diboryl porphyrins, on the other hand, find boron in a pseudo-tetrahedral local environment and introduce a tetragonal distortion to the porphyrin, as can be seen in the DFT image above. Geometry: Corroles are distinct from porphyrins in that they contain one less methine to bridge between pyrrole units, creating a lower-symmetry compound and a smaller N4 pocket. For boron chemistry, this slightly smaller core allows for the possibility of binding to a single boron, whereas the porphyrin pocket has thus far always bound two. For such monoboryl corroles, DFT studies have suggested the boron preferentially binds to the dipyrromethene (A) site shown here, in which stability is attained by maximizing both BX—HN hydrogen bonding and BH—HN dihydrogen bonding, in addition to minimizing steric crowding. Geometry: The Brothers group has shown the stereochemical implications of comparing diboryl porphyrin with diboryl corrole: porphyrin prefers transoid orientation of the diboryl unit, whereas corrole prefers the cisoid orientation. Non-central boron-porphyrin interactions: Two examples of boron-containing compounds that have been linked to porphyrin are BODIPY and diketonate. Non-central boron-porphyrin interactions: The BODIPY chromophore acts as an antenna: it absorbs a broad range of UV-visible light, then emits at a wavelength compatible with porphyrin absorption, allowing for efficient energy transfer. This work has been extended to triads and to porphyrins with various core transition metals, some displaying multiphoton excitation. On the other hand, when boron difluoride β-diketonate is used as an antenna, the emission-absorption overlap is small and little change in the porphyrin's optical properties is observed. Though this chromophore is preferable to BODIPY in certain applications, it is not an effective antenna for porphyrin. Reactivity: Reduction One consequence of geometric strain on both the boron and the porphyrin moieties is unique reactivity. The Brothers group was able to demonstrate that reductive coupling, wherein two BX2 units inside the porphyrin pocket become X-B-B-X, occurs only with X=Br and only when the substrates are within the porphyrin pocket. DFT calculations show that for X=Cl or F, the reaction is endothermic and non-spontaneous. However, for X=Br, the reduction is spontaneous, which was consistent with experimental findings. Further, when the same reaction is simulated with two porphyrin halves ((dipyrromethene)BX2), it is non-spontaneous even for X=Br, suggesting the steric strain of the porphyrin ring to be the driving force behind the reduction reaction. Reactivity: Hydrolysis Hydrolysis is one of the primary reactions to occur in diboryl porphyrin complexes. In this reaction, RBOBR(por) reacts with water, exchanging a B-R bond for a B-OH bond and liberating the R group. Hydrolysis products are important intermediates in the synthesis of the B-O-B(por) compounds from BX2(por) compounds. In fact, simply performing column chromatography on (BF2)2(por) on silica gives the partial hydrolysis product B2OF2(por). Reactivity: DFT computations show that hydrolysis, as in the scheme shown here, is energetically favorable (breaking of a relatively weak B-C bond, formation of a strong B-O bond, formation of benzene). However, only one of the two phenyl groups is observed to undergo hydrolysis. This suggests thermodynamic favorability is not the only factor at play. 
Rather, as Belcher et al. suggest, there is a significant steric component to this reaction. The boron in the porphyrin ring plane undergoes substitution, while the out-of-plane boron retains its phenyl bond. Reactivity: Halogen abstraction (Also see Geometry section above for a discussion of the B-B bonding environment.) Abstraction of halogens with two equivalents of sodium tetrakis[3,5-bis(trifluoromethyl)phenyl]borate gives the dication with both boron atoms within the porphyrin plane. Two reversible reduction waves occur at reduction potentials lower than that of the free base.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Deep diving** Deep diving: Deep diving is underwater diving to a depth beyond the norm accepted by the associated community. In some cases this is a prescribed limit established by an authority, while in others it is associated with a level of certification or training, and it may vary depending on whether the diving is recreational, technical or commercial. Nitrogen narcosis becomes a hazard below 30 metres (98 ft) and hypoxic breathing gas is required below 60 metres (200 ft) to lessen the risk of oxygen toxicity. Deep diving: For some recreational diving agencies, "Deep diving" or "Deep diver" may be a certification awarded to divers who have been trained to dive to a specified depth range, generally deeper than 30 metres (98 ft). However, the Professional Association of Diving Instructors (PADI) defines anything from 18 to 30 metres (59 to 98 ft) as a "deep dive" in the context of recreational diving (other diving organisations vary), and considers deep diving a form of technical diving. In technical diving, a depth below about 60 metres (200 ft) where hypoxic breathing gas becomes necessary to avoid oxygen toxicity may be considered a deep dive. In professional diving, a depth that requires special equipment, procedures, or advanced training may be considered a deep dive. Deep diving: Deep diving can mean something else in the commercial diving field. For instance, early experiments carried out by COMEX using heliox and trimix attained far greater depths than any recreational technical diving. One example was its "Janus 4" open-sea dive to 501 metres (1,640 ft) in 1977. The open-sea diving depth record was achieved in 1988 by a team of COMEX and French Navy divers who performed pipeline connection exercises at a depth of 534 metres (1,750 ft) in the Mediterranean Sea as part of the "Hydra 8" programme employing heliox and hydrox. The latter avoids the high-pressure nervous syndrome (HPNS) caused by helium and eases breathing due to its lower density. These divers needed to breathe special gas mixtures because they were exposed to very high ambient pressure (more than 54 times atmospheric pressure). Deep diving: An atmospheric diving suit (ADS) allows very deep dives of up to 700 metres (2,300 ft). These suits are capable of withstanding the pressure at great depth, permitting the diver to remain at normal atmospheric pressure. This eliminates the problems associated with breathing pressurised gases. In 2006 Chief Navy Diver Daniel Jackson set a record of 610 metres (2,000 ft) in an ADS. On 20 November 1992 COMEX's "Hydra 10" experiment simulated a dive in an onshore hyperbaric chamber with hydreliox. Théo Mavrostomos spent two hours at a simulated depth of 701 metres (2,300 ft). Depth ranges in underwater diving: The surface of the body of water is assumed to be at or near sea level and under atmospheric pressure. The differing depth ranges of freediving – breath-hold diving, without breathing apparatus – are not included. Particular problems associated with deep dives: Deep diving has more hazards and greater risk than basic open-water diving. Nitrogen narcosis, the "narks" or "rapture of the deep", starts with feelings of euphoria and over-confidence but then leads to numbness and memory impairment similar to alcohol intoxication. Decompression sickness, or the "bends", can happen if a diver ascends too rapidly, when excess inert gas leaves solution in the blood and tissues and forms bubbles. These bubbles produce mechanical and biochemical effects that lead to the condition. 
The onset of symptoms depends on the severity of the tissue gas loading and may develop during ascent in severe cases, but is frequently delayed until after reaching the surface. Bone degeneration (dysbaric osteonecrosis) is caused by the bubbles forming inside the bones, most commonly in the upper arm and the thighs. Deep diving involves a much greater danger of all of these, and presents the additional risk of oxygen toxicity, which may lead to convulsions underwater. Very deep diving using a helium-oxygen mixture (heliox) carries a risk of high-pressure nervous syndrome. Coping with the physical and physiological stresses of deep diving requires good physical conditioning. Using open-circuit scuba equipment, consumption of breathing gas is proportional to ambient pressure – so at 50 metres (164 ft), where the pressure is 6 bars (87 psi), a diver breathes six times as much as on the surface (1 bar, 14.5 psi); a worked numeric sketch of this scaling appears at the end of this entry. Heavy physical exertion makes the diver breathe even more gas, and the gas becomes denser with depth, requiring increased effort to breathe and leading to an increased risk of hypercapnia – an excess of carbon dioxide in the blood. The need to do decompression stops increases with depth. A diver at 6 metres (20 ft) may be able to dive for many hours without needing to do decompression stops. At depths greater than 40 metres (131 ft), a diver may have only a few minutes at the deepest part of the dive before decompression stops are needed. In the event of an emergency, the diver cannot make an immediate ascent to the surface without risking decompression sickness. All of these considerations result in the amount of breathing gas required for deep diving being much greater than for shallow open water diving. The diver needs a disciplined approach to planning and conducting dives to minimise these additional risks. Particular problems associated with deep dives: Many of these problems are avoided by the use of surface supplied breathing gas, closed diving bells, and saturation diving, at the cost of logistical complexity, reduced maneuverability of the diver, and greater expense. Dealing with depth: Both equipment and procedures can be adapted to deal with the problems of greater depth. Usually the two are combined, as the procedures must be adapted to suit the equipment, and in some cases the equipment is needed to facilitate the procedures. Dealing with depth: Equipment adaptations for deeper diving The equipment used for deep diving depends on both the depth and the type of diving. Scuba is limited to equipment that can be carried by the diver or is easily deployed by the dive team, while surface-supplied diving equipment can be more extensive, and much of it stays above the water where it is operated by the diving support team. Dealing with depth: Scuba divers carry larger volumes of breathing gas to compensate for the increased gas consumption and decompression stops. Rebreathers, though more complex, manage gas much more efficiently than open-circuit scuba. Use of helium-based breathing gases such as trimix reduces nitrogen narcosis and the toxic effects of oxygen at depth. A diving shot, a decompression trapeze, or a decompression buoy can help divers control their ascent and return to the surface at a position that can be monitored by their surface support team at the end of a dive. Decompression can be accelerated by using specially blended breathing gas mixtures containing lower proportions of inert gas. Surface supply of breathing gases reduces the risk of running out of gas. 
In-water decompression can be minimized by using dry bells and decompression chambers. Hot-water suits can prevent hypothermia due to the high heat loss when using helium-based breathing gases. Diving bells and lockout submersibles expose the diver to the direct underwater environment for less time, and provide a relatively safe shelter that does not require decompression, with a dry environment where the diver can rest, take refreshment, and if necessary, receive first aid in an emergency. Breathing gas reclaim systems reduce the cost of using helium-based breathing gases, by recovering and recycling exhaled surface supplied gas, analogous to rebreathers for scuba diving. Dealing with depth: The most radical equipment adaptation for deep diving is to isolate the diver from the direct pressure of the environment, using armoured atmospheric diving suits that allow diving to depths beyond those currently possible at ambient pressure. These rigid, articulated exoskeleton suits are sealed against water and withstand external pressure while providing life support to the diver for several hours at an internal pressure of approximately normal surface atmospheric pressure. This avoids the problems of inert gas narcosis, decompression sickness, barotrauma, oxygen toxicity, high work of breathing, compression arthralgia, high-pressure nervous syndrome and hypothermia, but at the cost of reduced mobility and dexterity, logistical problems due to the bulk and mass of the suits, and high equipment costs. Dealing with depth: Procedural adaptations for deeper diving Procedural adaptations for deep diving can be classified as those procedures for operating specialized equipment, and those that apply directly to the problems caused by exposure to high ambient pressures. Dealing with depth: The most important procedure for dealing with physiological problems of breathing at high ambient pressures associated with deep diving is decompression. This is necessary to prevent inert gas bubble formation in the body tissues of the diver, which can cause severe injury. Decompression procedures have been derived for a large range of pressure exposures, using a large range of gas mixtures. These basically entail a slow and controlled reduction in pressure during ascent by using a restricted ascent rate and decompression stops, so that the inert gases dissolved in the tissues of the diver can be eliminated harmlessly during normal respiration. Dealing with depth: Gas management procedures are necessary to ensure that the diver has access to suitable and sufficient breathing gas at all times during the dive, both for the planned dive profile and for any reasonably foreseeable contingency. Scuba gas management is logistically more complex than surface supply, as the diver must either carry all the gas, follow a route where previously arranged gas supply depots have been set up (stage cylinders), or rely on a team of support divers who will provide additional gas at pre-arranged signals or points on the planned dive. 
On very deep scuba dives or on occasions where long decompression times are planned, it is a common practice for support divers to meet the primary team at decompression stops to check if they need assistance, and these support divers will often carry extra gas supplies in case of need. Rebreather diving can reduce the bulk of the gas supplies for long and deep scuba dives, at the cost of more complex equipment with more potential failure modes, requiring more demanding procedures and higher procedural task loading. Dealing with depth: Surface supplied diving distributes the task loading between the divers and the support team, who remain in the relative safety and comfort of the surface control position. Gas supplies are limited only by what is available at the control position, and the diver only needs to carry sufficient bailout capacity to reach the nearest place of safety, which may be a diving bell or lockout submersible. Dealing with depth: Saturation diving is a procedure used to reduce the high-risk decompression a diver is exposed to during a long series of deep underwater exposures. By keeping the diver under high pressure for the whole job, and only decompressing at the end of several days to weeks of underwater work, a single decompression can be done at a slower rate without adding much overall time to the job. During the saturation period, the diver lives in a pressurized environment at the surface, and is transported under pressure to the underwater work site in a closed diving bell. Ultra-deep diving: Mixed gas Amongst technical divers, there are divers who participate in ultra-deep diving on scuba below 200 metres (656 ft). This practice requires high levels of training, experience, discipline, fitness and surface support. Only twenty-six people are known to have ever dived to at least 240 metres (790 ft) on self-contained breathing apparatus recreationally. The "Holy Grail" of deep scuba diving was the 300 metres (980 ft) mark, first achieved by John Bennett in 2001, and it has been achieved only five times since. Ultra-deep diving: The difficulties involved in ultra-deep diving are numerous. Although commercial and military divers often operate at those depths, or even deeper, they are surface supplied. All of the complexities of ultra-deep diving are magnified by the requirement of the diver to carry (or provide for) their own gas underwater. These lead to rapid descents and "bounce dives". Unsurprisingly, this has led to extremely high mortality rates amongst those who practise ultra-deep diving. Notable ultra-deep diving fatalities include Sheck Exley, John Bennett, Dave Shaw and Guy Garman. Mark Ellyatt, Don Shirley and Pascal Bernabé were involved in serious incidents and were fortunate to survive their dives. Despite the extremely high mortality rate, the Guinness World Records continues to maintain a record for scuba diving (although the record for deep diving with compressed air has not been updated since 1999, given the high accident rate). Amongst those who do survive, significant health issues are reported. 
Mark Ellyatt is reported to have suffered permanent lung damage; Pascal Bernabé (who was injured on his dive when a light on his mask imploded) and Nuno Gomes reported short to medium term hearing loss. Serious issues that confront divers engaging in ultra-deep diving on self-contained breathing apparatus include: Compression arthralgia Deep aching pain in the knees, shoulders, fingers, back, hips, neck, and ribs caused by exposure to high ambient pressure at a relatively high rate of descent (i.e., in "bounce dives"). Ultra-deep diving: High-pressure nervous syndrome (HPNS) HPNS, brought on by breathing helium under extreme pressure, causes tremors, myoclonic jerking, somnolence, EEG changes, visual disturbance, nausea, dizziness, and decreased mental performance. Symptoms of HPNS are exacerbated by rapid compression, a feature common to ultra-deep "bounce" dives. Isobaric counterdiffusion (ICD) ICD is the diffusion of one inert gas into body tissues while another inert gas is diffusing out. It is a complication that can occur during decompression, and that can result in the formation or growth of bubbles without changes in the environmental pressure. Ultra-deep diving: Decompression algorithm There are no reliable decompression algorithms tested for such depths on the assumption of an immediate surfacing. Almost all decompression methodology for such depths is based upon saturation, and calculates ascent times in days rather than hours. Accordingly, ultra-deep dives are almost always conducted on a partly experimental basis. In addition, "ordinary" risks like gas reserves, hypothermia, dehydration and oxygen toxicity are compounded by extreme depth and exposure. Much technical equipment is simply not designed for the necessarily greater stresses at depth, and reports of key equipment (including submersible pressure gauges) imploding are not uncommon. Ultra-deep diving: Air A severe risk in ultra-deep air diving is deep water blackout, or depth blackout, a loss of consciousness at depths below 50 metres (160 ft) with no clear primary cause, associated with nitrogen narcosis, a neurological impairment with anaesthetic effects caused by high partial pressure of nitrogen dissolved in nerve tissue, and possibly acute oxygen toxicity. The term is not in widespread use at present, as where the actual cause of blackout is known, a more specific term is preferred. The depth at which deep water blackout occurs is extremely variable and unpredictable. Before the popular availability of trimix, attempts were made to set world record depths using air. The extreme risk of both narcosis and oxygen toxicity in the divers contributed to a high fatality rate in those attempting records. In his book, Deep Diving, Bret Gilliam chronicles the various fatal attempts to set records as well as the smaller number of successes. From the comparatively few who survived extremely deep air dives: In deference to the high accident rate, the Guinness World Records have ceased to publish records for deep air dives, after Manion's dive. Fatalities during depth record attempts: Maurice Fargues, a member of the GRS (Groupement de Recherches Sous-marines, Underwater Research Group headed by Jacques Cousteau), died in 1947 after losing consciousness at depth in an experiment to see how deep a scuba diver could go. He reached 120 m (394 ft) before failing to return line signals. He became the first diver to perish using an Aqua-Lung. 
Fatalities during depth record attempts: Hope Root died on 3 December 1953 off the coast of Miami Beach trying to set a deep diving record of 125 m (410 ft) with an Aqua-Lung; he passed 152 m (500 ft) and was not seen again. Fatalities during depth record attempts: Archie Forfar and Ann Gunderson died on 11 December 1971 off the coast of Andros Island, while attempting to dive to 146 m (479 ft), which would have been the world record at the time. Their third team member, Jim Lockwood, only survived due to his use of a safety weight that dropped when he lost consciousness at 122 m (400 ft), causing him to start an uncontrolled ascent before being intercepted by a safety diver at a depth of around 91 m (300 ft). Sheck Exley, who was acting as another safety diver at 300 feet, inadvertently managed to set the depth record when he descended towards Forfar and Gunderson, who were both still alive at the 480-foot level, although completely incapacitated by narcosis. Exley was forced to give up his attempt at around 142 m (465 ft) when the narcosis very nearly overcame him as well. The bodies of Forfar and Gunderson were never recovered. Fatalities during depth record attempts: Sheck Exley died in 1994 at 268 m (879 ft) in an attempt to reach the bottom of Zacatón in a dive that would have extended his own world record (at the time) for deep diving. Dave Shaw died in 2005 in an attempt at the deepest ever body recovery and deepest ever dive on a rebreather at 270 m (886 ft). Brigitte Lenoir, planning to attempt the deepest dive ever made by a woman with a rebreather to 230 m (750 ft), died on 14 May 2010 in Dahab while ascending from a training dive at 147 m (482 ft). Guy Garman died on 15 August 2015 in an unsuccessful attempt to dive to 370 m (1,200 ft). The Virgin Island Police Department confirmed that Guy Garman's body was recovered on 18 August 2015. Theodora Balabanova died at Toroneos Bay, Greece, in September 2017, attempting to break the women's deep dive record with 231 m (758 ft). She did not complete the decompression stops and surfaced too early. Wacław Lejko, attempting 275 m (902 ft) in Lake Garda, died in September 2017. His body was recovered with an ROV at 230 m (750 ft). Adam Krzysztof Pawlik, attempting a 316 m (1,037 ft) dive in Lake Garda, died on 18 October 2018. His body was located at 284 m (932 ft). Sebastian Marczewski reached the target depth of 275 m (902 ft) in Lake Garda but his tanks became entangled in his ascent line at 150 m (490 ft). He died on 6 July 2019.
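The open-circuit gas-consumption scaling referenced earlier in this entry (under "Particular problems associated with deep dives") is straightforward to compute. A minimal sketch follows; the surface consumption rate of 20 litres per minute is an assumed illustrative figure, not a value from this article, and the pressure formula uses the common approximation of roughly 1 bar of added pressure per 10 metres of seawater.

```python
# Ambient pressure and open-circuit gas consumption versus depth.
# Assumptions (illustrative only): surface breathing rate of 20 L/min;
# seawater adds roughly 1 bar of pressure per 10 m of depth.

SURFACE_RATE_L_PER_MIN = 20.0  # assumed light-work breathing rate at the surface

def ambient_pressure_bar(depth_m: float) -> float:
    """Absolute pressure in bar: 1 bar of atmosphere plus ~1 bar per 10 m."""
    return 1.0 + depth_m / 10.0

def gas_consumption_l_per_min(depth_m: float) -> float:
    """Open-circuit consumption scales in proportion to ambient pressure."""
    return SURFACE_RATE_L_PER_MIN * ambient_pressure_bar(depth_m)

for depth in (0, 6, 30, 50, 60):
    p = ambient_pressure_bar(depth)
    rate = gas_consumption_l_per_min(depth)
    print(f"{depth:>3} m: {p:.1f} bar ambient, ~{rate:.0f} L/min of gas")
# At 50 m this gives 6.0 bar and ~120 L/min, i.e. six times the surface rate,
# matching the figure quoted in the text.
```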
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OR2A1** OR2A1: Olfactory receptor 2A1/2A42 is a protein that in humans is encoded by the OR2A1 gene. Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anthropological science fiction** Anthropological science fiction: The anthropologist Leon E. Stover says of science fiction's relationship to anthropology: "Anthropological science fiction enjoys the philosophical luxury of providing answers to the question "What is man?" while anthropology the science is still learning how to frame it".: 472  The editors of a collection of anthropological SF stories observed: Anthropology is the science of man. It tells the story from ape-man to spaceman, attempting to describe in detail all the epochs of this continuing history. Writers of fiction, and in particular science fiction, peer over the anthropologists' shoulders as the discoveries are made, then utilize the material in fictional works. Where the scientist must speculate reservedly from known fact and make a small leap into the unknown, the writer is free to soar high on the wings of fancy.: 12  Charles F. Urbanowicz, Professor of Anthropology, California State University, Chico has said of anthropology and SF: Anthropology and science fiction often present data and ideas so bizarre and unusual that readers, in their first confrontation with both, often fail to appreciate either science fiction or anthropology. Intelligence does not merely consist of fact, but in the integration of ideas -- and ideas can come from anywhere, especially good science fiction! The difficulty in describing category boundaries for 'anthropological SF' is illustrated by a reviewer of an anthology of anthropological SF, written for the journal American Anthropologist, which warned against too broad a definition of the subgenre, saying: "Just because a story has anthropologists as protagonists or makes vague references to 'culture' does not qualify it as anthropological science fiction, although it may be 'pop' anthropology." The writer concluded the book review with the opinion that only "twelve of the twenty-six selections can be considered as examples of anthropological science fiction.": 798 This difficulty of categorization explains the exclusions necessary when seeking the origins of the subgenre. Thus: Nineteenth-century utopian writings and lost-race sagas notwithstanding, anthropological science fiction is generally considered a late-twentieth-century phenomenon, best exemplified by the work of writers such as Ursula K. Le Guin, Michael Bishop, Joanna Russ, Ian Watson, and Chad Oliver.: 243  Again, questions of description are not simple as Gary Westfahl observes: ... others present hard science fiction as the most rigorous and intellectually demanding form of science fiction, implying that those who do not produce it are somehow failing to realize the true potential of science fiction. This is objectionable ...; writers like Chad Oliver and Ursula K. Le Guin, for example, bring to their writing a background in anthropology that makes their extrapolated aliens and future societies every bit as fascinating and intellectually involving as the technological marvels and strange planets of hard science fiction. Because anthropology is a social science, not a natural science, it is hard to classify their works as hard science fiction, but one cannot justly construe this observation as a criticism.: 189  Despite being described as a "late-twentieth-century phenomenon" (above) anthropological SF's roots can be traced further back in history. H. G. Wells (1866–1946) has been called "the Shakespeare of SF": 133  and his first anthropological story has been identified by anthropologist Leon E. 
Stover as "The Grisly Folk". Stover notes that this story is about Neanderthal Man, and writing in 1973,: 472  continues: "[the story] opens with the line 'Can these bones live?' Writers are still trying to make them live, the latest being Golding. Some others in between have been de Camp, Del Rey, Farmer, and Klass." A more contemporary example of the Neanderthal as subject is Robert J. Sawyer's trilogy "The Neanderthal Parallax" – here "scientists from an alternative earth in which Neanderthals superseded homo sapiens cross over to our world. The series as a whole allows Sawyer to explore questions of evolution and humanity's relationship to the environment.": 317 Authors and works: Chad Oliver Anthropological science fiction is best exemplified by the work of writers such as Ursula K. Le Guin, Michael Bishop, Joanna Russ, Ian Watson, and Chad Oliver. Of this pantheon, Oliver is alone in being also a professional anthropologist, author of academic tomes such as Ecology and Cultural Continuity as Contributing Factors in the Social Organization of the Plains Indians (1962) and The Discovery of Anthropology (1981) in addition to his anthropologically-inflected science fiction. Although he tried, in a superficial way, to separate these two aspects of his career, signing his anthropology texts with his given name "Symmes C. Oliver", he nonetheless saw them as productively interrelated. "I like to think," he commented in a 1984 interview, "that there's a kind of feedback ... that the kind of open-minded perspective in science fiction conceivably has made me a better anthropologist. And on the other side of the coin, the kind of rigor that anthropology has, conceivably has made me a better science fiction writer.": 243 Thus "Oliver's Unearthly Neighbors (1960) highlights the methods of ethnographic fieldwork by imagining their application to a nonhuman race on another world. His Blood's a Rover (1955 [1952]) spells out the problems of applied anthropology by sending a technical-assistance team to an underdeveloped planet. His Rite of Passage (1966 [1954]) is a lesson in the patterning of culture, how humans everywhere unconsciously work out a blueprint for living. Anthropological wisdom is applied to the conscious design of a new blueprint for American society in his Mother of Necessity (1972 [1955])". Oliver's The Winds of Time is a "science fiction novel giving an excellent introduction to the field methods of descriptive linguistics".: 96 In 1993 a journal of SF criticism requested from writers and critics of SF a list of their 'most neglected' writers, and Chad Oliver was listed in three replies. Among the works chosen were: Shadows in the Sun, Unearthly Neighbors, and The Shores of Another Sea. One respondent declared that "Oliver's anthropological SF is the precursor of more recent novels by Ursula K. Le Guin, Michael Bishop, and others"; another that "Chad Oliver was developing quiet, superbly crafted anthropological fictions long before anyone had heard of Le Guin; maybe his slight output and unassuming plots (and being out of print) have caused people to overlook the carefully thought-out ideas behind his fiction".In the novel Shadows in the Sun the protagonist, Paul Ellery, is an anthropologist doing field work in the town of Jefferson Springs, Texas—a place where he discovers extraterrestrial aliens. 
It has been remarked that: Not only are these aliens comprehensible in anthropological terms, but it is anthropology, rather than the physical sciences, that promises a solution to the problem of alien colonization. According to the science of anthropology, every society, regardless of its level of development, has to functionally meet certain human needs. The aliens of Jefferson Springs "had learned, long ago, that it was the cultural core that counted-the deep and underlying spirit and belief and knowledge, the tone and essence of living. Once you had that, the rest was window dressing. Not only that, but the rest, the cultural superstructure, was relatively equal in all societies (115; emphasis in original). For Ellery, the aliens are not "supermen" (a favorite Campbellian conceit): despite their fantastic technologies, they are ultimately ordinary people with the expected array of weaknesses – laziness, factionalism, arrogance – whose cultural life is as predictable as any Earth society's. Since they are not superior, they are susceptible to defeat, but the key lies not in the procurement of advanced technologies, but in the creative cultural work of Earth people themselves.: 248  A reviewer of The Shores of Another Sea finds the book "curiously flat despite its exploration of an almost mythical, and often horrific, theme".: 202  The reviewer's reaction is not surprising because, as Samuel Gerald Collins points out in the 'New Wave Anthropology' section of his comprehensive review of Chad Oliver's work: "In many ways, the novel is very much unlike Oliver's previous work; there is little moral resolution, nor is anthropology of much help in determining what motivates the aliens. In striking contrast to the familiar chumminess of the aliens in Shadows in the Sun and The Winds of Time, humans and aliens in Shores of Another Sea systematically misunderstand one another.": 253  Collins continues: In fact, the intervening decade between Oliver's field research and the publication of Shores [1971] had been one of critical self-reflection in the field of anthropology. In the United States, qualms about the Vietnam war, together with evidence that anthropologists had been employed as spies and propagandists by the US government, prompted critiques of anthropology's role in systems of national and global power. Various strains of what came to be known as dependency theory disrupted the self-congratulatory evolutionism of modernization models, evoking and critiquing a world system whose political economy structurally mandated unequal development. Less narrowly academic works such as Vine Deloria, Jr.'s, Custer Died for Your Sins (1969), combined with the efforts of civil-rights groups like the American Indian Movement, skewered anthropology's paternalist pretensions. Two major collections of essays -- Dell Hymes's Reinventing Anthropology (1972) and Talal Asad's Anthropology and the Colonial Encounter (1973) -- explored anthropology's colonial legacy and precipitated a critical engagement with the ethics and politics of ethnographic representation.: 253  At the conclusion of his essay, discussing Chad Oliver's legacy Collins says: The lesson of Chad Oliver for sf is that his Campbell-era commitments to the power of technology, rational thinking, and the evolutionary destiny of "humanity" came to seem an enshrinement of a Western imperialist vision that needed to be transcended, through a rethinking of otherness driven by anthropological theory and practice. 
Above all, Oliver's career speaks to many of the shared impulses and assumptions of anthropology and sf, connections that have only grown more multifarious and complex since his death in 1993.: 257 Ursula K. Le Guin It has often been observed that Ursula K. Le Guin's interest in anthropology and its influence on her fiction derives from the influence of both her mother Theodora Kroeber, and of her father, Alfred L. Kroeber.: 410 : 61 : 1 Warren G. Rochelle in his essay on Le Guin notes that from her parents she: acquired the "anthropological attitude" necessary for the observation of another culture – or for her, the invention of another culture: the recognition and appreciation of cultural diversity, the necessity to be a "close and impartial observer", who is objective, yet recognizes the inescapable subjectivity that comes with participation in an alien culture.: 410  Another critic has observed that Le Guin's "concern with cultural biases is evident throughout her literary career", and continues, In The Word for World is Forest (1972), for example, she explicitly demonstrates the failure of colonialists to comprehend other cultures, and shows how the desire to dominate and control interferes with the ability to perceive the other. Always Coming Home (1985) is an attempt to allow another culture to speak for itself through songs and music (available in cassette form), writings, and various unclassifiable fragments. Like a documentary, the text presents the audience with pieces of information that they can sift through and examine. But unlike a traditional anthropological documentary, there is no "voice-over" to interpret that information and frame it for them. The absence of "voice-over" commentary in the novel forces the reader to draw conclusions rather than rely on a scientific analysis which would be tainted with cultural blind spots. The novel, consequently, preserves the difference of the alien culture and removes the observing neutral eye from the scene until the very end. Authors and works: Le Guin's novel The Left Hand of Darkness has been called "the most sophisticated and technically plausible work of anthropological science fiction, insofar as the relationship of culture and biology is concerned",: 472  and also rated as "perhaps her most notable book".: 244  This novel forms part of Le Guin's Hainish Cycle (so termed because it develops as a whole "a vast story about diverse planets seeded with life by the ancient inhabitants of Hain").: 46–47  The series is "a densely textured anthropology, unfolding through a cycle of novels and stories and actually populated by several anthropologists and ethnologists".": 183  Le Guin employs the SF trope of inter-stellar travel which allows for fictional human colonies on other worlds developing widely differing social systems. For example, in The Left Hand of Darkness "a human envoy to the snowbound planet of Gethan struggles to understand its sexually ambivalent inhabitants".: 180  Published in 1969, this Le Guin novel: is only one of many subsequent novels that have dealt with androgyny and multiple gender/sex identities through a variety of approaches, from Samuel R. Delany's Triton (1976), Joanna Russ's Female Man (1975), Marge Piercy's Woman at the Edge of Time (1976), Marion Zimmer Bradley's Darkover series (1962–1996) and Octavia Butler's Xenogenesis Trilogy (1987-89). Though innovative in its time, it is not its construction of androgyny itself that is remarkable about Le Guin's text. 
Rather, it is her focus on the way that the androgynes are perceived and how they are constructed within a particular discourse, that of scientific observation. This discourse is manifested specifically in the language of anthropology, the social sciences as a whole, and diplomacy. This focus, in turn, places Le Guin's novel within a body of later works – such as Mary Gentle's Golden Witchbreed novels (1984-87) and C. J. Cherryh's Foreigner series (1994-96) – that deal with an outside observer's arrival on an alien planet, all of which indicate the difficulty of translating the life-style of an alien species into a language and cultural experience that is comprehensible. As such, these texts provide critiques of anthropological discourse that are similar to Trinh Minh-ha's attempts to problematize the colonialist beginnings and imperialistic undertones of anthropology as a science. Authors and works: Geoffrey Samuel has pointed out some specific anthropological aspects of Le Guin's fiction, noting that: the culture of the people of Gethen in The Left Hand of Darkness clearly owes a lot to North-West Coast Indian and Eskimo culture; the role of dreams of Athshe (in The Word for World is Forest) is very reminiscent of that described for the Temiar people of Malaysia; and the idea of a special vocabulary of terms of address correlated with a hierarchy of knowledge, in City of Illusions, recalls the honorific terminologies of many Far Eastern cultures (such as Java or Tibet). Authors and works: However, Fredric Jameson says of The Left Hand of Darkness that the novel is "constructed from a heterogeneous group of narrative modes ...", and that: ... we find here intermingled: the travel narrative (with anthropological data), the pastiche myth, the political novel (in the restricted sense of the drama of court intrigue), straight SF (the Hainish colonization, the spaceship in orbit around Gethen's sun), Orwellian dystopia ..., adventure story ..., and finally even, something like a multiracial love story (the drama of communication between the two cultures and species).: 267  Similarly, Adam Roberts warns against too narrow an interpretation of Le Guin's fiction, pointing out that her writing is always balanced and that "balance as such forms one of her major concerns. Both Left Hand and The Dispossessed (1974) balance form to theme, of symbol to narration, flawlessly".: 244–245  Nevertheless, there is no doubt that the novel The Left Hand of Darkness is steeped in anthropological thought, with one academic critic noting that "the theories of [French anthropologist] Claude Lévi-Strauss provide an access to understanding the workings of the myths" in the novel. Later in the essay the author explains: Unlike the open-ended corpus of actual myths that anthropologists examine, the corpus of myths in The Left Hand of Darkness is closed and complete. Therefore, it is possible to analyze the entire set of Gethenian myths and establish the ways in which they are connected. Kinship exchange, in the Lévi-Straussian sense, comprises their dominant theme. In them, Le Guin articulates the theme of exchange by employing contrary images – heat and cold, dark and light, home and exile, name and namelessness, life and death, murder and sex – so as finally to reconcile their contrariety. The myths present wholeness, or unity, as an ideal; but that wholeness is never merely the integrity of an individual who stands apart from society. 
Instead, it consists of the tenuous and temporary integration of individuals into social units.: 181
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Compact Reconnaissance Imaging Spectrometer for Mars** Compact Reconnaissance Imaging Spectrometer for Mars: The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) was a visible-infrared spectrometer aboard the Mars Reconnaissance Orbiter searching for mineralogic indications of past and present water on Mars. The CRISM instrument team comprised scientists from over ten universities and was led by principal investigator Scott Murchie. CRISM was designed, built, and tested by the Johns Hopkins University Applied Physics Laboratory. Objectives: CRISM was being used to identify locations on Mars that may have hosted water, a solvent considered important in the search for past or present life on Mars. In order to do this, CRISM was mapping the presence of minerals and chemicals that may indicate past interaction with water - low-temperature or hydrothermal. These materials include iron and oxides, which can be chemically altered by water, and phyllosilicates and carbonates, which form in the presence of water. All of these materials have characteristic patterns in their visible-infrared reflections and were readily seen by CRISM. In addition, CRISM was monitoring ice and dust particulates in the Martian atmosphere to learn more about its climate and seasons. Instrument overview: CRISM measured visible and infrared electromagnetic radiation from 362 to 3920 nanometers in 6.55 nanometer increments. The instrument had two modes, a multispectral untargeted mode and a hyperspectral targeted mode. In the untargeted mode, CRISM reconnoiters Mars, recording approximately 50 of its 544 measurable wavelengths at a resolution of 100 to 200 meters per pixel. In this mode CRISM mapped half of Mars within a few months after aerobraking and most of the planet after one year. The objective of this mode is to identify new scientifically interesting locations that could be further investigated. In targeted mode, the spectrometer measured energy in all 544 wavelengths. When the MRO spacecraft is at an altitude of 300 km, CRISM detects a narrow but long strip on the Martian surface about 18 kilometers across and 10,800 kilometers long. The instrument swept this strip across the surface as MRO orbits Mars to image the surface. Instrument design: The data collecting part of CRISM was called the Optical Sensor Unit (OSU) and consisted of two spectrographs, one that detected visible light from 400 to 830 nm and one that detected infrared light from 830 to 4050 nm. The infrared detector was cooled to –173° Celsius (–280° Fahrenheit) by a radiator plate and three cryogenic coolers. While in targeted mode, the instrument gimbals in order to continue pointing at one area even though the MRO spacecraft is moving. The extra time collecting data over a targeted area increases the signal-to-noise ratio as well as the spatial and spectral resolution of the image. This scanning ability also allowed the instrument to perform emission phase functions, viewing the same surface through variable amounts of atmosphere, which would be used to determine atmospheric properties. The Data Processing Unit (DPU) of CRISM performs in-flight data processing including compressing the data before transmission. Investigations: CRISM began its exploration of Mars in late 2006. 
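As a quick aside on the instrument-overview figures above: the quoted total of 544 measurable wavelengths is simply the stated spectral range divided by the sampling interval, counting both endpoints. A minimal check of that arithmetic (an illustrative calculation only, not an official instrument specification):

```python
# Number of CRISM spectral channels implied by the stated range and sampling:
# 362 to 3920 nanometers, sampled every 6.55 nanometers, counting both endpoints.

start_nm, end_nm, step_nm = 362.0, 3920.0, 6.55

channels = round((end_nm - start_nm) / step_nm) + 1
print(channels)  # 544, matching the 544 measurable wavelengths cited above
```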
Results from the OMEGA visible/near-infrared spectrometer on Mars Express (2003–present), the Mars Exploration Rovers (MER; 2003–2019), the TES thermal emission spectrometer on Mars Global Surveyor (MGS; 1997–2006), and the THEMIS thermal imaging system on Mars Odyssey (2004–present) helped to frame the themes for CRISM's exploration: Where and when did Mars have persistently wet environments? What is the composition of Mars' crust? What are the characteristics of Mars' modern climate? In November 2018, it was announced that CRISM had produced spurious pixels falsely indicating the minerals alunite, kieserite, serpentine and perchlorate. The instrument team found that some false positives were caused by a filtering step when the detector switches from a high luminosity area to shadows. Reportedly, 0.05% of the pixels were indicating perchlorate, now known to be a false high estimate by this instrument. However, both the Phoenix lander and the Curiosity rover measured 0.5% perchlorates in the soil, suggesting a global distribution of these salts. Perchlorate is of interest to astrobiologists, as it sequesters water molecules from the atmosphere and reduces its freezing point, potentially creating thin films of watery brine that, although toxic to most Earth life, could offer habitats for native Martian microbes in the shallow subsurface. (See: Life on Mars#Perchlorates) Persistently wet environments Aqueous minerals are minerals that form in water, either by chemical alteration of pre-existing rock or by precipitation out of solution. The minerals indicate where liquid water existed long enough to react chemically with rock. Which minerals form depends on temperature, salinity, pH, and composition of the parent rock. Which aqueous minerals are present on Mars therefore provides important clues to understanding past environments. The OMEGA spectrometer on the Mars Express orbiter and the MER rovers both uncovered evidence for aqueous minerals. OMEGA revealed two distinct kinds of past aqueous deposits. The first, containing sulfates such as gypsum and kieserite, is found in layered deposits of Hesperian age (Martian middle age, roughly from 3.7 to 3 billion years ago). The second, rich in several different kinds of phyllosilicates, instead occurs in rocks of Noachian age (older than about 3.7 billion years). The different ages and mineral chemistries suggest an early water-rich environment in which phyllosilicates formed, followed by a drier, more saline and acidic environment in which sulfates formed. The MER Opportunity rover spent years exploring sedimentary rocks formed in the latter environment, full of sulfates, salts, and oxidized iron minerals. Investigations: Soil forms from parent rocks through physical disintegration of rocks and by chemical alteration of the rock fragments. The types of soil minerals can reveal if the environment was cool or warm, wet or dry, or whether the water was fresh or salty. Because CRISM is able to detect many minerals in the soil or regolith, the instrument is being used to help decipher ancient Martian environments. CRISM has found a characteristic layering pattern of aluminum-rich clays overlying iron- and magnesium-rich clays in many areas scattered through Mars' highlands. Surrounding Mawrth Vallis, these "layered clays" cover hundreds of thousands of square kilometers. Similar layering occurs near the Isidis basin, in the Noachian plains surrounding Valles Marineris, and in Noachian plains surrounding the Tharsis plateau. 
The global distribution of layered clays suggests a global process. Layered clays are late Noachian in age, dating from the same time as water-carved valley networks. The layered clay composition is similar to what is expected for soil formation on Earth - a weathered upper layer leached of soluble iron and magnesium, leaving an insoluble aluminum-rich residue, with a lower layer that still retains its iron and magnesium. Some researchers have suggested that the Martian clay "layer cake" was created by soil-forming processes, including rainfall, at the time that valley networks formed. Lake and marine environments on Earth are favorable for fossil preservation, especially where the sediments they left behind are rich in carbonates or clays. Hundreds of highland craters on Mars have horizontally layered, sedimentary rocks that may have formed in lakes. CRISM has taken many targeted observations of these rocks to measure their mineralogy and how the minerals vary between layers. Variation between layers helps us to understand the sequence of events that formed the sedimentary rocks. The Mars Orbiter Camera found that where valley networks empty into craters, commonly the craters contain fan-shaped deposits. However, it was not completely clear if the fans formed by sediment deposition on dry crater floors (alluvial fans) or in crater lakes (deltas). CRISM discovered that in the fans' lowermost layers, there are concentrated deposits of clay. More clay occurs beyond the end of the fans on the crater floors, and in some cases there is also opal. On Earth, the lowermost layers of deltas are called bottomset beds, and they are made of clays that settled out of inflowing river water in quiet, deep parts of the lakes. This discovery supports the idea that many fans formed in crater lakes where, potentially, evidence for habitable environments could be preserved. Investigations: Not all ancient Martian lakes were fed by inflowing valley networks. CRISM discovered several craters on the western slope of Tharsis that contain "bathtub rings" of sulfate minerals and a kind of phyllosilicate called kaolinite. Both minerals can form together by precipitating out of acidic, saline water. These craters lack inflowing valley networks, showing that they were not fed by rivers - instead, they must have been fed by inflowing groundwater. Investigations: The identification of hot spring deposits was a priority for CRISM, because hot springs would have had energy (geothermal heat) and water, two basic requirements for life. One of the signatures of hot springs on Earth is deposits of silica. The MER Spirit rover explored a silica-rich deposit called "Home Plate" that is thought to have formed in a hot spring. CRISM has discovered other silica-rich deposits in many locations. Some are associated with central peaks of impact craters, which are sites of heating driven by meteor impact. Silica has also been identified on the flanks of volcanic cones inside the caldera of the Syrtis Major shield volcano, forming light-colored mounds that look like scaled-up versions of Home Plate. Elsewhere, in the westernmost parts of Valles Marineris, near the core of the Tharsis volcanic province, there are sulfate and clay deposits suggestive of "warm" springs. 
Hot spring deposits are one of the most promising areas on Mars to search for evidence for past life. One of the leading hypotheses for why ancient Mars was wetter than today is that a thick, carbon dioxide-rich atmosphere created a global greenhouse that warmed the surface enough for liquid water to occur in large amounts. Carbon dioxide ice in today's polar caps is too limited in volume to hold that ancient atmosphere. If a thick atmosphere ever existed, it was either blown into space by solar wind or impacts, or reacted with silicate rocks to become trapped as carbonates in Mars' crust. One of the goals that drove CRISM's design was to find carbonates, to try to solve this question about what happened to Mars' atmosphere. And one of CRISM's most important discoveries was the identification of carbonate bedrock in Nili Fossae in 2008. Soon thereafter, landed missions to Mars started identifying carbonates on the surface; the Phoenix Mars lander found 3–5 wt% calcite (CaCO3) at its northern lowland landing site, while the MER Spirit rover identified outcrops rich in magnesium-iron carbonate (16–34 wt%) in the Columbia Hills of Gusev crater. Later CRISM analyses identified carbonates in the rim of Huygens crater, which suggested that there could be extensive deposits of buried carbonates on Mars. However, a study by CRISM scientists estimated that all of the carbonate rock on Mars holds less carbon dioxide than is in the present Martian atmosphere. They determined that if a dense ancient Martian atmosphere did exist, it is probably not trapped in the crust. Investigations: Crustal composition Understanding the composition of Mars' crust and how it changed with time tells us about many aspects of Mars' evolution as a planet, and was a major goal of CRISM. Remote and landed measurements prior to CRISM, and analysis of Martian meteorites, all suggest that the Martian crust is made mostly of basaltic igneous rock composed mostly of feldspar and pyroxene. Images from the Mars Orbiter Camera on MGS showed that in some places the upper few kilometers of the crust are composed of hundreds of thin volcanic lava flows. TES and THEMIS both found mostly basaltic igneous rock, with scattered olivine-rich and even some quartz-rich rocks. Investigations: The first recognition of widespread sedimentary rock on Mars came from the Mars Orbiter Camera, which found that several areas of the planet - including Valles Marineris and Terra Arabia - have horizontally layered, light-toned rocks. Follow-up observations of those rocks' mineralogy by OMEGA found that some are rich in sulfate minerals, and that other layered rocks around Mawrth Vallis are rich in phyllosilicates. Both classes of minerals are signatures of sedimentary rocks. CRISM had used its improved spatial resolution to look for other deposits of sedimentary rock on Mars' surface, and for layers of sedimentary rock buried between layers of volcanic rock in Mars' crust. Investigations: Modern climates To understand Mars' ancient climate, and whether it might have created environments habitable for life, first we need to understand Mars' climate today. Each mission to Mars has made new advances in understanding its climate. Mars has seasonal variations in the abundances of water vapor, water ice clouds and hazes, and atmospheric dust. During southern summer, when Mars is closest to the Sun (at perihelion), solar heating can raise massive dust storms. 
Regional dust storms - ones having a 1000-kilometer scale - show surprising repeatability from one Mars year to the next. Once every decade or so, they grow into global-scale events. In contrast, during northern summer when Mars is furthest from the Sun (at aphelion), there is an equatorial water-ice cloud belt and very little dust in the atmosphere. Atmospheric water vapor varies in abundance seasonally, with the greatest abundances in each hemisphere's summer after the seasonal polar caps have sublimated into the atmosphere. During winter, both water and carbon dioxide frost and ices form on Mars' surface. These ices form the seasonal and residual polar caps. The seasonal caps - which form each autumn and sublimate each spring - are dominated by carbon dioxide ice. The residual caps - which persist year after year - consist mostly of water ice at the north pole and water ice with a thin veneer (a few tens of meters thick) of carbon dioxide ice at the south pole. Investigations: Mars' atmosphere is so thin and wispy that solar heating of dust and ice in the atmosphere - not heating of the atmospheric gases - is the more important factor in driving weather. Small, suspended particles of dust and water ice - aerosols - intercept 20–30% of incoming sunlight, even under relatively clear conditions. So variations in the amounts of these aerosols have a huge influence on climate. CRISM had taken three major kinds of measurements of dust and ice in the atmosphere: targeted observations whose repeated views of the surface provide a sensitive estimate of aerosol abundance; special global grids of targeted observations every couple of months designed especially to track spatial and seasonal variations; and scans across the planet's limb to show how dust and ice vary with height above the surface. Investigations: The south polar seasonal cap has a bizarre variety of bright and dark streaks and spots that appear during spring, as carbon dioxide ice sublimates. Prior to MRO, there were various ideas for processes that could form these strange features, a leading model being carbon dioxide geysers. CRISM had watched the dark spots grow during southern spring, and found that bright streaks forming alongside the dark spots are made of fresh, new carbon dioxide frost, pointing like arrows back to their sources - the same sources as the dark spots. The bright streaks probably form by expansion, cooling, and freezing of the carbon dioxide gas, forming a "smoking gun" to support the geyser hypothesis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**International Journal on Artificial Intelligence Tools** International Journal on Artificial Intelligence Tools: The International Journal on Artificial Intelligence Tools was founded in 1992 and is published by World Scientific. It covers research on artificial intelligence (AI) tools or tools that use AI, including architectures, languages and algorithms. Topics include AI in Bioinformatics, Cognitive Informatics, Knowledge-Based/Expert Systems and Object-Oriented Programming for AI. Abstracting and indexing: The journal is abstracted and indexed in: Inspec Science Citation Index Expanded ISI Alerting Services CompuMath Citation Index Current Contents/Engineering, Computing, and Technology
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**P-form electrodynamics** P-form electrodynamics: In theoretical physics, p-form electrodynamics is a generalization of Maxwell's theory of electromagnetism. Ordinary (viz. one-form) Abelian electrodynamics: We have a one-form A, a gauge symmetry A → A + dα, where α is an arbitrary fixed 0-form and d is the exterior derivative, and a gauge-invariant vector current J with density 1 satisfying the continuity equation d⋆J = 0, where ⋆ is the Hodge star operator. Alternatively, we may express J as a closed (n − 1)-form, but we do not consider that case here. F is a gauge-invariant 2-form defined as the exterior derivative F = dA. F satisfies the equation of motion d⋆F = ⋆J (this equation obviously implies the continuity equation). This can be derived from the action S = ∫_M [ ½ F ∧ ⋆F − A ∧ ⋆J ], where M is the spacetime manifold. p-form Abelian electrodynamics: We have a p-form B, a gauge symmetry B → B + dα, where α is an arbitrary fixed (p − 1)-form and d is the exterior derivative, and a gauge-invariant p-vector J with density 1 satisfying the continuity equation d⋆J = 0, where ⋆ is the Hodge star operator. Alternatively, we may express J as a closed (n − p)-form. C is a gauge-invariant (p + 1)-form defined as the exterior derivative C = dB. B satisfies the equation of motion d⋆C = ⋆J (this equation obviously implies the continuity equation). This can be derived from the action S = ∫_M [ ½ C ∧ ⋆C + (−1)^p B ∧ ⋆J ], where M is the spacetime manifold. Other sign conventions do exist. The Kalb–Ramond field is an example with p = 2 in string theory; the Ramond–Ramond fields whose charged sources are D-branes are examples for all values of p. In 11-dimensional supergravity or M-theory, we have a 3-form electrodynamics. Non-abelian generalization: Just as we have non-abelian generalizations of electrodynamics, leading to Yang–Mills theories, we also have non-abelian generalizations of p-form electrodynamics. They typically require the use of gerbes.
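Both parenthetical claims above (gauge invariance of the field strength, and the equation of motion implying the continuity equation) follow from the identity d∘d = 0. A short check of each, written in LaTeX with the same symbols used above:

```latex
% Gauge invariance of the field strength under B -> B + d\alpha:
\[
  C \;=\; dB \;\longmapsto\; d(B + d\alpha) \;=\; dB + d^{2}\alpha \;=\; dB \;=\; C .
\]
% Current conservation follows by applying d to the equation of motion d\star C = \star J:
\[
  d \star J \;=\; d\,(d \star C) \;=\; d^{2}(\star C) \;=\; 0 .
\]
```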
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MBASIC** MBASIC: MBASIC is the Microsoft BASIC implementation of BASIC for the CP/M operating system. MBASIC is a descendant of the original Altair BASIC interpreters that were among Microsoft's first products. MBASIC was one of the two versions of BASIC bundled with the Osborne 1 computer. The name "MBASIC" is derived from the disk file name MBASIC.COM of the BASIC interpreter. Environment: MBASIC version 5 required a CP/M system with at least 28 kB of random access memory (RAM) and at least one diskette drive. Unlike versions of Microsoft BASIC-80 that were customized by home computer manufacturers to use the particular hardware features of the computer, MBASIC relied only on the CP/M operating system calls for all input and output. Only the CP/M console (screen and keyboard), line printer, and disk devices were available. Environment: MBASIC in the uncustomized form had no functions for graphics, color, joysticks, mice, serial communications, networking, sound, or even a real-time clock function. MBASIC did not fully support the features of the host CP/M operating system, for example, it did not support CP/M's user areas for organizing files on a diskette. Since CP/M systems were typically single-user and stand alone, there was no provision for file or record locking, or any form of multitasking. Apart from these limitations, MBASIC was considered at the time to be a powerful and useful implementation of BASIC. Features: Language system MBASIC is an interpreter. Program source text was stored in memory in tokenized form, with BASIC keywords replaced by one-byte tokens which saved memory space and speeded execution. Any line prefixed with a line number was stored as program text; BASIC statements not prefixed with a line number were executed immediately as commands. Programs could be listed on the screen for editing, or saved to disk in either a compressed binary format or as plain ASCII text. Every source line was identified with a number, which could be used as the target of a GOTO or GOSUB transfer. Only line editing commands were provided. It was often beneficial to save a program as plain text and edit it with a full featured editor. Features: Program text, variables, disk buffers and the CP/M operating system itself all had to share the 64 kilobyte address space of the 8080 processor. Typically when first starting MBASIC there would be less than 32 kB memory available for programs and data, even on a machine equipped with a full 64 kilobytes of RAM. Comment lines, prefixed with the REM keyword or an apostrophe, could be placed in the program text but took up valuable memory space, which discouraged BASIC users from fully documenting their code. To allow larger and more complex programs to be run, later versions of MBASIC supported functions that allowed portions of program text to be read in and executed under program control (the " CHAIN " and MERGE statements). No support for "shell" command execution was provided, though this functionality could be duplicated by a determined programmer. Features: A particular advantage of MBASIC was the full-text error messages provided for syntax and run-time errors. MBASIC also had a "trace" function that displayed line numbers as they were executed. While this occupied the same screen space as normal program output, it was useful for detecting conditions such as endless loops. 
Features: Files and input/output Data could be read and stored to disk as either sequential files (delimited by the CP/M convention of CR/LF at the end of each line) or else as fixed-record-length random access files, which, given a sufficiently determined programmer, could be used to perform database-type record manipulation. The Microsoft Binary Format for floating point numbers was proprietary to the implementation, which meant that data could only be interchanged with other programs using ASCII text representation or else with extensive programming to convert the binary format. Features: Variables and data types MBASIC supported the following data types: 8-bit character data, in strings of length 0 to 255 characters; 16-bit integers; 32-bit floating point (single precision), equivalent to six decimal digits, with a two-digit exponent; 64-bit floating point (double precision), equivalent to sixteen decimal digits, with a two-digit exponent.String operators included substring selection, concatenation, assignment, and testing for equality. Features: Arrays of the above types were allowed with up to 7 dimensions, but no functions or operators worked on arrays; for example, there was no assignment of arrays. Unlike some other BASIC implementations of the time, MBASIC did not provide support for matrix operations, complex numbers, or a decimal (BCD) data type for financial calculations. All floating point operations were carried out in software since typical CP/M systems did not have floating point hardware. The built-in mathematics functions (sine, cosine, tangent, natural log, exponential, square root) only gave single precision results. A software pseudorandom number generator was provided; this relied on the user to key in a seed number to obtain a sequence of numbers useful for games and some simulations. MBASIC permitted but did not require the LET keyword for assignment statements. Features: Early versions of BASIC on microcomputers were infamous for one- or two-character variable names, which made the meanings of variables difficult to recall in complex programs. MBASIC version 5 allowed identifiers up to 40 characters long, which permitted programmers to give variables readable names. Features: Program flow control Program flow control in MBASIC was controlled by IF...THEN...ELSE... conditional tests, WHILE...WEND loops, and GOTO and GOSUB instructions. No CASE statement was available, although an ON...GOTO... (computed GOTO) provided multi-way branches. Subroutines had no parameters and all variables were global. MBASIC did not make structured programming mandatory for programmers and it was easy to write spaghetti code. PEEKs, POKEs, and user functions: No discussion of BASICs on the 8-bit computers of the late '70s and early '80s would be complete without mentioning the importance of the PEEK and POKE functions for directly reading and writing to memory. Since these systems typically had no memory protection, this allowed a programmer to access portions of the operating system, or functions that would not otherwise be available. This also provided opportunities for user programs to hang the system (by accident, usually). For example, a CP/M programmer might use a POKE function to allow BASIC to switch the console device to the serial port, if the system BIOS supported this. For machines with real-time clocks, a set of PEEK instructions might have been used to access the time. 
PEEKs, POKEs, and user functions: For more complex operations, MBASIC allowed user-defined functions that could be called from a BASIC program. These were typically placed in a reserved area of memory, or POKEd into string constants, as a series of machine codes (op codes). MBASIC also provided hardware INP and OUT instructions that read and wrote directly to the 8080 hardware input/output ports. This could be used to control peripheral devices from a BASIC program if the system hardware permitted. Any MBASIC programs that made use of PEEK and POKE, and of machine code user functions, were not portable between machines without modifications. Successors to MBASIC: Besides Microsoft's BASIC-80 for CP/M, a variant of MBASIC was also available for the ISIS-II operating system. MSX-BASIC is also a well-known successor of MBASIC, featuring several extensions specific to the MSX machines. Successors to MBASIC: All the functions of CP/M MBASIC were available in the IBM PC disk-based BASICA or GWBASIC, which made migration of programs from CP/M systems to PC-compatibles possible. The tokens used to represent keywords were different, so CP/M programs had to be saved in ASCII source form. Typically, screen-formatting escape sequences put into the CP/M version would be replaced with the cursor-positioning commands found in the PC versions of BASIC; otherwise little rewriting was needed. BASCOM: Microsoft sold a CP/M BASIC compiler (known as BASCOM) which used a similar source language to MBASIC. A program debugged under MBASIC could be compiled with BASCOM. Since program text was no longer in memory and the run-time elements of the compiler were smaller than the interpreter, more memory was available for user data. Speed of real program execution increased about threefold. BASCOM: Developers welcomed BASCOM as an alternative to the popular but slow and clumsy CBASIC. Unlike CBASIC, BASCOM did not need a preprocessor for MBASIC source code, so programs could be debugged interactively. A disadvantage was Microsoft's requirement of a 9% royalty for each compiled copy of a program and $40 for hardware-software combinations. The company also reserved the right to audit developers' financial records. Because authors' typical royalty rates for software were 10–25%, InfoWorld in 1980 stated that BASCOM's additional 9% royalty rate "could make software development downright unprofitable", concluding that "Microsoft has the technical solution [to CBASIC's flaws], but not the economic one". Importance of MBASIC: MBASIC was an important tool during the era of 8-bit CP/M computers. Skilled users could write routines in MBASIC to automate tasks that in modern-day systems would be performed by powerful application program commands or scripting languages. Exchange of useful MBASIC programs was a common function of computer users' groups. Keying in long BASIC listings from a magazine article was one way of "bootstrapping" software into a new CP/M system. At least one compiler for a high-level language was written in MBASIC, and many small games and utility programs ranging from a few lines to a few thousand lines of code were written. Other uses: MBASIC is also the name of a commercial BASIC compiler for the Microchip Technology PIC microcontroller family developed by Basic Micro, Inc., unrelated to the CP/M interpreter.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ANKRD35** ANKRD35: Ankyrin repeat domain 35 also known as ANKRD35 is a protein which in humans is encoded by the ANKRD35 gene. Related gene problems: TAR syndrome 1q21.1 deletion syndrome 1q21.1 duplication syndrome
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ArcGIS Pro** ArcGIS Pro: ArcGIS Pro is desktop GIS software developed by Esri that replaces the ArcMap generation of software. The product was announced as part of Esri's ArcGIS 10.3 release. ArcGIS Pro is notable for its 64-bit architecture, combined 2-D and 3-D support, ArcGIS Online integration, and Python 3 support. A major version update occurred with the release of ArcGIS Pro 3.0 in June 2022. Several major changes include: the dropping of support for geocoders created with ArcMap 10.x and with ArcGIS Pro versions 2.9.x and earlier; project files created or modified with ArcGIS Pro 3.0 are not readable by versions 2.9.x and earlier; geodatabases created in 3.0 may not be fully compatible with prior versions; and, perhaps most significantly, Parcel Fabric datasets created in prior versions must be upgraded to be fully compatible with version 3.0.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**4-Chlorobenzonitrile** 4-Chlorobenzonitrile: 4-Chlorobenzonitrile is an organic compound with the formula ClC6H4CN. It is a white solid. The compound, one of three isomers of chlorobenzonitrile, is produced industrially by ammoxidation of 4-chlorotoluene. The compound is of commercial interest as a precursor to pigments.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pair programming** Pair programming: Pair programming is a software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently. While reviewing, the observer also considers the "strategic" direction of the work, coming up with ideas for improvements and likely future problems to address. This is intended to free the driver to focus all of their attention on the "tactical" aspects of completing the current task, using the observer as a safety net and guide. Economics: Pair programming increases the person-hours required to deliver code compared to programmers working individually. However, the resulting code has fewer defects. Along with code development time, other factors like field support costs and quality assurance also figure into the return on investment. Pair programming might theoretically offset these expenses by reducing defects in the programs.In addition to preventing mistakes as they are made, other intangible benefits may exist. For example, the courtesy of rejecting phone calls or other distractions while working together, taking fewer breaks at agreed-upon intervals, or shared breaks to return phone calls (but returning to work quickly since someone is waiting). One member of the team might have more focus and help drive or awaken the other if they lose focus, and that role might periodically change. One member might have knowledge of a topic or technique that the other does not, which might eliminate delays to find or testing a solution, or allow for a better solution, thus effectively expanding the skill set, knowledge, and experience of a programmer as compared to working alone. Each of these intangible benefits, and many more, may be challenging to accurately measure but can contribute to more efficient working hours. Design quality: A system with two programmers possesses greater potential for the generation of more diverse solutions to problems for three reasons: the programmers bring different prior experiences to the task; they may assess information relevant to the task in different ways; they stand in different relationships to the problem by virtue of their functional roles.In an attempt to share goals and plans, the programmers must overtly negotiate a shared course of action when a conflict arises between them. In doing so, they consider a larger number of ways of solving the problem than a single programmer alone might do. This significantly improves the design quality of the program as it reduces the chances of selecting a poor method. Satisfaction: In an online survey of pair programmers from 2000, 96% of programmers stated that they enjoyed work more while pair programming than programming alone. Furthermore, 95% said that they were more confident in their work when they pair programmed. However, as the survey was among self-selected pair programmers, it did not account for programmers who were forced to pair program. Learning: Knowledge is constantly shared between pair programmers, whether in the industry or in a classroom. Many sources suggest that students show higher confidence when programming in pairs, and many learn whether it be from tips on programming language rules to overall design skills. 
In "promiscuous pairing", each programmer communicates and works with all the other programmers on the team rather than pairing only with one partner, which causes knowledge of the system to spread throughout the whole team. Pair programming allows programmers to examine their partner's code and provide feedback, which is necessary to increase their own ability to develop monitoring mechanisms for their own learning activities. Team-building and communication: Pair programming allows team members to share quickly, making them less likely to have agendas hidden from each other. This helps pair programmers learn to communicate more easily. "This raises the communication bandwidth and frequency within the project, increasing overall information flow within the team." Studies: There are both empirical studies and meta-analyses of pair programming. The empirical studies tend to examine the level of productivity and the quality of the code, while meta-analyses may focus on biases introduced by the process of testing and publishing. Studies: A meta-analysis found pairs typically consider more design alternatives than programmers working alone, arrive at simpler, more maintainable designs, and catch design defects earlier. However, it raised concerns that its findings may have been influenced by "signs of publication bias among published studies on pair programming". It concluded that "pair programming is not uniformly beneficial or effective".Although pair programmers may complete a task faster than a solo programmer, the total number of person-hours increases. A manager would have to balance faster completion of the work and reduced testing and debugging time against the higher cost of coding. The relative weight of these factors can vary by project and task. Studies: The benefit of pairing is greatest on tasks that the programmers do not fully understand before they begin: that is, challenging tasks that call for creativity and sophistication, and for novices as compared to experts. Pair programming could be helpful for attaining high quality and correctness on complex programming tasks, but it would also increase the development effort (cost) significantly.On simple tasks, which the pair already fully understands, pairing results in a net drop in productivity. It may reduce the code development time but also risks reducing the quality of the program. Productivity can also drop when novice–novice pairing is used without sufficient availability of a mentor to coach them.A study of programmers using AI assistance tools such as GitHub Copilot found that while some programmers conceived of AI assistance as similar to pair programming, in practice the use of such tools is very different in terms of the programmer experience, with the human programmer having to transition repeatedly between driver and navigator roles. Indicators of non-performance: There are indicators that a pair is not performing well: Disengagement may present as one of the members physically withdraws away from the keyboard, accesses email, or even falls asleep. The "Watch the Master" phenomenon can arise if one member is more experienced than the other. In this situation, the junior member may take the observer role, deferring to the senior member of the pair for the majority of coding activity. This can easily lead to disengagement. 
Pairing variations: Expert–expert Expert–expert pairing may seem to be the obvious choice for the highest productivity and can produce great results, but it often yields little insight into new ways to solve problems, as both parties are unlikely to question established practices. Pairing variations: Expert–novice Expert–novice pairing creates many opportunities for the expert to mentor the novice. This pairing can also introduce new ideas, as the novice is more likely to question established practices. The expert, now required to explain established practices, is also more likely to question them. However, in this pairing, an intimidated novice may passively "watch the master" and hesitate to participate meaningfully. Also, some experts may not have the patience needed to allow constructive novice participation. Pairing variations: Novice–novice Novice–novice pairing can produce results significantly better than two novices working independently, although this practice is generally discouraged because it is harder for novices to develop good habits without a proper role model. Remote pair programming: Remote pair programming, also known as virtual pair programming or distributed pair programming, is pair programming in which the two programmers are in different locations, working via a collaborative real-time editor, shared desktop, or a remote pair programming IDE plugin. Remote pairing introduces difficulties not present in face-to-face pairing, such as extra delays for coordination, greater dependence on "heavyweight" task-tracking tools instead of "lightweight" ones like index cards, and loss of verbal communication, resulting in confusion and conflicts over such things as who "has the keyboard". Tool support could be provided by: whole-screen sharing software; terminal multiplexers; specialized distributed editing tools; audio chat programs or VoIP software, which can be helpful when the screen-sharing software does not provide two-way audio capability (use of headsets keeps the programmers' hands free); cloud development environments; and collaborative pair programming services.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wigner D-matrix** Wigner D-matrix: The Wigner D-matrix is a unitary matrix in an irreducible representation of the groups SU(2) and SO(3). It was introduced in 1927 by Eugene Wigner, and plays a fundamental role in the quantum mechanical theory of angular momentum. The complex conjugate of the D-matrix is an eigenfunction of the Hamiltonian of spherical and symmetric rigid rotors. The letter D stands for Darstellung, which means "representation" in German. Definition of the Wigner D-matrix: Let Jx, Jy, Jz be generators of the Lie algebra of SU(2) and SO(3). In quantum mechanics, these three operators are the components of a vector operator known as angular momentum. Examples are the angular momentum of an electron in an atom, electronic spin, and the angular momentum of a rigid rotor. In all cases, the three operators satisfy the following commutation relations, [Jx,Jy]=iJz,[Jz,Jx]=iJy,[Jy,Jz]=iJx, where i is the purely imaginary number and Planck's constant ħ has been set equal to one. The Casimir operator J2=Jx2+Jy2+Jz2 commutes with all generators of the Lie algebra. Hence, it may be diagonalized together with Jz. Definition of the Wigner D-matrix: This defines the spherical basis used here. That is, there is a complete set of kets (i.e. orthonormal basis of joint eigenvectors labelled by quantum numbers that define the eigenvalues) with J2|jm⟩=j(j+1)|jm⟩,Jz|jm⟩=m|jm⟩, where j = 0, 1/2, 1, 3/2, 2, ... for SU(2), and j = 0, 1, 2, ... for SO(3). In both cases, m = −j, −j + 1, ..., j. Definition of the Wigner D-matrix: A 3-dimensional rotation operator can be written as R(α,β,γ)=e−iαJze−iβJye−iγJz, where α, β, γ are Euler angles (characterized by the keywords: z-y-z convention, right-handed frame, right-hand screw rule, active interpretation). The Wigner D-matrix is a unitary square matrix of dimension 2j + 1 in this spherical basis with elements Dm′mj(α,β,γ)≡⟨jm′|R(α,β,γ)|jm⟩=e−im′αdm′mj(β)e−imγ, where dm′mj(β)=⟨jm′|e−iβJy|jm⟩=Dm′mj(0,β,0) is an element of the orthogonal Wigner's (small) d-matrix. That is, in this basis, Dm′mj(α,0,0)=e−im′αδm′m is diagonal, like the γ matrix factor, but unlike the above β factor. Wigner (small) d-matrix: Wigner gave the following expression: cos sin ⁡β2)m′−m+2s(j+m−s)!s!(m′−m+s)!(j−m′−s)!]. Wigner (small) d-matrix: The sum over s is over such values that the factorials are nonnegative, i.e. smin=max(0,m−m′) , smax=min(j+m,j−m′) Note: The d-matrix elements defined here are real. In the often-used z-x-z convention of Euler angles, the factor (−1)m′−m+s in this formula is replaced by (−1)sim−m′, causing half of the functions to be purely imaginary. The realness of the d-matrix elements is one of the reasons that the z-y-z convention, used in this article, is usually preferred in quantum mechanical applications. Wigner (small) d-matrix: The d-matrix elements are related to Jacobi polynomials cos ⁡β) with nonnegative a and b. Let min (j+m,j−m,j+m′,j−m′). If k={j+m:a=m′−m;λ=m′−mj−m:a=m−m′;λ=0j+m′:a=m−m′;λ=0j−m′:a=m′−m;λ=m′−m Then, with b=2j−2k−a, the relation is sin cos cos ⁡β), where 0. Properties of the Wigner D-matrix: The complex conjugate of the D-matrix satisfies a number of differential properties that can be formulated concisely by introducing the following operators with (x,y,z)=(1,2,3), cos cot sin cos sin sin cot cos sin sin ⁡β∂∂γ)J^3=−i∂∂α which have quantum mechanical meaning: they are space-fixed rigid rotor angular momentum operators. 
Further, cos sin sin cot cos sin sin cos cot sin ⁡γ∂∂γ)P^3=−i∂∂γ, which have quantum mechanical meaning: they are body-fixed rigid rotor angular momentum operators. The operators satisfy the commutation relations and [P1,P2]=−iP3, and the corresponding relations with the indices permuted cyclically. The Pi satisfy anomalous commutation relations (have a minus sign on the right hand side). The two sets mutually commute, [Pi,Jj]=0,i,j=1,2,3, and the total operators squared are equal, J2≡J12+J22+J32=P2≡P12+P22+P32. Their explicit form is, sin cos cot ⁡β∂∂β. The operators Ji act on the first (row) index of the D-matrix, J3Dm′mj(α,β,γ)∗=m′Dm′mj(α,β,γ)∗(J1±iJ2)Dm′mj(α,β,γ)∗=j(j+1)−m′(m′±1)Dm′±1,mj(α,β,γ)∗ The operators Pi act on the second (column) index of the D-matrix, P3Dm′mj(α,β,γ)∗=mDm′mj(α,β,γ)∗, and, because of the anomalous commutation relation the raising/lowering operators are defined with reversed signs, (P1∓iP2)Dm′mj(α,β,γ)∗=j(j+1)−m(m±1)Dm′,m±1j(α,β,γ)∗. Finally, J2Dm′mj(α,β,γ)∗=P2Dm′mj(α,β,γ)∗=j(j+1)Dm′mj(α,β,γ)∗. In other words, the rows and columns of the (complex conjugate) Wigner D-matrix span irreducible representations of the isomorphic Lie algebras generated by {Ji} and {−Pi} An important property of the Wigner D-matrix follows from the commutation of R(α,β,γ) with the time reversal operator T, ⟨jm′|R(α,β,γ)|jm⟩=⟨jm′|T†R(α,β,γ)T|jm⟩=(−1)m′−m⟨j,−m′|R(α,β,γ)|j,−m⟩∗, or Dm′mj(α,β,γ)=(−1)m′−mD−m′,−mj(α,β,γ)∗. Here, we used that T is anti-unitary (hence the complex conjugation after moving T† from ket to bra), T|jm⟩=(−1)j−m|j,−m⟩ and (−1)2j−m′−m=(−1)m′−m A further symmetry implies (−1)m′−mDmm′j(α,β,γ)=Dm′mj(γ,β,α). Orthogonality relations: The Wigner D-matrix elements Dmkj(α,β,γ) form a set of orthogonal functions of the Euler angles α,β, and γ sin ⁡β∫02πdγDm′k′j′(α,β,γ)∗Dmkj(α,β,γ)=8π22j+1δm′mδk′kδj′j. This is a special case of the Schur orthogonality relations. Crucially, by the Peter–Weyl theorem, they further form a complete set. The fact that Dmkj(α,β,γ) are matrix elements of a unitary transformation from one spherical basis |lm⟩ to another R(α,β,γ)|lm⟩ is represented by the relations: ∑kDm′kj(α,β,γ)∗Dmkj(α,β,γ)=δm,m′, ∑kDkm′j(α,β,γ)∗Dkmj(α,β,γ)=δm,m′. The group characters for SU(2) only depend on the rotation angle β, being class functions, so, then, independent of the axes of rotation, sin sin ⁡(β2), and consequently satisfy simpler orthogonality relations, through the Haar measure of the group, sin 2⁡(β2)χj(β)χj′(β)=δj′j. The completeness relation (worked out in the same reference, (3.95)) is ∑jχj(β)χj(β′)=δ(β−β′), whence, for β′=0, ∑jχj(β)(2j+1)=δ(β). Kronecker product of Wigner D-matrices, Clebsch-Gordan series: The set of Kronecker product matrices Dj(α,β,γ)⊗Dj′(α,β,γ) forms a reducible matrix representation of the groups SO(3) and SU(2). Reduction into irreducible components is by the following equation: Dmkj(α,β,γ)Dm′k′j′(α,β,γ)=∑J=|j−j′|j+j′⟨jmj′m′|J(m+m′)⟩⟨jkj′k′|J(k+k′)⟩D(m+m′)(k+k′)J(α,β,γ) The symbol ⟨j1m1j2m2|j3m3⟩ is a Clebsch–Gordan coefficient. Relation to spherical harmonics and Legendre polynomials: For integer values of l , the D-matrix elements with second index equal to zero are proportional to spherical harmonics and associated Legendre polynomials, normalized to unity and with Condon and Shortley phase convention: cos ⁡β)e−imα. This implies the following relationship for the d-matrix: cos ⁡β). A rotation of spherical harmonics ⟨θ,ϕ|ℓm′⟩ then is effectively a composition of two rotations, ∑m′=−ℓℓYℓm′(θ,ϕ)Dm′mℓ(α,β,γ). 
When both indices are set to zero, the Wigner D-matrix elements are given by ordinary Legendre polynomials: d^ℓ_{00}(β) = P_ℓ(cos β). In the present convention of Euler angles, α is a longitudinal angle and β is a colatitudinal angle (spherical polar angles in the physical definition of such angles). This is one of the reasons that the z-y-z convention is used frequently in molecular physics. From the time-reversal property of the Wigner D-matrix follows immediately (Y_ℓ^m)* = (−1)^m Y_ℓ^{−m}. There exists a more general relationship to the spin-weighted spherical harmonics: D^ℓ_{ms}(α, β, −γ) = (−1)^s √(4π/(2ℓ+1)) _sY_{ℓm}(β, α) e^{isγ}. Connection with transition probability under rotations: The absolute square of an element of the D-matrix, F_{mm′}(β) = |D^j_{mm′}(α, β, γ)|², gives the probability that a system with spin j prepared in a state with spin projection m along some direction will be measured to have a spin projection m′ along a second direction at an angle β to the first direction. The set of quantities F_{mm′} itself forms a real symmetric matrix that depends only on the Euler angle β, as indicated. Connection with transition probability under rotations: Remarkably, the eigenvalue problem for the F matrix can be solved completely: ∑_{m′=−j}^{j} F_{mm′}(β) f^j_ℓ(m′) = P_ℓ(cos β) f^j_ℓ(m) (ℓ = 0, 1, …, 2j). Here, the eigenvector f^j_ℓ(m) is a scaled and shifted discrete Chebyshev polynomial, and the corresponding eigenvalue, P_ℓ(cos β), is the Legendre polynomial. Relation to Bessel functions: In the limit when ℓ ≫ m, m′ we have D^ℓ_{mm′}(α, β, γ) ≈ e^{−imα − im′γ} J_{m−m′}(ℓβ), where J_{m−m′}(ℓβ) is the Bessel function and ℓβ is finite. List of d-matrix elements: Using the sign convention of Wigner et al., the d-matrix elements d^j_{m′m}(θ) for j = 1/2, 1, 3/2, and 2 are given below.
for j = 1/2:
d^{1/2}_{1/2,1/2} = cos(θ/2)
d^{1/2}_{1/2,−1/2} = −sin(θ/2)
for j = 1:
d^1_{1,1} = (1 + cos θ)/2
d^1_{1,0} = −sin θ/√2
d^1_{1,−1} = (1 − cos θ)/2
d^1_{0,0} = cos θ
for j = 3/2:
d^{3/2}_{3/2,3/2} = ((1 + cos θ)/2) cos(θ/2)
d^{3/2}_{3/2,1/2} = −√3 ((1 + cos θ)/2) sin(θ/2)
d^{3/2}_{3/2,−1/2} = √3 ((1 − cos θ)/2) cos(θ/2)
d^{3/2}_{3/2,−3/2} = −((1 − cos θ)/2) sin(θ/2)
d^{3/2}_{1/2,1/2} = ((3 cos θ − 1)/2) cos(θ/2)
d^{3/2}_{1/2,−1/2} = −((3 cos θ + 1)/2) sin(θ/2)
for j = 2:
d^2_{2,2} = ((1 + cos θ)/2)²
d^2_{2,1} = −((1 + cos θ)/2) sin θ
d^2_{2,0} = √(3/8) sin²θ
d^2_{2,−1} = −((1 − cos θ)/2) sin θ
d^2_{2,−2} = ((1 − cos θ)/2)²
d^2_{1,1} = ((1 + cos θ)/2)(2 cos θ − 1)
d^2_{1,0} = −√(3/2) sin θ cos θ
d^2_{1,−1} = ((1 − cos θ)/2)(2 cos θ + 1)
d^2_{0,0} = (3 cos²θ − 1)/2
Wigner d-matrix elements with swapped lower indices are found with the relation: d^j_{m′,m} = (−1)^{m−m′} d^j_{m,m′} = d^j_{−m,−m′}.
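The tabulated elements can be spot-checked numerically. The following minimal Python sketch (added here as an illustration; it is not part of the original article, and the helper name and test angle are arbitrary) evaluates Wigner's sum formula for d^j_{m′m}(β) in the z-y-z convention and compares a few values against the j = 1/2 and j = 1 entries listed above:

```python
import math

def wigner_small_d(j, mp, m, beta):
    """Wigner's (small) d-matrix element d^j_{m'm}(beta), z-y-z convention.

    Implements the standard sum formula; the sum runs over all integers s
    for which every factorial argument is non-negative.
    """
    # Prefactor sqrt((j+m')! (j-m')! (j+m)! (j-m)!)
    pref = math.sqrt(
        math.factorial(round(j + mp)) * math.factorial(round(j - mp)) *
        math.factorial(round(j + m)) * math.factorial(round(j - m))
    )
    s_min = max(0, round(m - mp))
    s_max = min(round(j + m), round(j - mp))
    total = 0.0
    for s in range(s_min, s_max + 1):
        num = (-1) ** (round(mp - m) + s)
        den = (math.factorial(round(j + m) - s) * math.factorial(s) *
               math.factorial(round(mp - m) + s) * math.factorial(round(j - mp) - s))
        total += (num / den) * (math.cos(beta / 2) ** (round(2 * j + m - mp) - 2 * s) *
                                math.sin(beta / 2) ** (round(mp - m) + 2 * s))
    return pref * total

beta = 0.7  # an arbitrary test angle in radians
# j = 1/2 entries from the table above
assert math.isclose(wigner_small_d(0.5, 0.5, 0.5, beta), math.cos(beta / 2))
assert math.isclose(wigner_small_d(0.5, 0.5, -0.5, beta), -math.sin(beta / 2))
# j = 1 entries from the table above
assert math.isclose(wigner_small_d(1, 0, 0, beta), math.cos(beta))
assert math.isclose(wigner_small_d(1, 1, 0, beta), -math.sin(beta) / math.sqrt(2))
print("table entries reproduced")
```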
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Da li ste pametniji od đaka petaka?** Da li ste pametniji od đaka petaka?: Da li ste pametniji od đaka petaka? was a Serbian game show broadcast by Fox televizija. It is a licensed version of the global Are You Smarter Than a 5th Grader? franchise. Da li ste pametniji od đaka petaka?: The show was aired weekly and it lasted only for one season. The show was hosted by Voja Nedeljković. The top prize was RSD 5,555,555 (around €70,000). Though broadcast on a network seen only in Serbia, and produced in Serbian, the show was actually taped in Sofia, Bulgaria in the same studio as the show's Bulgarian version Това го знае всяко хлапе! that aired on bTV.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Central Weather Bureau seismic intensity scale** Central Weather Bureau seismic intensity scale: The Central Weather Bureau seismic intensity scale (Chinese: 交通部中央氣象局地震震度分級) is a seismic intensity scale used in Taiwan. It was established by the Central Weather Bureau.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Imaginary Thirteen** Imaginary Thirteen: Imaginary Thirteen is a solitaire card game which is played with two decks of playing cards. Its gameplay makes it a two-deck version of Calculation, and its name is taken from the fact that when a sum goes over thirteen, an "imaginary" thirteen is subtracted to get the value of the next card, with spot cards worth their face value, jacks eleven, queens twelve, and kings thirteen. Rules: To set up the tableau, an ace, a deuce (two), a trey (three), a four, a five, a six, a seven, and an eight are removed and placed in a row. These cards are markers that remind the player of the number to be added to the top card of the foundations placed under them. Rules: Then eight cards are placed under the marker cards, each double the value of the card above it. Therefore, a deuce is placed below the ace, a four below the deuce, a six below the three, an eight below the four, a ten below the five, and a queen below the six. Since 7 + 7 = 14 and 14 − 13 = 1, an ace is placed below the seven. Likewise, 8 + 8 = 16 and 16 − 13 = 3, which puts a trey under the eight. Rules: These eight new cards are the bases for the eight foundations, each of which is built up, regardless of suit, to kings in intervals indicated by the marker cards, i.e. the foundation under the ace is built up by ones, the foundation under the deuce by twos, the foundation under the trey by threes, and so on. Whenever the total goes over thirteen, thirteen is subtracted from it to get the value of the next card. Rules: Gameplay consists of taking a card from the stock. If it can be played, it is placed on the appropriate foundation. Otherwise, it is placed in one of four waste piles. The top card of each waste pile is available for play. At any time after a card is placed on the foundations, the player checks the top cards of the waste piles to see if any more plays are possible. This process is repeated until the stock is depleted. Rules: The game is won when all cards have been played to the foundations after the stock runs out. The game is lost, however, if there are no more plays after the stock runs out.
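Because each foundation simply adds a fixed step and subtracts an imaginary thirteen whenever the total passes thirteen, the build sequences can be generated mechanically. The following minimal Python sketch (an illustration added here, not part of the original rules; the card naming and function names are my own) prints the sequence for each of the eight foundations:

```python
# Card values: ace = 1 ... ten = 10, jack = 11, queen = 12, king = 13.
NAMES = {1: "A", 11: "J", 12: "Q", 13: "K"}

def card_name(value):
    return NAMES.get(value, str(value))

def foundation_sequence(step):
    """Sequence built on the foundation whose marker card is `step`.

    The base card is double the step; each later card adds `step`,
    subtracting an "imaginary thirteen" whenever the total exceeds 13.
    Every sequence ends on a king (13).
    """
    value = 2 * step if 2 * step <= 13 else 2 * step - 13
    sequence = [value]
    while value != 13:
        value += step
        if value > 13:
            value -= 13
        sequence.append(value)
    return sequence

for step in range(1, 9):  # marker cards ace through eight
    print(step, "->", " ".join(card_name(v) for v in foundation_sequence(step)))
```

Running it reproduces the bases described above: an ace under the seven and a trey under the eight, with every sequence ending on a king.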
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**WTX (form factor)** WTX (form factor): WTX (for Workstation Technology Extended) was a motherboard form factor specification introduced by Intel at the IDF in September 1998, for its use at high-end, multiprocessor, multiple-hard-disk servers and workstations. The specification had support from major OEMs (Compaq, Dell, Fujitsu, Gateway, Hewlett-Packard, IBM, Intergraph, NEC, Siemens Nixdorf, and UMAX) and motherboard manufacturers (Acer, Asus, Supermicro and Tyan) and was updated (1.1) in February 1999. As of 2008, the specification has been discontinued and the URL www.wtx.org no longer hosts a website and has not been owned by Intel since at least 2004. WTX (form factor): This form factor was geared specifically towards the needs of high-end systems, and included specifications for a WTX power supply unit (PSU) using two WTX-specific 24-pin and 22-pin Molex connectors. The WTX specification was created to standardize a new motherboard and chassis form factor, fix the relative processor location, and allow for high volume airflow through a portion of the chassis where the processors are positioned. This allowed for standard form factor motherboards and chassis to be used to integrate processors with more demanding thermal management requirements. Bigger than ATX, maximum WTX motherboard size was 14 × 16.75 in (356 × 425 mm). This was intended to provide more room in order to accommodate higher numbers of integrated components. WTX computer cases were backwards compatible with ATX motherboards (but not vice versa), and sometimes came equipped with ATX power supplies.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perseverance (solitaire)** Perseverance (solitaire): Perseverance is a solitaire card game played with a deck of 52 playing cards. The reason for the name is not known, but it likely originates in the fact that perseverance is necessary to succeed. Rules: First, the four aces are taken out of the deck. These form the four foundations. Then the rest are shuffled and dealt into twelve piles of four cards each. One can distribute one card at a time to each pile or deal four cards at a time to form a pile. The top cards of each pile are available for play to the foundations or on the tableau piles. The foundations are built up by suit, while the cards on the tableau are built down, also by suit. One card can be moved at a time. However, the player is allowed to move a sequence of cards as a unit to another pile with an appropriate card (e.g. 6-5-4-3♠ can be placed on the 7♠). Rules: When all possible moves are made (or the player has made all the moves one wishes to make), the piles are picked up in reverse order. For example, the twelfth pile is placed over the eleventh pile, and this new pile is placed on the tenth pile, and so on. Then, without shuffling, the cards are dealt to as many piles of four as the remaining cards will allow. To ensure that the order of the cards is not disturbed for the most part, it is suggested that the cards be dealt four at a time. This can be done only twice. Rules: The game is won when all cards are built onto the foundations up to kings. Variations: Cruel is a popular solitaire game based on Perseverance. Perseverance is also closely related to Bisley. Other sources: Coops, Helen L. 100 Games of Solitaire; Bonaventure, George A. Two-Pack Games of Solitaire; Dick, William Brisbane. Dick's Games of Patience; Moyse Jr., Alphonse. 150 Ways to Play Solitaire
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jitterlyzer** Jitterlyzer: The FS5000 Jitterlyzer performs physical-layer serial bus jitter evaluation. It can inject controlled jitter and measure the characteristics of incoming jitter. When teamed with a logic analyzer or protocol analyzer, it can correlate these measurements with protocol analysis. Physical-layer tests can be performed while the system under test is processing live bus traffic. Jitter measurements: The FS5000 measures jitter in two categories: Timing There are four different timing measurements: Bathtub Plot - The bathtub curve can provide considerable insight into the BER performance of a link under test. A bathtub curve is obtained by drawing a horizontal line across the waveform under test. The probability distribution function for signal transitions (zero crossings) from a high voltage to a low voltage or a low voltage to a high voltage is then computed. The bathtub curve is useful because, apart from estimating BER, it also provides an indication of the amount of margin in the system. When coupled with protocol testing on the Jitterlyzer, a high margin enables engineers to quickly rule out the physical layer as a potential cause of certain protocol errors. Jitter measurements: Statistics - The Jitterlyzer's measurement routines uncover total jitter and BER directly (without requiring mathematical extrapolation). A routine for random jitter (RJ) and deterministic jitter (DJ) separation is included for completeness, and RJ and DJ figures are provided for real-life traffic. Bus View - Shows channel-to-channel skew of four channels simultaneously. Jitter Histogram - This routine is performed on real-life traffic. It selects the zero-crossing voltage in the incoming data and counts the number of transitions of the high-speed serial signal as a function of phase position. A histogram of the number of hits versus delay is then plotted. Jitter measurements: Eye There are three different eye measurements: Eye diagram - This routine is performed on real-life traffic. It provides much more information than just a vertical or horizontal eye opening, giving a first indication of parameters such as dispersion in the signal path and rise-time issues. Noise is indicated throughout the whole eye, as opposed to, for example, only at the center. Jitter measurements: Oscilloscope - This measurement allows the user to see an oscilloscope trace in persistent mode, allowing direct inspection of the eye. Color represents the frequency with which the measured trace passes through each point, with red representing the most frequent and black representing the least. Note that frequency is normalized to a maximum value of 1. Voltage Histogram - This measurement is similar to the jitter histogram except that it is done in the vertical domain. It shows the amount of voltage noise that exists on the high-speed serial signal and is similar in principle to the vertical eye height. Jitter generation:
Data pattern: LIVE traffic; preprogrammed compliance patterns
Jitter profile: frequency range 19.07 kHz – 19.99 MHz; amplitude range 40 ps – 1200 ps
Differential swing: 400–1600 mV
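As a rough illustration of what the jitter-histogram routine described above computes, the sketch below builds a toy data set and bins the deviations of measured zero crossings from their ideal positions. It is my own Python example, not FS5000 software, and the signal parameters (1 ns unit interval, 5 ps RMS jitter) are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: edges of a 1 Gb/s serial signal displaced by random jitter,
# standing in for "real-life traffic" measured at the zero-crossing voltage.
unit_interval = 1e-9                     # 1 ns bit period (assumed)
n_edges = 10_000
ideal_edges = np.arange(n_edges) * unit_interval
measured_edges = ideal_edges + rng.normal(0.0, 5e-12, n_edges)

# Time-interval error: deviation of each crossing from its ideal position.
tie = measured_edges - ideal_edges

# Histogram of "hits versus delay", analogous to the jitter histogram.
counts, bin_edges = np.histogram(tie, bins=50)
print("histogram bins:", len(counts), "peak count:", counts.max())
print("RMS jitter ~ %.2f ps" % (tie.std() * 1e12))
print("peak-to-peak jitter ~ %.2f ps" % ((tie.max() - tie.min()) * 1e12))
```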
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dressing overall** Dressing overall: Dressing overall consists of stringing international maritime signal flags on a ship from stemhead to masthead, from masthead to masthead (if the vessel has more than one mast) and then down to the taffrail. It is a sign of celebration, and is done for celebratory occasions, anniversaries and events, whether national, local or personal. Dressing overall: Practice varies from country to country as to the order in which the signal flags are placed on the "dressing lines": in some places a specific order is laid down, in others there is no such provision; either way, the intention is to produce a random succession of flags (i.e. not conveying any words or other messages), with the numerical and other pennants spaced equally and regularly along the line. Custom and regulations require that national or other flags not be mixed in with the signal flags when dressing a ship overall. Dressing overall: When a ship is properly dressed overall in harbor, ensigns (in addition to the one flown in the usual position at the stern) should fly at each masthead, unless displaced by another flag, e.g., that of a flag officer. A ship underway does not array herself with signal flags, but the masthead ensign(s) would still signify that she is dressed while underway.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Germanium dichloride dioxane** Germanium dichloride dioxane: Germanium dichloride dioxane is a chemical compound with the formula GeCl2(C4H8O2), where C4H8O2 is 1,4-dioxane. It is a white solid. The compound is notable as a source of Ge(II), which contrasts with the pervasiveness of Ge(IV) compounds. This dioxane complex represents a well-behaved form of germanium dichloride. Synthesis and structure: It is prepared by reduction of a dioxane solution of germanium tetrachloride with tributyltin hydride: GeCl4 + 2 Bu3SnH + C4H8O2 → GeCl2(O2C4H8) + 2 Bu3SnCl + H2. Hydrosilanes have also been used as reductants. The complex has a polymeric structure. Germanium adopts an SF4-like shape with cis Cl ligands (Cl-Ge-Cl angle = 94.4°) and axial positions occupied by oxygen provided by a bridging dioxane. The Ge-O and Ge-Cl distances are 2.40 and 2.277 Å, respectively. Reactions: The complex is used in the preparation of organogermanium compounds. In organic synthesis, the complex is used as a Lewis acid with reducing properties.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Off-axis optical system** Off-axis optical system: An off-axis optical system is an optical system in which the optical axis of the aperture is not coincident with the mechanical center of the aperture. The principal applications of off-axis optical systems are to avoid obstruction of the primary aperture by secondary optical elements, instrument packages, or sensors, and to provide ready access to instrument packages or sensors at the focus. The engineering tradeoff of an off-axis optical system is an increase in image aberrations. Off-axis optical system: There are various theoretical models for aberration in off-axis optical systems. These involve a range of techniques, including different types of ray-tracing equations, and a common goal is optimizing the design. An example of an off-axis optical system is a three-mirror design used as the optics for a hyperspectral imager.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Prenatal perception** Prenatal perception: Prenatal perception is the study of the extent of somatosensory and other types of perception during pregnancy. In practical terms, this means the study of fetuses; none of the accepted indicators of perception are present in embryos. Studies in the field inform the abortion debate, along with certain related pieces of legislation in countries affected by that debate. As of 2022, there is no scientific consensus on whether a fetus can feel pain. Prenatal hearing: Numerous studies have found evidence indicating a fetus's ability to respond to auditory stimuli. The earliest fetal response to a sound stimulus has been observed at 16 weeks' gestational age, while the auditory system is fully functional at 25–29 weeks' gestation. At 33–41 weeks' gestation, the fetus is able to distinguish its mother's voice from others. Prenatal pain: The hypothesis that human fetuses are capable of perceiving pain in the first trimester has little support, although fetuses at 14 weeks may respond to touch. A multidisciplinary systematic review from 2005 found limited evidence that thalamocortical pathways begin to function "around 29 to 30 weeks' gestational age", only after which a fetus is capable of feeling pain. In March 2010, the Royal College of Obstetricians and Gynaecologists submitted a report concluding that "Current research shows that the sensory structures are not developed or specialized enough to respond to pain in a fetus of less than 24 weeks". The neural regions and pathways that are responsible for pain experience remain under debate, but it is generally accepted that pain from physical trauma requires an intact pathway from the periphery, through the spinal cord, into the thalamus and on to regions of the cerebral cortex including the primary sensory cortex (S1), the insular cortex and the anterior cingulate cortex. Fetal pain is not possible before these necessary neural pathways and structures have developed. The report specifically identified the anterior cingulate as the area of the cerebral cortex responsible for pain processing. The anterior cingulate is part of the cerebral cortex, which begins to develop in the fetus at week 26. A co-author of that report revisited the evidence in 2020, specifically the functionality of the thalamic projections into the cortical subplate, and posited "an immediate and unreflective pain experience...from as early as 12 weeks." There is a consensus among developmental neurobiologists that the establishment of thalamocortical connections (at weeks 22–34, reliably at 29) is a critical event with regard to fetal perception of pain, as they allow peripheral sensory information to arrive at the cortex. Electroencephalography indicates that the capacity for functional pain perception in premature infants does not exist before 29 or 30 weeks; a 2005 meta-analysis states that withdrawal reflexes and changes in heart rates and hormone levels in response to invasive procedures are reflexes that do not indicate fetal pain. Several lines of evidence suggest that a fetus does not awaken during its time in the womb. Much of the literature on fetal pain simply extrapolates from findings and research on premature babies. The presence of such chemicals as adenosine, pregnanolone, and prostaglandin D2 in both human and animal fetuses indicates that the fetus is both sedated and anesthetized when in the womb. 
These chemicals are oxidized with the newborn's first few breaths and washed out of the tissues, increasing consciousness. If the fetus is asleep throughout gestation, then the possibility of fetal pain is greatly minimized, although some studies found that the adenosine levels in third-trimester fetuses are only slightly higher than those in adults' blood. Prenatal pain: Fetal anesthesia Direct fetal analgesia is used in only a minority of prenatal surgeries. Some caution that unnecessary use of fetal anesthetic may pose potential health risks to the mother. "In the context of abortion, fetal analgesia would be used solely for beneficence toward the fetus, assuming fetal pain exists. This interest must be considered in concert with maternal safety and fetal effectiveness of any proposed anesthetic or analgesic technique. For instance, general anesthesia increases abortion morbidity and mortality for women and substantially increases the cost of abortion. Although placental transfer of many opioids and sedative-hypnotics has been determined, the maternal dose required for fetal analgesia is unknown, as is the safety for women at such doses. Given the maternal risk involved and the lack of evidence of any potential benefit to the fetus, administering fetal anesthesia for abortion is not recommended." Fetal pain legislation may make abortions harder to obtain, because abortion clinics lack the equipment and expertise to supply fetal anesthesia. Currently, anesthesia is administered directly to fetuses only while they are undergoing surgery. Doctors for a Woman's Choice on Abortion pointed out that the majority of surgical abortions in Britain are already performed under general anesthesia, which also affects the fetus. In a letter to the British Medical Journal in April 1997, they deemed the discussion "unhelpful to women and to the scientific debate", despite a report in the British Medical Journal raising "the theoretical possibility that the fetus may feel pain (albeit much earlier than most embryologists and physiologists consider likely) with the procedure of legal abortion". Yet if mothers' general anesthesia were enough to anesthetize the fetus, all fetuses would be born sleepy after a cesarean section performed under general anesthesia, which is not the case. Dr. Carlo V. Bellieni also agrees that the anesthesia that women receive for fetal surgery is not sufficient to anesthetize the fetus. United States legislation: Federal legislation In 1985, questions about fetal pain were raised during congressional hearings concerning The Silent Scream. In 2013, during the 113th Congress, Representative Trent Franks introduced a bill called the "Pain-Capable Unborn Child Protection Act" (H.R. 1797). It passed in the House on June 18, 2013, and was received in the U.S. Senate, read twice, and referred to the Judiciary Committee. In 2004, during the 108th Congress, Senator Sam Brownback introduced a bill called the "Unborn Child Pain Awareness Act" for the stated purpose of "ensur[ing] that women seeking an abortion are fully informed regarding the pain experienced by their unborn child", which was read twice and referred to committee. United States legislation: State legislation Subsequently, 25 states have examined similar legislation related to fetal pain and/or fetal anesthesia, and in 2010 Nebraska banned abortions after 20 weeks on the basis of fetal pain. 
Eight states – Arkansas, Georgia, Louisiana, Minnesota, Oklahoma, Alaska, South Dakota, and Texas – have passed laws which introduced information on fetal pain in their state-issued abortion-counseling literature, which one opponent of these laws, the Guttmacher Institute founded by Planned Parenthood, has called "generally irrelevant" and not in line "with the current medical literature". Arthur Caplan, director of the Center for Bioethics at the University of Pennsylvania, said laws such as these "reduce ... the process of informed consent to the reading of a fixed script created and mandated by politicians not doctors."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Methylenecyclopropene** Methylenecyclopropene: 3-Methylenecyclopropene, also called methylenecyclopropene or triafulvene, is a hydrocarbon with chemical formula C4H4. It is a colourless gas that polymerizes readily as a liquid or in solution but is stable as a gas. This highly strained and reactive molecule was synthesized and characterized for the first time in 1984, and has been the subject of considerable experimental and theoretical interest. It is an example of a cross-conjugated alkene, being composed of cyclopropene with an exocyclic double bond attached. Description: Methylenecyclopropene is the smallest of the fulvenes (a family of unstable, cyclic molecules, conjugated transversally with an odd number of carbon atoms in the ring). The structure of methylenecyclopropene has two interacting double bonds, which represents the simplest transversally conjugated π-bonding system. It is fundamentally not an alternant hydrocarbon. The value of its dipole moment (which is around four times that of pentafulvene) can be calculated by the Hückel method (HMO). Its study has involved the use of isotopic isomers. Reactivity: Most fulvenes are typically non-aromatic in nature (based on spectroscopic data), having properties closer to alkenes. In the case of tria- and pentafulvene, the possibility of dipole forms of resonance suggests an aromatic character to the cyclic structure; furthermore, as opposed to pentafulvene, one of the triafulvene resonance structures has a negative charge on the methylidene carbon. Similarly to heptafulvene (fulvene containing a 7-atom cyclic ring), triafulvene polymerizes easily at −20 °C and is stabilized by electron-accepting groups bonded to the methylidene carbon atom.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Deacetylisoipecoside synthase** Deacetylisoipecoside synthase: The enzyme deacetylisoipecoside synthase (EC 4.3.3.3) catalyzes the chemical reaction deacetylisoipecoside + H2O ⇌ dopamine + secologaninThis enzyme belongs to the family of lyases, specifically amine lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is deacetylisoipecoside dopamine-lyase (secologanin-forming). It is also called deacetylisoipecoside dopamine-lyase. It participates in indole and ipecac alkaloid biosynthesis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Peroxydicarbonate** Peroxydicarbonate: In chemistry, peroxydicarbonate (sometimes peroxodicarbonate) is a divalent anion with the chemical formula C2O2−6. It is one of the oxocarbon anions, which consist solely of carbon and oxygen. Its molecular structure can be viewed as two carbonate anions joined so as to form a peroxide bridge –O–O–. Peroxydicarbonate: The anion is formed, together with peroxocarbonate CO2−4, at the negative electrode during electrolysis of molten lithium carbonate. The anion can also be obtained by electrolysis of a saturated solution of rubidium carbonate in water.In addition, the peroxodicarbonate anion can be obtained by electrosynthesis on boron doped diamond (BDD) during water oxidation. The formal oxidation of two carbonate ions takes place at the anode. Due to the high oxidation potential of the peroxodicarbonate anion, a high anodic overpotential is necessary. This is even more important if hydroxyl radicals are involved in the formation process. Recent publications show that a concentration of 282 mmol/L of peroxodicarbonate can be reached in an undivided cell with sodium carbonate as starting material at current densities of 720 mA/cm². The described process is suitable for the pilot scale production of sodium peroxodicarbonate. Peroxydicarbonate: Potassium peroxydicarbonate K2C2O6 was obtained by Constam and von Hansen in 1895; its crystal structure was determined only in 2002. It too can be obtained by electrolysis of a saturated potassium carbonate solution at −20 °C. It is a light blue crystalline solid that decomposes at 141 °C, releasing oxygen and carbon dioxide, and decomposes slowly at lower temperatures.Rubidium peroxodicarbonate is a light blue crystalline solid that decomposes at 424 K (151 °C). Its structure was published in 2003. In both salts, each of the two carbonate units is planar. In the rubidium salt the whole molecule is planar, whereas in the potassium salt the two units lie on different and nearly perpendicular planes, both of which contain the O–O bond.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gain-field encoding** Gain-field encoding: Gain field encoding is a hypothesis about the internal storage and processing of limb motion in the brain. In the motor areas of the brain, there are neurons which collectively have the ability to store information regarding both limb positioning and velocity in relation to both the body (intrinsic) and the individual's external environment (extrinsic). The input from these neurons is combined multiplicatively, forming what is referred to as a gain field. The gain field works as a collection of internal models on which the body can base its movements. The process of encoding and recalling these models is the basis of muscle memory. Physiology: Neurons involved in gain-field encoding work multiplicatively, taking the input from several together to form the gain field. It is this process that allows the complexity of motor control. Instead of simply encoding the motion of the limb for which a specific motion is desired, the multiplicative nature of the gain field ensures that the positioning of the rest of the body is taken into consideration. This process allows for motor coordination of flexible bimanual actions as opposed to restricting the individual to unimanual motion. For example, when considering the movement of both arms, the body calls upon gain field models for each arm in order to compensate for the mechanical interactions created by both. Physiology: Location Most gain field activity is based in the premotor cortex, found in the frontal lobe anterior to the primary motor cortex; however, it receives input from a variety of locations in the brain. These incoming signals provide frame-of-reference information through the individual's senses. Further evidence suggests that the cerebellum and posterior parietal cortex (PPC) also play major functional roles in gain field encoding. The intrinsic and extrinsic properties of the gain field can be shown as products of the PPC. In Brodmann area 7 of the PPC, the positioning of objects with respect to the eyes is represented completely extrinsically, with no input from the positioning of the body involved. This contrasts with other parts of the PPC, such as Brodmann area 5, which represent objects in relation to body-defined coordinates. Due to the extrinsic and intrinsic properties of motor functioning, it is speculated that these two types of signals are combined multiplicatively to form the gain field. With input from each area, a three-dimensional representation of the objects in space can be arranged for use by the rest of the motor system. Physiology: Unsurprisingly, lesions in the parietal cortex lead to deficiencies in an individual's spatial movements and coordination and, in some cases, hemineglect. These effects vary widely from person to person and depend on the location of the lesion, further hinting at the complicated nature of gain-modulated neurons. Physiology: Gain Modulation One of the key components of gain-field encoding is the variability in the response amplitude of the action potentials from neurons. This variability, when independent of a change in response selectivity, is called gain modulation. Gain modulation takes place in many cortical areas and is believed to be a common mechanism of neuronal computation. It allows for the combination of different sensory and cognitive information. For example, neurons implicated in processing a part of the visual field show a gain in response amplitude when focus shifts to that part of the field of vision.
Therefore, neurons that are gain modulated can represent multiple types of information. The multi-modal nature of these neurons makes them ideal for specific types of computations, mainly coordinate transformations. This creates the ability to think spatially, the main contributor to physical coordination. Physiology: Encoding Process The encoding of the neurons involved in the motor gain field follows the same gain modulation principles as most of the neurons within the brain. That is to say, when gain is increased, the connections between the firing neurons increase in strength, leading to further gain if the neurons continue to receive stimulation. This is why repetition of a particular set of motions leads to muscle memory. Physiology: Coordinate Manipulation One of the main results of gain-field encoding is the cognitive ability to manipulate the different coordinate planes that are dealt with daily and adjust limb muscle movements accordingly. A good example of this is moving a pointer across a computer screen with a mouse. Depending on the location of the user's head relative to the computer screen, as well as the angle at which the screen is being observed, the user's perspective of the screen will be very different. A mentally mapped grid of the screen appears much larger when the user is closer to the screen than when further away, and it is the brain's ability to keep a consistent mental representation that gives people the ability to function under such dynamic conditions. Mathematical Representation: The equation for the firing rate of a gain-modulated neuron is a combination of the two types of information being transmitted to the neuron: r = f(x)g(y), where r is the rate of fire, f(x) is a function of one type of information input, and g(y) is a function of another. For example, neural activity for the interaction between gaze direction and retinal image location is almost exactly multiplicative, where x represents the location of a stimulus in retinal coordinates and y represents gaze angle. The primary process by which this interaction can take place is speculated to be recurrent neural networks, in which neural connections form a directed cycle. Recurrent circuitry is abundant in cortical networks and reportedly plays a role in sustaining signals, signal amplification, and response selectivity. Evidence: Early hypotheses of gain field encoding suggested that the gain field models motion additively. This would mean that if two limbs needed to move, models for each would be called separately but at the same time. However, more recent studies in which more complex motor movements are observed have found that the gain field is created multiplicatively in order to allow the body to adapt to the constantly changing frames of reference experienced in everyday life. Evidence: This multiplicative property is an effect of recurrent neural circuitry. A target neuron that takes only two types of direct input can only combine them additively. However, mathematical models show that when the neuron also receives recursive input from neighboring neurons, the resulting transformation to the target neuron's firing rate is multiplicative. In this model, neurons with overlapping receptive fields excite each other, multiplying the strength. Likewise, neurons with non-overlapping receptive fields are inhibitory. The result is a response curve that is a scaled representation of the simple additive model.
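As a toy illustration of the multiplicative form r = f(x)g(y) above, the sketch below models a neuron with Gaussian tuning for retinal stimulus location whose overall response is scaled by a gaze-dependent gain. The specific tuning curve, gain function, and parameter values are illustrative assumptions, not measurements.

```python
# Toy gain-field neuron: response = retinal tuning f(x) * gaze-dependent gain g(y).
import math

def retinal_tuning(x_deg: float, preferred_deg: float = 0.0, width_deg: float = 10.0) -> float:
    """Gaussian tuning for stimulus location on the retina (f(x))."""
    return math.exp(-((x_deg - preferred_deg) ** 2) / (2.0 * width_deg ** 2))

def gaze_gain(y_deg: float, slope: float = 0.02, baseline: float = 1.0) -> float:
    """Planar (linear) gain as a function of gaze angle (g(y)), clipped at zero."""
    return max(0.0, baseline + slope * y_deg)

def firing_rate(x_deg: float, y_deg: float, peak_hz: float = 50.0) -> float:
    """Multiplicative gain-field response r = peak * f(x) * g(y)."""
    return peak_hz * retinal_tuning(x_deg) * gaze_gain(y_deg)

# Same retinal stimulus, three gaze angles: the selectivity (peak at x = 0 deg)
# is unchanged, but the response amplitude is rescaled by the gaze term.
for gaze in (-20.0, 0.0, 20.0):
    rates = [firing_rate(x, gaze) for x in (-10.0, 0.0, 10.0)]
    print(f"gaze {gaze:+.0f} deg -> rates {['%.1f' % r for r in rates]}")
```

The point of the multiplicative (rather than additive) combination is that downstream neurons can read out the stimulus position in a new reference frame, which is the coordinate-transformation role described in the text.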
Evidence: Observation of human developmental patterns also lends evidence toward this theory of gain-field encoding and gain modulation. Since arm movements are based on both intrinsic and extrinsic models, building these connections requires learning by self-generating movements and watching them. By moving the arms to different parts of space and following them with the eyes, the neurons form connections based on mechanical body movements as well as their positioning in external space. Ideally this is done from every possible gaze angle and position available. This provides the brain with the proper translations by aligning the retinal (extrinsic) and body-centered (intrinsic) representations of space. It is not surprising that before babies develop motor control of their limbs, they tend to flail and watch their own limbs move. A similar effect is found when people track moving objects with their eyes. The changing retinal image is referenced against the muscle movements of the eye, resulting in the same type of retinal/body-centered alignment. This is one more process that helps the brain properly encode the relationships needed to deal with changing perception, and it also serves as verification that the proper physical movements are being made. Evidence: A contrary hypothesis to gain-field encoding implicates the neurons of the primary motor cortex (M1) directly in dynamic muscle movement. An investigation into area M1 showed that when an individual is asked to rotate an object, activation of the M1 neurons thought to be controlling the motion happened instantaneously with muscle activation. This provides evidence for preliminary steps in which higher motor areas communicate with area M1 by means of gain modulation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Appendix cancer** Appendix cancer: Appendix cancers are very rare cancers of the vermiform appendix. Gastrointestinal stromal tumors are rare tumors with malignant potential. Primary lymphomas can occur in the appendix. Breast cancer, colon cancer, and tumors of the female genital tract may metastasize to the appendix. Diagnosis: Carcinoid tumors are the most common tumors of the appendix. Other common forms are mucinous adenocarcinomas, adenocarcinoma not otherwise specified (NOS), and signet ring cell adenocarcinoma, listed from highest to lowest incidence. Diagnosis: Carcinoid A carcinoid is a neuroendocrine tumor (NET) of the intestines. Carcinoids occur at a rate of about 0.15 per 100,000 per year. This subgroup makes up a large proportion of neoplasias, both malignant and benign. Almost 3 out of 4 of these tumors are associated with the region at the end of the appendix, and they tend to be diagnosed in the 4th to 5th decades of life. Women and Caucasian individuals show a slightly higher prevalence of neuroendocrine tumor diagnoses, for reasons that remain unexplained. Five-year survival for typical carcinoids averages between 70 and 80%; for advanced cases it ranges from 12 to 28%. Diagnosis: Mucinous neoplasm Mucinous cystadenoma is an obsolete term for appendiceal mucinous neoplasm. Treatment: Small carcinoids (<2 cm) without features of malignancy may be treated by appendectomy if complete removal is possible. Other carcinoids and adenocarcinomas may require right hemicolectomy. Note: the term "carcinoids" is outdated; these tumors are now more accurately called "neuroendocrine tumors." Pseudomyxoma peritonei treatment includes cytoreductive surgery, which involves the removal of visible tumor and affected essential organs within the abdomen and pelvis. The peritoneal cavity is infused with heated chemotherapy, known as HIPEC, in an attempt to eradicate residual disease. The surgery may or may not be preceded or followed by intravenous chemotherapy or HIPEC. Epidemiology: A study of primary malignancies in the United States found a rate of 0.12 cases per 1,000,000 population per year. Carcinoids that were not identified as malignant were not included in this data. Carcinoid is found in roughly 1 in 300-400 appendectomies for acute appendicitis. In a systematic literature review that identified 4765 appendiceal cancer patients, the incidence of appendiceal cancer was shown to have increased regardless of the type of tumor, age, sex, and stage of appendiceal cancer. Roughly 75% of appendiceal cases listed in the review had some form of metastasis. No clear explanation for this increase has been identified. One proposed theory is the increased use of computed tomography imaging in emergency departments since the early 1990s, allowing detection to occur before a surgery is performed. Notable cases: Actress Audrey Hepburn was diagnosed with appendiceal cancer and died of the disease in 1993. In 2007, ESPN anchor Stuart Scott was diagnosed with appendiceal cancer; he died of the disease in 2015. Serbian musician Vlada Divljan was diagnosed with the cancer in 2012 and died of subsequent complications in 2015. In April 2023, Wrexham fan Jay Fear, suffering from terminal appendix cancer, asked to meet the club's new co-owner, Hollywood star Ryan Reynolds, and within a few days the actor met Fear and his family. Fear died on 26 May 2023.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Enhanced TV Binary Interchange Format** Enhanced TV Binary Interchange Format: Enhanced TV Binary Interchange Format (EBIF) is a multimedia content format defined by a specification developed under the OpenCable project of CableLabs (Cable Television Laboratories, Inc.). The primary purpose of the EBIF content format is to represent an optimized collection of widget and byte code specifications that define one or more multimedia pages, similar to web pages, but specialized for use within an enhanced television or interactive television system. Enhanced TV Binary Interchange Format: An EBIF resource (file), i.e., a sequence of bytes that conforms to the EBIF content format, forms the primary information contained in an ETV Application. An ETV User Agent acquires and decodes an EBIF resource, presents the widgets and executes the actions it contains, in order to present a multimedia page to an end-user. Other types of more specialized EBIF resources play auxiliary roles to this principal role of encoding viewable and interactive pages. Common Resource Format: An EBIF resource consists of the following components: a Resource Header, an optional Common Section, and optional Platform Sections 1...N. A common or platform section of an EBIF resource consists of the following constructs: a Section Header, a Table Directory, Tables 0...N−1, and an optional Heap. Tables: The following table types are defined for use with EBIF: Action, Generic Data, Metadata, Palette, Platform Directory, Reference, Resource Locator, Trigger, and Widget. In addition to the above, an EBIF resource may include one or more private use tables that may be interpreted or used by specific user agents. Widgets: The following types of widgets are defined for use with EBIF: Button, Collection, Container, Form, Hidden, Hot Spot, Image, Multi-Line Text, Page, Private Use, Radio, Radio Group Container, Rectangle, Selector, Text, Text Input, Timer, and Video. Actions: In an EBIF resource, programmatic (procedural) information takes the form of byte code, where each operation and its (optional) operands is referred to as an action. Actions are organized into sequences by means of one or more action tables, where each entry points at (1) an encoded action and (2) the action table index of the next action to execute after the current action's execution is completed. An action sequence terminates when the next action table index is a special value (0xFFFF) or in the case of certain flow-of-control actions. Action sequences effectively represent one or more traditional code blocks with potential internal looping behavior. Actions: Action sequences are executed as a result of firing certain predefined events, such as a page load event, a key press event, a click event, etc. As such, all programmatic execution takes place in the context of event handlers, whose execution is serialized by an ETV User Agent. Actions: The following categories of actions are defined by EBIF: Flow of Control Actions, Predicate Actions, Variable Store Actions, Arithmetic Actions, Boolean Logic Actions, Mathematic Actions, String Actions, Array Actions, Application and Page Actions, Widget Actions, Table Actions, and Miscellaneous Actions. Memory Model: The action memory model is based on a variable store, and does not make use of registers or a stack. With the exception of one predefined, internal result value variable, all variables are preallocated (and typed) at compilation time.
These variables are represented in the form of a table referred to as an augmented reference table, where the content of the table is initialized at compilation time, then stored and mutated at runtime by an ETV User Agent. Actions: Execution Model The action execution model is based on the decoding and processing of action sequences that serve as event handlers. Execution of action sequences is serialized through the sequential dispatching of events to event handlers, completing the execution of an action sequence functioning as an event handler before executing any other applicable event handlers (for that event) and before processing any other enqueued event.
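To make the action-table model above concrete, here is a small, purely illustrative interpreter sketch: it walks a table of (operation, operands, next-index) entries and stops at the 0xFFFF terminator. The entry layout, opcode names, and variable store shown are invented for illustration and are not the binary encoding defined by the EBIF specification.

```python
# Illustrative sketch of an action-sequence walk (not the real EBIF encoding).
TERMINATOR = 0xFFFF  # special "next action" index that ends a sequence

def run_action_sequence(action_table, start_index, variables):
    """Execute actions starting at start_index until the terminator is reached.

    action_table: list of (operation, operands, next_index) tuples.
    variables: dict acting as the preallocated variable store.
    """
    index = start_index
    while index != TERMINATOR:
        operation, operands, next_index = action_table[index]
        if operation == "set":            # store a constant into a variable
            name, value = operands
            variables[name] = value
        elif operation == "add":          # arithmetic action: dst = a + b
            dst, a, b = operands
            variables[dst] = variables[a] + variables[b]
        elif operation == "goto_if":      # flow-of-control action
            var, target = operands
            if variables[var]:
                index = target
                continue
        index = next_index                # follow the chain to the next action
    return variables

# A tiny "event handler": initialise two variables, add them, then terminate.
table = [
    ("set", ("x", 2), 1),
    ("set", ("y", 3), 2),
    ("add", ("result", "x", "y"), TERMINATOR),
]
print(run_action_sequence(table, 0, {}))   # {'x': 2, 'y': 3, 'result': 5}
```

The chained next-index field is what lets a sequence behave like a traditional code block with internal loops, while the single variable store mirrors the register- and stack-free memory model described above.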
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perennial crop** Perennial crop: Perennial crops are crops that – unlike annual crops – don't need to be replanted each year. After harvest, they grow back on their own. Many fruit and nut crops are naturally perennial; there is also a growing movement to create perennial alternatives to annual crops. From the 1920s to the 1950s, researchers in the former Soviet Union attempted to perennialize annual wheats by crossing them with perennial relatives such as intermediate wheatgrass. Interest waned when the crosses repeatedly resulted in sterile offspring and significantly decreased seed yield. The project of perennializing grain was next taken up in 1986, when the Montana Agricultural Experiment Station developed a wheat hybrid that the Rodale Institute field-tested. For example, The Land Institute has bred a perennial grain crop known as Kernza. By eliminating or greatly reducing the need for tillage, perennial cropping can reduce topsoil losses due to erosion, increase biological carbon sequestration, and greatly reduce waterway pollution from agricultural runoff due to lower nitrogen input. Benefits: Erosion control: Because plant materials (stems, crowns, etc.) can remain in place year-round, topsoil erosion due to wind and rainfall/irrigation is reduced. Water-use efficiency: Because these crops tend to be more deeply and fibrously rooted than their annual counterparts, they are able to hold onto soil moisture more efficiently, while filtering pollutants (e.g. excess nitrogen) traveling to groundwater sources. Nutrient cycling efficiency: Because perennials take up nutrients more efficiently as a result of their extensive root systems, fewer supplemental nutrients are needed, lowering production costs while reducing possible sources of excess fertilizer runoff. Light interception efficiency: Earlier canopy development and longer green leaf duration increase the seasonal light interception efficiency of perennials, an important factor in plant productivity. Carbon sequestration: Because perennial grasses use a greater fraction of carbon to produce root systems, more carbon is integrated into soil organic matter, contributing to increases in soil organic carbon stocks. Perennial species have been shown to provide an opportunity for mitigating or reducing the negative effects of climate change while sustaining agricultural productivity. Perennial plant communities may also enhance ecosystem resilience, stability, and the ability to adapt to environmental fluctuations, owing to their high levels of biodiversity. Examples: Existing crops include fruit trees, oil palm, edible berries, asparagus, rhubarb, chives, mint, oregano, and kale. Under development: Miscanthus giganteus - a perennial crop with high yields and high GHG mitigation potential. Perennial sunflower - a perennial oil and seed crop developed through backcrossing genes from wild sunflower. Perennial grain - more extensive root systems allow for more efficient water and nutrient uptake, while reducing erosion due to rain and wind year-round. Perennial rice - currently in development using methods similar to those used to produce the perennialized sunflower; perennial rice promises to reduce deforestation through increases in production efficiency by keeping cleared land out of the fallow stage for long periods of time.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SYSGO** SYSGO: SYSGO GmbH is a German information technologies company that supplies operating systems and services, including Linux, for embedded systems with high safety- and security-related requirements. For security-critical applications, the company offers the Hypervisor and RTOS PikeOS, an operating system for multicore processors and the foundation for intelligent devices in the Internet of Things (IoT). As an operating system provider, SYSGO supports companies with the formal certification of software to international standards for safety and security in markets such as aerospace and defence, industrial automation, automotive, railway, and medical, as well as network infrastructure. SYSGO participates in a variety of international research projects and standardisation initiatives in the area of safety and security. History: SYSGO was founded in 1991. On the initiative of company founder Knut Degen, the company specialized in the use of Linux-based operating systems in embedded applications. In the 1990s, SYSGO worked mainly with LynxOS. In 1999, the company launched the first product of its own, a development environment for Linux-based embedded applications by the name of ELinOS. SYSGO introduced the first version of its PikeOS real-time operating system in 2005. With hypervisor functionality integrated into its basic structure, this operating system allows multiple embedded applications with different functional safety requirements to be operated on the same processor. The current version of PikeOS can run safety-critical applications for aerospace, automotive, rail and other industrial applications. 2009 saw the market launch of a software-only implementation of an AFDX stack (Avionics Full DupleX Switched Ethernet) for Safety-Critical Ethernet in accordance with ARINC-664 Part 7, which was certified to DO-178B. In 2013, SYSGO also achieved SIL 4 certification on multicore processors for EN 50128, a European standard for safety-relevant software used in railway applications. The first subsidiary of the company was established in Ulm in 1997, followed by Prague (2004), Paris (2005) and Rostock (2008). In 2012, SYSGO was taken over by the Thales Group of France. In 2019, SYSGO built its new headquarters in Klein-Winternheim, near Mainz, and moved in by April 2020. Products and services: SYSGO's best-known product is PikeOS, a real-time operating system with a separation kernel-based Hypervisor, which provides multiple partitions for a variety of other operating systems and equips them with time schedules. Products and services: Other products include: ELinOS, a Linux operating system for embedded applications; Safety-Critical Ethernet/AFDX, a software implementation of ARINC-664 Part 7; and various components required for certification. The PikeOS Hypervisor forms a foundation for critical systems in which both safety and security have to be ensured. The company also offers various certification kits. These certification kits include, for example, support documentation for development and testing and, if necessary, additional safety and security information to allow the development of standards-compliant systems. Research: SYSGO is the technical lead for the EU research project certMILS. The goal of certMILS is primarily to make a certified European MILS platform available, and thus simplify the certification of composite IT systems. The project is supported by the EU as part of the Horizon 2020 programme. Customers and partner network: Customers include companies that are working, among other things,
on solutions for the Internet of Things, especially suppliers and manufacturers in the aerospace and defence, automotive, railway, and industrial sectors who have high safety and security requirements for their applications.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Agraphia** Agraphia: Agraphia is an acquired neurological disorder causing a loss in the ability to communicate through writing, either due to some form of motor dysfunction or an inability to spell. The loss of writing ability may present with other language or neurological disorders; disorders appearing commonly with agraphia are alexia, aphasia, dysarthria, agnosia, acalculia and apraxia. The study of individuals with agraphia may provide more information about the pathways involved in writing, both language-related and motoric. Agraphia cannot be directly treated, but individuals can learn techniques to help regain and rehabilitate some of their previous writing abilities. These techniques differ depending on the type of agraphia. Agraphia: Agraphia can be broadly divided into central and peripheral categories. Central agraphias typically involve language areas of the brain, causing difficulty spelling or with spontaneous communication, and are often accompanied by other language disorders. Peripheral agraphias usually target motor and visuospatial skills in addition to language and tend to involve motoric areas of the brain, causing difficulty in the movements associated with writing. Central agraphia may also be called aphasic agraphia as it involves areas of the brain whose major functions are connected to language and writing; peripheral agraphia may also be called nonaphasic agraphia as it involves areas of the brain whose functions are not directly connected to language and writing (typically motor areas). The history of agraphia dates to the mid-fourteenth century, but it was not until the second half of the nineteenth century that it sparked significant clinical interest. Research in the twentieth century focused primarily on aphasiology in patients with lesions from strokes. Characteristics: Agraphia, or impairment in producing written language, can occur in many forms because writing involves many cognitive processes (language processing, spelling, visual perception, visuospatial orientation for graphic symbols, motor planning, and motor control of handwriting). Agraphia has two main subgroupings: central ("aphasic") agraphia and peripheral ("nonaphasic") agraphia. Central agraphias include lexical, phonological, deep, and semantic agraphia. Peripheral agraphias include allographic, apraxic, motor execution, hemianoptic and afferent agraphia. Characteristics: Central Central agraphia occurs when there are both impairments in spoken language and impairments to the various motor and visualization skills involved in writing. Individuals who have agraphia with fluent aphasia write a normal quantity of well-formed letters, but lack the ability to write meaningful words. Receptive aphasia is an example of fluent aphasia. Those who have agraphia with nonfluent aphasia can write brief sentences but their writing is difficult to read. Their writing requires great physical effort but lacks proper syntax and often has poor spelling. Expressive aphasia is an example of nonfluent aphasia. Individuals who have alexia with agraphia have difficulty with both the production and comprehension of written language. This form of agraphia does not impair spoken language. Characteristics: Deep agraphia affects an individual's phonological ability and orthographic memory. Deep agraphia is often the result of a lesion involving the left parietal region (supramarginal gyrus or insula).
Individuals can neither remember how words look when spelled correctly, nor sound them out to determine spelling. Individuals typically rely on their damaged orthographic memory to spell; this results in frequent errors, usually semantic in nature. Individuals have more difficulty with abstract concepts and uncommon words. Reading and spoken language are often impaired as well. Characteristics: Gerstmann syndrome agraphia is the impairment of written language production associated with the following symptoms: difficulty discriminating between one's own fingers, difficulty distinguishing left from right, and difficulty performing calculations. All four of these symptoms result from pathway lesions. Gerstmann's syndrome may additionally be present with alexia and mild aphasia. Global agraphia also impairs an individual's orthographic memory, though to a greater extent than deep agraphia. In global agraphia, spelling knowledge is lost to such a degree that the individual can only write very few meaningful words, or cannot write any words at all. Reading and spoken language are also markedly impaired. Characteristics: Lexical and structural agraphia are caused by damage to the orthographic memory; these individuals cannot visualize the spelling of a word, though they do retain the ability to sound words out. This impaired spelling memory can imply the loss or degradation of the knowledge or just an inability to efficiently access it. There is a regularity effect associated with lexical agraphia in that individuals are less likely to correctly spell words without regular, predictable spellings. Additionally, spelling ability tends to be less impaired for common words. Individuals also have difficulty with homophones. Language competence in terms of grammar and sentence writing tends to be preserved. Characteristics: Phonological agraphia is the opposite of lexical agraphia in that the ability to sound out words is impaired, but the orthographical memory of words may be intact. It is associated with a lexicality effect: a difference in the ability to spell words versus nonwords, because individuals with this form of agraphia depend on their orthographic memory. Additionally, it is often harder for these individuals to access more abstract words without strong semantic representations (i.e., it is more difficult for them to spell prepositions than concrete nouns). Characteristics: Pure agraphia is the impairment in written language production without any other language or cognitive disorder. Agraphia can occur separately or co-occur with other disorders and can be caused by damage to the angular gyrus. Peripheral Peripheral agraphias occur when there is damage to the various motor and visualization skills involved in writing. Characteristics: Apraxic agraphia is the impairment in written language production associated with disruption of the motor system. It results in distorted, slow, effortful, incomplete, and/or imprecise letter formation. Though written letters are often so poorly formed that they are almost illegible, the ability to spell aloud is often retained. This form of agraphia is caused specifically by a loss of specialized motor plans for the formation of letters and not by any dysfunction affecting the writing hand. Apraxic agraphia may present with or without ideomotor apraxia. Paralysis, chorea, Parkinson's disease (micrographia), and dystonia (writer's cramp) are motor disorders commonly associated with agraphia.
Characteristics: Hysterical agraphia is the impairment in written language production caused by a conversion disorder. Reiterative agraphia is found in individuals who repeat letters, words, or phrases in written language production an abnormal number of times. Perseveration, paragraphia, and echographia are examples of reiterative agraphia. Characteristics: Visuospatial agraphia is the impairment in written language production defined by a tendency to neglect one portion (often an entire side) of the writing page, slanting lines upward or downward, and abnormal spacing between letters, syllables, and words. The orientation and correct sequencing of the writing will also be impaired. Visuospatial agraphia is frequently associated with left hemispatial neglect, difficulty in building or assembling objects, and other spatial difficulties. Causes: Agraphia has a multitude of causes, including strokes, lesions, traumatic brain injury, and dementia. Twelve regions of the brain are associated with handwriting. The four distinct functional areas are the left superior frontal area, composed of the middle frontal gyrus and the superior frontal sulcus; the left superior parietal area, composed of the inferior parietal lobule, the superior parietal lobule, and the intraparietal sulcus; and lastly the primary motor cortex and the somatosensory cortex. The eight other areas are considered associative areas and are the right anterior cerebellum, the left posterior nucleus of the thalamus, the left inferior frontal gyrus, the right posterior cerebellum, the right superior frontal cortex, the right inferior parietal lobule, the left fusiform gyrus and the left putamen. The specific type of agraphia resulting from brain damage will depend on which area of the brain was damaged. Causes: Phonological agraphia is linked to damage in areas of the brain involved in phonological processing skills (sounding out words), specifically the language areas around the Sylvian fissure, such as Broca's area, Wernicke's area, and the supramarginal gyrus. Lexical agraphia is associated with damage to the left angular gyrus and/or posterior temporal cortex. The damage is typically posterior and inferior to the perisylvian language areas. Deep agraphia involves damage to the same areas of the brain as lexical agraphia plus some damage to the perisylvian language areas as well. More extensive left hemisphere damage can lead to global agraphia. Gerstmann's syndrome is caused by a lesion of the dominant (usually the left) parietal lobe, usually an angular gyrus lesion. Apraxic agraphia with ideomotor apraxia is typically caused by damage to the superior parietal lobe (where graphomotor plans are stored) or the premotor cortex (where the plans are converted into motor commands). Additionally, some individuals with cerebellar lesions (more typically associated with non-apraxic motor dysfunction) develop apraxic agraphia. Apraxic agraphia without ideomotor apraxia may be caused by damage to either of the parietal lobes, the dominant frontal lobe, or the dominant thalamus. Visuospatial agraphia typically has a right hemisphere pathology. Damage to the right frontal area of the brain may cause more motor defects, whereas damage to the posterior part of the right hemisphere leads predominantly to spatial defects in writing. Causes: Alzheimer's disease Agraphia is often seen in association with Alzheimer's disease (AD). Writing disorders can be an early manifestation of AD.
In individuals with AD, the first sign pertaining to writing skills is the selective syntactic simplification of their writing. Individuals will write with less description, detail and complexity, and other markers, such as grammatical errors, may emerge. Different agraphias may develop as AD progresses. In the beginning stages of AD, individuals show signs of allographic agraphia and apraxic agraphia. Allographic agraphia is represented in AD individuals by the mixing of lower and upper case letters in words; apraxic agraphia is represented in AD patients through poorly constructed or illegible letters and omission or over-repetition of letter strokes. As their AD progresses, so does the severity of their agraphia; they may begin to develop spatial agraphia, which is the inability to write in a straight horizontal line, and there are often unnecessary gaps between letters and words. A connection between AD and agraphia is the role of memory in normal writing ability. Normal spellers have access to a lexical spelling system that operates on whole words; when functioning properly, it allows the spelling of a complete word to be recalled as a unit, not as individual letters or sounds. This system further uses an internal memory store where the spellings of hundreds of words are kept. This is called the graphemic output lexicon and is aptly named in relation to the graphemic buffer, which is the short-term memory loop for many of the functions involved in handwriting. When the spelling system cannot be used, such as with unfamiliar words, non-words or words that we do not recognize the spelling for, some people are able to use the phonological process called the sub-lexical spelling system. This system is used to sound out a word and spell it. In AD individuals, memory stores that are used for everyday handwriting are lost as the disease progresses. Management: Agraphia cannot be directly treated, but individuals can be rehabilitated to regain some of their previous writing abilities. For the management of phonological agraphia, individuals are trained to memorize key words, such as a familiar name or object, that can then help them form the grapheme for that phoneme. Management of allographic agraphia can be as simple as having alphabet cards so the individual can write legibly by copying the correct letter shapes. There are few rehabilitation methods for apraxic agraphia; if the individual has considerably better hand control and movement with typing than they do with handwriting, then they can use technological devices. Texting and typing do not require the same technical movements that handwriting does; for these technological methods, only spatial location of the fingers to type is required. If copying skills are preserved in an individual with apraxic agraphia, repeated copying may help shift from the highly intentional and monitored hand movements indicative of apraxic agraphia to a more automated control. Micrographia is a condition that can occur with the development of other disorders, such as Parkinson's disease, in which handwriting becomes so small that it is illegible. For some individuals, a simple command to write bigger eliminates the issue. Management: Anagram and Copy Treatment (ACT) uses the arrangement of component letters of target words and then repeated copying of the target word. This is similar to CART; the main difference is that the target words for ACT are specific to the individual.
Target words that are important in the life of the individual are emphasized because people with deep or global agraphias do not typically have the same memory for the words as other people with agraphia do. Writing can be even more important to these people as it can cue spoken language. ACT helps in this by facilitating the relearning of a set of personally relevant written words for use in communication. Management: The Copy and Recall Treatment (CART) method helps to reestablish the ability to spell specific words that are learned through repeated copying and recall of target words. CART is more likely to be successful in treating lexical agraphia when a few words are trained to mastery than when a large group of unrelated words is trained. Words chosen can be individualized to the patient, which makes treatment more personalized. Management: The graphemic buffer approach uses the training of specific words to improve spelling. Cueing hierarchies and the copy-and-recall method for specific words are used to work the words into the short-term memory loop, or graphemic buffer. The segmentation of longer words into shorter syllables helps bring words into short-term memory. The problem-solving approach is used as a self-correcting method for phonological errors. The individual sounds out the word and attempts to spell it, typically using an electronic dictionary-type device that indicates correct spelling. This method takes advantage of the preserved sound-to-letter correspondences when they are intact. This approach may improve access to spelling memory, strengthen orthographic representations, or both. History: In 1553 Thomas Wilson's book Arte of Rhetorique contained the earliest known description of what would now be called acquired agraphia. In the second half of the nineteenth century, the loss of the ability to produce written language received clinical attention, when ideas about localization in the brain influenced studies about dissociation between written and spoken language as well as reading and writing. Paul Broca's work on aphasia during this time inspired researchers across Europe and North America to begin conducting studies on the correlation between lesions and loss of function in various cortical areas. During the 1850s, clinicians such as Armand Trousseau and John Hughlings Jackson held the prevailing view that the same linguistic deficiency occurred in writing as well as speech and reading impairments. In 1856, Louis-Victor Marcé argued that written and spoken language were independent of each other; he discovered that in many patients with language disorders, both speech and writing were impaired. The recovery of written and spoken language was not always parallel, suggesting that these two modes of expression were independent. He believed the ability to write not only involved motor control, but also the memory of the signs and their meaning. In 1867, William Ogle, who coined the term agraphia, made several key observations about the patterns of dissociation found in written and spoken language. He demonstrated that some patients with writing impairments were able to copy written letters but struggled to arrange the letters to form words. Ogle knew that aphasia and agraphia often occurred together, but he confirmed that the impairment of two different types of language (spoken and written) can vary in type and severity.
Although Ogle's review helped make important advancements toward understanding writing disorders, a documented case of pure agraphia was missing. In 1884, over two decades after research on acquired language disorders began, Albert Pitres made an important contribution when he published a clinical report of pure agraphia. According to Pitres, Marcé and Ogle were the first to emphasize the dissociation between speech and writing. His work was also strongly influenced by Théodule-Armand Ribot's modular approach to memory. Pitres's clinical case study in 1884 argues for the localization of writing in the brain. Pitres's reading and writing models consisted of three main components: visual (the memory for letters and how letters are put together to form syllables and words), auditory (the memory for the sounds of each letter), and motor (motor-graphic memory of the letters). He proposed the following classifications of agraphia: Agraphia by word blindness: inability to copy a model, but the individual can write spontaneously and in response to dictation. History: Agraphia by word deafness: inability to write to dictation, but the individual can copy a model and write spontaneously. Motor agraphia: no ability to write, but the individual can spell. Pitres said that in aphasia, the intellect is not systematically impaired. Research in the twentieth century focused primarily on aphasiology in patients with lesions from cerebrovascular accidents. From these studies, researchers gained significant insight into the complex cognitive process of producing written language.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Long posterior ciliary arteries** Long posterior ciliary arteries: The long posterior ciliary arteries are arteries of the orbit. There are two long posterior ciliary arteries on each side of the body. They are branches of the ophthalmic artery. They pass forward within the eye to reach the ciliary body, where they ramify and anastomose with the anterior ciliary arteries, thus forming the major arterial circle of the iris. The long posterior ciliary arteries contribute arterial supply to the choroid, ciliary body, and iris. Anatomy: There are two long posterior ciliary arteries. They are branches of the ophthalmic artery. Anatomy: Course and relations The long posterior ciliary arteries first run near the optic nerve before piercing the posterior sclera near the optic nerve. They pass anteriorly - one along each side of the eyeball - between the sclera and choroid to reach the ciliary muscle, where they divide into two branches which go on to form the major arterial circle of the iris. Anatomy: Anastomoses Non-terminal branches of the long posterior ciliary arteries anastomose with branches of the short posterior ciliary arteries. Upon reaching the ciliary body, the long posterior ciliary arteries ramify superiorly and inferiorly, the branches forming anastomoses with each other and with those of the anterior ciliary arteries to form the major arterial circle of the iris. Distribution The long posterior ciliary arteries supply the choroid, ciliary body, and iris. Non-terminal branches are distributed to the ciliary muscle/ciliary body and anterior choroid. Terminal branches are distributed to the iris and ciliary body via the major arterial circle of the iris.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dynamic/Dialup Users List** Dynamic/Dialup Users List: A Dial-up/Dynamic User List (DUL) is a type of DNSBL which contains the IP addresses an ISP assigns to its customers on a temporary basis, often using DHCP or similar protocols. Dynamically assigned IP addresses are contrasted with static IP addresses, which do not change once they have been allocated by the service provider. Dynamic/Dialup Users List: DULs serve several purposes. Their primary function is to assist an ISP in enforcing its Acceptable Use Policy; many such policies prohibit customers from setting up an email server. Customers are expected to use the email facilities of the service provider. This use of a DUL is especially helpful in curtailing abuse when a customer's computer has been converted into a zombie computer and is distributing email without the knowledge of the computer's owner. A second major use involves receivers who do not wish to accept email from computers with dynamically assigned IP addresses. They use DULs to enforce this policy. Receivers adopt such policies because computers at dynamically assigned IP addresses are so often a source of spam. Dynamic/Dialup Users List: The first DUL was created by Gordon Fecyk in 1998. It quickly became quite popular because it addressed a specific tactic popular with spammers at the time. The DUL was subsequently absorbed by Mail Abuse Prevention System (MAPS) in 1999. When MAPS was no longer a free service, other DNSBLs such as Dynablock, Not Just Another Bogus List (NJABL), and Spam and Open Relay Blocking System (SORBS) began providing lists of dynamically assigned IP addresses.
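DULs are published and queried like other DNSBLs: the client reverses the octets of the IPv4 address, appends the list's zone, and performs an ordinary DNS A lookup; an answer (conventionally in 127.0.0.0/8) means the address is listed, while NXDOMAIN means it is not. The sketch below shows that mechanism with the Python standard library; dul.example.org is a placeholder zone, not a real list.

```python
# Check an IPv4 address against a DNSBL-style dynamic/dial-up list (DUL).
# The zone name below is a placeholder; substitute the list you actually use.
import socket

def is_listed(ip: str, zone: str = "dul.example.org") -> bool:
    """Return True if `ip` appears in the DNS-based list `zone`."""
    reversed_octets = ".".join(reversed(ip.split(".")))   # 192.0.2.1 -> 1.2.0.192
    query_name = f"{reversed_octets}.{zone}"
    try:
        answer = socket.gethostbyname(query_name)         # e.g. "127.0.0.3" if listed
        return answer.startswith("127.")
    except socket.gaierror:                               # NXDOMAIN: not listed
        return False

if __name__ == "__main__":
    print(is_listed("192.0.2.1"))   # False unless the placeholder zone lists it
```

A mail server would typically run this check at SMTP connection time and reject or flag mail from listed addresses, which is how receivers enforce the no-direct-mail-from-dynamic-addresses policy described above.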
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**High Technology Theft Apprehension and Prosecution Program** High Technology Theft Apprehension and Prosecution Program: The High Technology Theft Apprehension and Prosecution Program (HTTAP Program) is a program within the California Emergency Management Agency (CalEMA) concerned with high technology crime including white-collar crime, cracking, computerized money laundering, theft of services, copyright infringement of software, remarking and counterfeiting of computer hardware and software, and industrial espionage. High Technology Crime Advisory Committee: The High Technology Crime Advisory Committee was "established for the purpose of formulating a comprehensive written strategy for addressing high technology crime throughout the state" and is composed of the following individuals appointed by the CalEMA Secretary: a designee of the California District Attorneys Association; a designee of the California State Sheriffs Association; a designee of the California Police Chiefs Association; a designee of the California Attorney General; a designee of the California Highway Patrol; a designee of the High Technology Crime Investigation Association; a designee of the California Emergency Management Agency; a designee of the American Electronics Association to represent California computer system manufacturers; a designee of the American Electronics Association to represent California computer software producers; a designee of CTIA - The Wireless Association; a representative of the California Internet industry; a designee of the Semiconductor Equipment and Materials International; a designee of the California Cable & Telecommunications Association; a designee of the Motion Picture Association of America; a designee of the California Communications Associations (CalCom); a representative of the California banking industry; a representative of the California Office of Information Security and Privacy Protection; a representative of the California Department of Finance; a representative of the California State Chief Information Officer; a representative of the Recording Industry Association of America; and a representative of the Consumers Union. Task Forces: The program is implemented by funding and supporting independent regional task forces: the Computer and Technology Crime High-Tech Response Team (CATCH) of the San Diego County District Attorney's Office; the Northern California Computer Crimes Task Force (NC3TF) of the Marin County District Attorney's Office; the Rapid Enforcement Allied Computer Team (REACT) of the Santa Clara County District Attorney's Office; the Southern California High Tech Task Force (SCHTTF) of the Los Angeles County Sheriff's Department; and the Sacramento Valley Hi-Tech Crimes Task Force (SVHTCTF) of the Sacramento County Sheriff's Department.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Amylase** Amylase: An amylase is an enzyme that catalyses the hydrolysis of starch (Latin amylum) into sugars. Amylase is present in the saliva of humans and some other mammals, where it begins the chemical process of digestion. Foods that contain large amounts of starch but little sugar, such as rice and potatoes, may acquire a slightly sweet taste as they are chewed because amylase degrades some of their starch into sugar. The pancreas and salivary gland make amylase (alpha amylase) to hydrolyse dietary starch into disaccharides and trisaccharides, which are converted by other enzymes to glucose to supply the body with energy. Plants and some bacteria also produce amylase. Specific amylase proteins are designated by different Greek letters. All amylases are glycoside hydrolases and act on α-1,4-glycosidic bonds. Classification: α-Amylase The α-amylases (EC 3.2.1.1) (CAS 9014-71-5) (alternative names: 1,4-α-D-glucan glucanohydrolase; glycogenase) are calcium metalloenzymes. By acting at random locations along the starch chain, α-amylase breaks down long-chain saccharides, ultimately yielding either maltotriose and maltose from amylose, or maltose, glucose and "limit dextrin" from amylopectin. They belong to glycoside hydrolase family 13 (https://www.cazypedia.org/index.php/Glycoside_Hydrolase_Family_13). Because it can act anywhere on the substrate, α-amylase tends to be faster-acting than β-amylase. In animals, it is a major digestive enzyme, and its optimum pH is 6.7–7.0. In human physiology, both the salivary and pancreatic amylases are α-amylases. The α-amylase form is also found in plants, fungi (ascomycetes and basidiomycetes) and bacteria (Bacillus). Classification: β-Amylase Another form of amylase, β-amylase (EC 3.2.1.2) (alternative names: 1,4-α-D-glucan maltohydrolase; glycogenase; saccharogen amylase) is also synthesized by bacteria, fungi, and plants. Working from the non-reducing end, β-amylase catalyzes the hydrolysis of the second α-1,4 glycosidic bond, cleaving off two glucose units (maltose) at a time. During the ripening of fruit, β-amylase breaks starch into maltose, resulting in the sweet flavor of ripe fruit. They belong to glycoside hydrolase family 14. Classification: Both α-amylase and β-amylase are present in seeds; β-amylase is present in an inactive form prior to germination, whereas α-amylase and proteases appear once germination has begun. Many microbes also produce amylase to degrade extracellular starches. Animal tissues do not contain β-amylase, although it may be present in microorganisms contained within the digestive tract. The optimum pH for β-amylase is 4.0–5.0. Classification: γ-Amylase γ-Amylase (EC 3.2.1.3) (alternative names: glucan 1,4-α-glucosidase; amyloglucosidase; exo-1,4-α-glucosidase; glucoamylase; lysosomal α-glucosidase; 1,4-α-D-glucan glucohydrolase) will cleave α(1–6) glycosidic linkages, as well as the last α-1,4 glycosidic bond at the nonreducing end of amylose and amylopectin, yielding glucose. γ-Amylase has the most acidic optimum pH of all amylases because it is most active around pH 3. They belong to a variety of different GH families, such as glycoside hydrolase family 15 in fungi, glycoside hydrolase family 31 of human MGAM, and glycoside hydrolase family 97 of bacterial forms. Uses: Fermentation α- and β-amylases are important in brewing beer and liquor made from sugars derived from starch. In fermentation, yeast ingests sugars and excretes ethanol.
In beer and some liquors, the sugars present at the beginning of fermentation have been produced by "mashing" grains or other starch sources (such as potatoes). In traditional beer brewing, malted barley is mixed with hot water to create a "mash", which is held at a given temperature to allow the amylases in the malted grain to convert the barley's starch into sugars. Different temperatures optimize the activity of alpha or beta amylase, resulting in different mixtures of fermentable and unfermentable sugars. In selecting mash temperature and grain-to-water ratio, a brewer can change the alcohol content, mouthfeel, aroma, and flavor of the finished beer. Uses: In some historic methods of producing alcoholic beverages, the conversion of starch to sugar starts with the brewer chewing grain to mix it with saliva. This practice continues in home production of some traditional drinks, such as chhaang in the Himalayas, chicha in the Andes, and kasiri in Brazil and Suriname. Uses: Flour additive Amylases are used in breadmaking to break down complex sugars, such as starch (found in flour), into simple sugars. Yeast then feeds on these simple sugars and converts them into the waste products of ethanol and carbon dioxide. This imparts flavour and causes the bread to rise. While amylases are found naturally in yeast cells, it takes time for the yeast to produce enough of these enzymes to break down significant quantities of starch in the bread. This is the reason for long-fermented doughs such as sourdough. Modern breadmaking techniques have incorporated amylases (often in the form of malted barley) into bread improver, thereby making the process faster and more practical for commercial use. α-Amylase is often listed as an ingredient on commercially packaged milled flour. Bakers with long exposure to amylase-enriched flour are at risk of developing dermatitis or asthma. Uses: Molecular biology In molecular biology, the presence of amylase can serve as an additional method of selecting for successful integration of a reporter construct in addition to antibiotic resistance. As reporter genes are flanked by homologous regions of the structural gene for amylase, successful integration will disrupt the amylase gene and prevent starch degradation, which is easily detectable through iodine staining. Uses: Medical uses Amylase also has medical applications in the use of pancreatic enzyme replacement therapy (PERT). It is one of the components in Sollpura (liprotamase) to help in the breakdown of saccharides into simple sugars. Other uses An inhibitor of alpha-amylase, called phaseolamin, has been tested as a potential diet aid. When used as a food additive, amylase has E number E1100, and may be derived from pig pancreas or mold fungi. Bacillary amylase is also used in clothing and dishwasher detergents to dissolve starches from fabrics and dishes. Factory workers who work with amylase for any of the above uses are at increased risk of occupational asthma. Five to nine percent of bakers have a positive skin test, and a fourth to a third of bakers with breathing problems are hypersensitive to amylase. Hyperamylasemia: Blood serum amylase may be measured for purposes of medical diagnosis.
A higher than normal concentration may reflect any of several medical conditions, including acute inflammation of the pancreas (which may be measured concurrently with the more specific lipase), perforated peptic ulcer, torsion of an ovarian cyst, strangulation, ileus, mesenteric ischemia, macroamylasemia and mumps. Amylase may be measured in other body fluids, including urine and peritoneal fluid. Hyperamylasemia: A January 2007 study from Washington University in St. Louis suggests that saliva tests of the enzyme could be used to indicate sleep deficits, as the enzyme increases its activity in correlation with the length of time a subject has been deprived of sleep. History: In 1831, Erhard Friedrich Leuchs (1800–1837) described the hydrolysis of starch by saliva, due to the presence of an enzyme in saliva, "ptyalin", an amylase. It was named after the Ancient Greek name for saliva: πτύαλον - ptyalon. The modern history of enzymes began in 1833, when French chemists Anselme Payen and Jean-François Persoz isolated an amylase complex from germinating barley and named it "diastase". It is from this term that all subsequent enzyme names tend to end in the suffix -ase. In 1862, Alexander Jakulowitsch Danilewsky (1838–1923) separated pancreatic amylase from trypsin. Evolution: Salivary amylase Saccharides are a food source rich in energy. Large polymers such as starch are partially hydrolyzed in the mouth by the enzyme amylase before being cleaved further into sugars. Many mammals have seen great expansions in the copy number of the amylase gene. These duplications allow for the pancreatic amylase AMY2 to re-target to the salivary glands, allowing animals to detect starch by taste and to digest starch more efficiently and in higher quantities. This has happened independently in mice, rats, dogs, pigs, and most importantly, humans after the agricultural revolution. Following the agricultural revolution 12,000 years ago, the human diet began to shift more toward plant and animal domestication in place of hunting and gathering. Starch has become a staple of the human diet. Evolution: Despite the obvious benefits, early humans did not possess salivary amylase, a trend that is also seen in evolutionary relatives of humans, such as chimpanzees and bonobos, who possess either one or no copies of the gene responsible for producing salivary amylase. As in other mammals, the pancreatic alpha-amylase AMY2 was duplicated multiple times. One event allowed it to evolve salivary specificity, leading to the production of amylase in the saliva (named in humans as AMY1). The 1p21.1 region of human chromosome 1 contains many copies of these genes, variously named AMY1A, AMY1B, AMY1C, AMY2A, AMY2B, and so on. However, not all humans possess the same number of copies of the AMY1 gene. Populations known to rely more on saccharides have a higher number of AMY1 copies than human populations that, by comparison, consume little starch. The number of AMY1 gene copies in humans can range from six copies in agricultural groups such as European-Americans and Japanese (two high-starch populations) to only two to three copies in hunter-gatherer societies such as the Biaka, Datog, and Yakuts. The correlation between starch consumption and the number of AMY1 copies in a population suggests that a higher number of AMY1 copies in high-starch populations has been selected for by natural selection and is considered the favorable phenotype for those individuals.
Therefore, it is most likely that possessing more copies of AMY1 in a high-starch population increases fitness and produces healthier, fitter offspring. This fact is especially apparent when comparing geographically close populations with different eating habits that possess a different number of copies of the AMY1 gene. Such is the case for some Asian populations that have been shown to possess few AMY1 copies relative to some agricultural populations in Asia. This offers strong evidence that natural selection has acted on this gene, as opposed to the possibility that the gene has spread through genetic drift. Variation in amylase copy number in dogs mirrors that in human populations, suggesting they acquired the extra copies as they followed humans around. Unlike humans, whose amylase levels depend on the starch content of their diet, wild animals eating a broad range of foods tend to have more copies of amylase. This may have more to do with the detection of starch than with its digestion.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Containerization (computing)** Containerization (computing): In software engineering, containerization is operating system-level virtualization or application-level virtualization over multiple network resources so that software applications can run in isolated user spaces called containers in any cloud or non-cloud environment, regardless of type or vendor. Usage: A container is essentially a fully functional and portable computing environment, cloud or non-cloud, that surrounds the application and keeps it independent of other environments running in parallel. Individually, each container simulates a different software application and runs isolated processes by bundling related configuration files, libraries and dependencies. Collectively, however, multiple containers share a common operating system (OS) kernel. In recent years, containerization technology has been widely adopted by cloud computing platforms like Amazon Web Services, Microsoft Azure, Google Cloud Platform, and IBM Cloud. Containerization has also been pursued by the U.S. Department of Defense as a way of more rapidly developing and fielding software updates, with first application in its F-22 air superiority fighter. Types of containers: OS containers and application containers. Security issues: Because of the shared OS kernel, security threats can affect the whole containerized system. In containerized environments, security scanners generally protect the OS but not the application containers, which adds unwanted vulnerability. Container management, orchestration, clustering: Container orchestration or container management is mostly used in the context of application containers. Implementations providing such orchestration include Kubernetes and Docker Swarm. Container cluster management: Container clusters need to be managed. This includes functionality to create a cluster, to upgrade or repair the software, to balance the load between existing instances, to scale by starting or stopping instances to adapt to the number of users, and to log activities and monitor produced logs or the application itself by querying sensors. Open-source implementations of such software include OKD and Rancher. A number of companies provide container cluster management as a managed service, including Alibaba, Amazon, Google, and Microsoft.
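As a minimal illustration of launching an application in an isolated container from code, the sketch below uses the Python Docker SDK (the docker package). It assumes a local Docker daemon is reachable; the image name and memory limit are arbitrary example choices, not anything prescribed by the technology.

```python
# A minimal sketch of running a command in an isolated container, assuming the
# `docker` SDK (docker-py) is installed and a Docker daemon is running locally.
import docker

def run_isolated(command: list[str]) -> str:
    client = docker.from_env()          # connect to the local Docker daemon
    # Each run gets its own filesystem, process namespace and resource limits,
    # while sharing the host's OS kernel with every other container.
    output = client.containers.run(
        image="alpine",                 # small base image (example choice)
        command=command,
        mem_limit="128m",               # per-container resource cap (assumed value)
        remove=True,                    # clean up the container when it exits
    )
    return output.decode()

if __name__ == "__main__":
    print(run_isolated(["echo", "hello from an isolated user space"]))
```

Orchestration tools such as Kubernetes or Docker Swarm extend this same pattern, starting and stopping many such containers across a cluster of machines.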
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**KID** KID: KID (an acronym standing for Kindle Imagine Develop) was a Japan-based company specializing in porting and developing bishōjo games. History: KID was founded in 1988, with capital of 160 million yen. In the early 1990s, it served primarily as a contract developer. Notable titles from this era include Burai Fighter, Low G Man, G.I. Joe, Isolated Warrior and Recca. In 1997, it began porting PC games to games consoles. In 1999, it released an original title called Memories Off on PlayStation, which later became its first well-known series. In 2000, it released the original title Never 7: The End of Infinity, the first in the Infinity series. KID also created the popular underground PlayStation game Board Game Top Shop. In 2005, KID became a sponsor of the Japanese drama series Densha Otoko. History: The company declared bankruptcy in 2006. However, in February 2007 it was announced that KID's intellectual properties had been acquired by the CyberFront Corporation, which would continue all unfinished projects until its closure in December 2013. Kaga Create then bought CyberFront Corporation and owned the rights to KID's works. After Kaga Create closed down, 5pb. bought Cyberfront's assets which also included all of KID's works. Works: Infinity series Infinity Cure Never 7: The End of Infinity Ever 17: The Out of Infinity Remember 11: The Age of Infinity 12Riven: The Psi-Climinal of Integral Memories Off series Memories Off Memories Off 2nd You that became a Memory ~Memories Off~ Memories Off ~And then~ Memories Off ~And Then Again~ Memories Off 5: Togireta Film Memories Off #5 encore Your Memories Off: Girl's Style Other Blocken (Arcade) Armored Police Metal Jack (Game Boy) Kingyo Chūihō! 2 Gyopichan o Sagase! (Game Boy) Battle Grand Prix (SNES) Jumpin' Derby (Super Famicom) Super Bowling (SNES) Super Jinsei Game (series) (2 & 3) (Super Famicom) Chibi Maruko-chan: Okozukai Daisakusen (Game Boy, 1990) Chibi Maruko-Chan 2: Deluxe Maruko World (Game Boy, 1991) Chibi Maruko-chan 3: Mezase! Game Taishou no Maki (Game Boy, 1992) Chibi Maruko-chan 4: Korega Nihon Dayo Ouji Sama (Game Boy, 1992) Chibi Maruko-Chan: Maruko Deluxe Gekijou (Game Boy, 1995) Burai Fighter Low G Man: The Low Gravity Man Bananan Ouji no Daibouken Kick Master G.I. Joe G.I. Joe: The Atlantis Factor Rock 'n' Ball Sumo Fighter: Tōkaidō Basho UFO Kamen Yakisoban Sutobasu Yarō Shō: 3 on 3 Basketball Mini 4WD Shining Scorpion Let's & Go!! Pepsiman Doki! Doki! Yūenchi: Crazy Land Daisakusen (Famicom) Ai Yori Aoshi (PS2 and PC adaptation) Ryu-Koku (final game released before the bankruptcy) Separate Hearts Ski Air Mix Recca (Famicom Shooter created for the "Summer Carnival '92" gaming tournament) We Are* Close to: Inori no Oka Yume no Tsubasa Max Warrior: Wakusei Kaigenrei Kaitou Apricot (PlayStation) Kiss yori... (Sega Saturn and WonderSwan) 6 Inch my Darling (Sega Saturn) Dokomademo Aoku... (consumer port of TopCat's Hateshinaku Aoi, Kono Sora no Shita de...) Kagayaku Kisetsu e (consumer port of Tactics' One: Kagayaku Kisetsu e) She'sn Screen (consumer port of Ather's Campus ~Sakura no Mau Naka de~) Emmyrea (consumer port of Penguin Soft's Nemureru Mori no Ohime-sama) My Merry May Iris Flamberge no Seirei (consumer port of Nikukyuu's Mei King) Prism Heart (Dreamcast) Oujisama Lv1 (PlayStation) Boku to Bokura no Natsu (Dreamcast) Monochrome (PlayStation 2 and PSP) Hōkago Ren'ai Club – Koi no Etude (Sega Saturn) Subete ga F ni Naru (PlayStation)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Staircase voltammetry** Staircase voltammetry: Staircase voltammetry is a derivative of linear sweep voltammetry. In linear sweep voltammetry the current at a working electrode is measured while the potential between the working electrode and a reference electrode is swept linearly in time. Oxidation or reduction of species is registered as a peak or trough in the current signal at the potential at which the species begins to be oxidized or reduced. In staircase voltammetry, the potential sweep is a series of stair steps. The current is measured at the end of each potential change, right before the next, so that the contribution to the current signal from the capacitive charging current is reduced.
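The distinguishing detail, sampling the current at the end of each potential step so that the capacitive charging transient has largely decayed, can be illustrated with a small simulation. All parameters below (sweep window, step size and duration, cell time constant, and the placeholder sigmoidal "faradaic" term) are assumed toy values, not a model of any particular cell.

```python
# Illustrative simulation of staircase sampling, not instrument control.
import numpy as np

E_start, E_end = -0.2, 0.6   # V, sweep window (assumed)
step_height = 0.005          # V per step
step_time = 0.02             # s per step
rc = 0.002                   # s, RC time constant of the cell (assumed)

n_steps = int(round((E_end - E_start) / step_height))
potentials = E_start + step_height * np.arange(1, n_steps + 1)

def charging_current(t):
    # capacitive spike right after each step, decaying with the cell's RC constant
    return (step_height / 100.0) * np.exp(-t / rc)   # assumes ~100 ohm resistance

def faradaic_current(E):
    # placeholder sigmoid standing in for oxidation of some species near 0.2 V
    return 1e-6 / (1.0 + np.exp(-(E - 0.2) / 0.025))

# Sample at the end of each step (staircase voltammetry) rather than just after it.
i_end_of_step = charging_current(step_time) + faradaic_current(potentials)
print(f"charging term just after a step: {charging_current(1e-4):.2e} A")
print(f"charging term at end of a step:  {charging_current(step_time):.2e} A")
print(f"sampled current at {potentials[-1]:.2f} V: {i_end_of_step[-1]:.2e} A")
```

With these assumed values the charging contribution falls by several orders of magnitude by the end of each step, leaving the sampled signal dominated by the faradaic term.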
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hall-Riggs syndrome** Hall-Riggs syndrome: Hall-Riggs syndrome is a rare genetic disorder that causes neurological issues and birth defects. People with Hall-Riggs syndrome usually have skeletal dysplasia, facial deformities, and intellectual disabilities. Only 8 cases from 2 families worldwide have been described in the medical literature. It is an autosomal recessive genetic disorder, meaning both parents must carry the gene in order for their offspring to be affected. Common characteristics of Hall-Riggs syndrome include: spondyloepimetaphyseal dysplasia; short stature; shortened limbs, fingers, and toes; microcephaly; scoliosis; seizures; a widened nasal bridge and mouth; other dysmorphic facial features; intellectual disabilities; and recurrent vomiting episodes. Cases: 1975: Hall and Riggs describe 6 out of 15 children born to consanguineous parents. The children had severe intellectual deficits, microcephaly, facial dysmorphisms consisting of nostril anteversion, a depressed nasal bridge and large lips, and progressive dysplasia of the skeletal system, including scoliosis, flattened femoral heads, shortened femoral necks, shortened proximal segments of the arms, growth delays, and epiphyseal flattening affecting the fingers and ankles. The children did not acquire speech even in adulthood. The parents of the 15 children were healthy, unaffected first cousins. Cases: 2000: Silengo and Rigardetto describe two Italian siblings of the opposite sex born to healthy, unaffected non-consanguineous parents. The children had the same symptoms as the previously described family, alongside short stature and hypertelorbitism. Spondylometaphyseal dysplasia and mild epiphyseal changes were confirmed through radiographs. MRI findings included the presence of cavum vergae and multiple cysts in the septum pellucidum. EEGs came back abnormal. High-resolution karyotypes came back normal. The brother had a history of seizures and psychomotor instability and agitation. Other symptoms included brachydactyly type D, dorsal kyphosis, platyspondyly, enamel hypoplasia, coarse and thick hair, and feeding difficulties.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hanover bars** Hanover bars: Hanover bars, in one of the PAL television video formats, are an undesirable visual artifact in the reception of a television image. The name refers to the city of Hannover, in which the PAL system developer Telefunken Fernseh und Rundfunk GmbH was located. Hanover bars: The PAL system encodes color as YUV. The U (corresponding to B-Y) and V (corresponding to R-Y) signals carry the color information for a picture, with the phase of the V signal reversed (i.e. shifted through 180 degrees) on alternate lines (hence the name PAL, for Phase Alternating Line). This is done to cancel minor phase errors in the reception process. However, if gross errors occur, complementary errors from the V signal carry into the U signal, and thus visible stripes occur. Later PAL systems introduced alterations to ensure that Hanover bars do not occur, introducing a swinging burst to the color synchronization. Other PAL systems may handle this problem differently. Suppression of Hanover bars: To suppress Hanover bars, PAL color decoders use a delay line that repeats the chroma information from each previous line and blends it with the current line. This causes phase errors to cancel out, at the cost of vertical color resolution, and in early designs, also a loss of color saturation proportional to the phase error.
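The cancellation can be checked numerically. In the sketch below, chroma is modelled as a complex number U + jV, a constant differential phase error is applied on every line, and the V-axis inversion on alternate lines is represented by complex conjugation; all values are arbitrary test inputs, and this is only an idealisation of a delay-line decoder.

```python
# Idealised model of PAL delay-line averaging with a constant phase error.
import numpy as np

def received_chroma(true_uv: complex, err_deg: float, switched_line: bool) -> complex:
    err = np.deg2rad(err_deg)
    tx = np.conj(true_uv) if switched_line else true_uv   # V inverted on alternate lines
    rx = tx * np.exp(1j * err)                            # phase error picked up in the path
    return np.conj(rx) if switched_line else rx           # receiver undoes the V switch

true_uv = 0.3 + 0.2j          # arbitrary test colour
err_deg = 20.0                # gross phase error

line_a = received_chroma(true_uv, err_deg, switched_line=False)  # error appears as +20 deg
line_b = received_chroma(true_uv, err_deg, switched_line=True)   # error appears as -20 deg

averaged = 0.5 * (line_a + line_b)    # delay-line blend of the two adjacent lines

print("true hue (deg):    ", np.rad2deg(np.angle(true_uv)))
print("single-line hue:   ", np.rad2deg(np.angle(line_a)))    # shifted hue -> Hanover bars
print("delay-line hue:    ", np.rad2deg(np.angle(averaged)))  # hue error cancelled
print("saturation factor: ", abs(averaged) / abs(true_uv))    # ~cos(20 deg), the saturation loss
```

The single-line value shows the alternating hue error that produces the visible stripes, while the averaged value recovers the correct hue with a saturation reduced by roughly the cosine of the phase error, matching the behaviour described above for early delay-line designs.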
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mastering (audio)** Mastering (audio): Mastering, a form of audio post production, is the process of preparing and transferring recorded audio from a source containing the final mix to a data storage device (the master), the source from which all copies will be produced (via methods such as pressing, duplication or replication). In recent years digital masters have become usual, although analog masters—such as audio tapes—are still being used by the manufacturing industry, particularly by a few engineers who specialize in analog mastering. Mastering requires critical listening; however, software tools exist to facilitate the process. Results depend upon the intent of the engineer, the skills of the engineer, the accuracy of the speaker monitors, and the listening environment. Mastering engineers often apply equalization and dynamic range compression in order to optimize sound translation on all playback systems. It is standard practice to make a copy of a master recording—known as a safety copy—in case the master is lost, damaged or stolen. History: Pre-1940s In the earliest days of the recording industry, all phases of the recording and mastering process were entirely achieved by mechanical processes. Performers sang and/or played into a large acoustic horn and the master recording was created by the direct transfer of acoustic energy from the diaphragm of the recording horn to the mastering lathe, typically located in an adjoining room. The cutting head, driven by the energy transferred from the horn, inscribed a modulated groove into the surface of a rotating cylinder or disc. These masters were usually made from either a soft metal alloy or from wax; this gave rise to the colloquial term waxing, referring to the cutting of a record. After the introduction of the microphone and electronic amplifier in the mid-1920s, the mastering process became electro-mechanical, and electrically driven mastering lathes came into use for cutting master discs (the cylinder format by then having been superseded). Until the introduction of tape recording, master recordings were almost always cut direct-to-disc. Only a small minority of recordings were mastered using previously recorded material sourced from other discs. History: Emergence of magnetic tape In the late 1940s, the recording industry was revolutionized by the introduction of magnetic tape. Magnetic tape was invented for recording sound by Fritz Pfleumer in 1928 in Germany, based on the invention of magnetic wire recording by Valdemar Poulsen in 1898. Not until the end of World War II could the technology be found outside Europe. The introduction of magnetic tape recording enabled master discs to be cut separately in time and space from the actual recording process. Although tape and other technical advances dramatically improved the audio quality of commercial recordings in the post-war years, the basic constraints of the electro-mechanical mastering process remained, and the inherent physical limitations of the main commercial recording media—the 78 rpm disc and later the 7-inch 45 rpm single and 33-1/3 rpm LP record—meant that the audio quality, dynamic range, and running time of master discs were still limited compared to later media such as the compact disc. History: Electro-mechanical mastering process From the 1950s until the advent of digital recording in the late 1970s, the mastering process typically went through several stages. 
Once the studio recording on multi-track tape was complete, a final mix was prepared and dubbed down to the master tape, usually either a single-track mono or two-track stereo tape. Prior to the cutting of the master disc, the master tape was often subjected to further electronic treatment by a specialist mastering engineer. History: After the advent of tape it was found that, especially for pop recordings, master recordings could be made so that the resulting record would sound better. This was done by making fine adjustments to the amplitude of sound at different frequency bands (equalization) prior to the cutting of the master disc. History: In large recording companies such as EMI, the mastering process was usually controlled by specialist staff technicians who were conservative in their work practices. These big companies were often reluctant to make changes to their recording and production processes. For example, EMI was very slow in taking up innovations in multi-track recording and did not install 8-track recorders in their Abbey Road Studios until the late 1960s, more than a decade after the first commercial 8-track recorders were installed by American independent studios. History: Digital technology In the 1990s, electro-mechanical processes were largely superseded by digital technology, with digital recordings stored on hard disk drives or digital tape and mastered to CD. The digital audio workstation (DAW) became common in many mastering facilities, allowing the off-line manipulation of recorded audio via a graphical user interface (GUI). Although many digital processing tools are common during mastering, it is also very common to use analog media and processing equipment for the mastering stage. Just as in other areas of audio, the benefits and drawbacks of digital technology compared to analog technology are still a matter for debate. However, in the field of audio mastering, the debate is usually over the use of digital versus analog signal processing rather than the use of digital technology for storage of audio. Digital systems have higher performance and allow mixing to be performed at lower maximum levels. When mixing at 24 bits with peaks between -3 and -10 dBFS on a mix, the mastering engineer has enough headroom to process and produce a final master. Mastering engineers recommend leaving enough headroom on the mix to avoid distortion. Reduction of dynamics by the mix or mastering engineer has resulted in a loudness war in commercial recordings. 
Vinyl LPs and cassettes have their own pre-duplication requirements for a finished master. Subsequently, it is rendered either to a physical medium, such as a CD-R or DVD-R, or to computer files, such as a Disc Description Protocol (DDP) file set or an ISO image. Regardless of what delivery method is chosen, the replicator factory will transfer the audio to a glass master that will generate metal stampers for replication. Process: The process of audio mastering varies depending on the specific needs of the audio to be processed. Mastering engineers need to examine the types of input media, the expectations of the source producer or recipient, and the limitations of the end medium, and process the subject accordingly. General rules of thumb can rarely be applied. Process: Steps of the process typically include the following: transferring the recorded audio tracks into the digital audio workstation (DAW); sequencing the separate songs or tracks as they will appear on the final release; adjusting the length of the silence between songs; processing or sweetening the audio to maximize the sound quality for the intended medium (e.g. applying specific EQ for vinyl); and transferring the audio to the final master format (CD-ROM, half-inch reel tape, PCM 1630 U-matic tape, etc.). Examples of possible actions taken during mastering: editing minor flaws; applying noise reduction to eliminate clicks, dropouts, hum and hiss; adjusting stereo width; equalizing audio across tracks for an optimized frequency distribution; adjusting volume; dynamic range compression or expansion; peak limiting; inserting ISRC codes and CD text; arranging tracks in their final sequential order; fading out the ending of each song; and dithering.
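As a small, concrete illustration of just one of these steps, the sketch below measures a mix's peak level in dBFS and trims it to a chosen headroom before further processing. The -6 dBFS target and the synthetic test signal are arbitrary assumptions within the commonly cited -3 to -10 dBFS window mentioned above, and a real mastering chain involves far more than this (EQ, compression, limiting, dither).

```python
# A minimal sketch of one mastering-prep step: peak measurement and headroom trim.
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Peak level relative to full scale, for float samples in [-1.0, 1.0]."""
    peak = np.max(np.abs(samples))
    return 20.0 * np.log10(peak) if peak > 0 else -np.inf

def trim_to_headroom(samples: np.ndarray, target_peak_dbfs: float = -6.0) -> np.ndarray:
    """Scale the mix so its peak lands at target_peak_dbfs (no limiting applied)."""
    gain_db = target_peak_dbfs - peak_dbfs(samples)
    return samples * (10.0 ** (gain_db / 20.0))

if __name__ == "__main__":
    # a made-up "mix": one second of a decaying 440 Hz tone at 44.1 kHz
    t = np.linspace(0, 1.0, 44100, endpoint=False)
    mix = 0.9 * np.sin(2 * np.pi * 440 * t) * np.exp(-t)
    print(f"peak before trim: {peak_dbfs(mix):.2f} dBFS")
    trimmed = trim_to_headroom(mix, target_peak_dbfs=-6.0)
    print(f"peak after trim:  {peak_dbfs(trimmed):.2f} dBFS")
```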
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vermilion border** Vermilion border: The vermilion border (sometimes spelled vermillion border), also called margin or zone, is the normally sharp demarcation between the lip and the adjacent normal skin. It represents the change in the epidermis from highly keratinized external skin to less keratinized internal skin. It has no sebaceous glands, sweat glands, or facial hair. It has a prominence on the face, creating a focus for cosmetics (it is where lipstick is sometimes applied) and is also a location for several skin diseases. Its functional properties, however, remain unknown. Structure: The lips are composed wholly of soft tissue. The skin of the face is thicker than the skin overlying the lips, where blood vessels are closer to the surface. As a consequence, the margin of the lips shows a transition between the thicker and thinner skin, represented by the vermilion border. It therefore has the appearance of a sharp line between the coloured edge of the lip and the adjoining skin. It has been described as a pale, white rolled border and also as being a red line. This fine line of pale skin accentuates the colour difference between the vermilion and normal skin. Along the upper lip, two adjacent elevations of the vermilion border form the Cupid's bow. Structure: Microanatomy The vermilion border represents the change in the epidermis from highly keratinized external skin to less keratinized internal skin. It has no sebaceous glands, sweat glands, or facial hair. There are two reasons that the border appears red in some people: The epithelium is thin and therefore the blood vessels are closer to the surface. This epithelium contains eleidin, which is transparent, and the blood vessels are near the surface of the papillary layer, revealing the "red blood cell" color. At the angles of the mouth, there are sebaceous glands, without hair follicles, which are called Fordyce spots. Clinical significance: The vermilion border is important in dentistry and oral pathology as a marker to detect disease, such as in actinic cheilitis. Associated diseases: Perioral dermatitis is a rash, typically around the mouth, that spares the vermilion border. Cheilitis glandularis may present with a burning sensation over the vermilion border. This chronic progressive condition is associated with thinning of the skin of the lips and ulceration. Infections may involve the vermilion border. Cold sores are one common infection. Impetigo is another. Skin cancer can also occur at the vermilion border. Fetal alcohol syndrome causes facial abnormalities which include a thin vermilion border with a smooth philtrum. Cosmetic appearance: Sunlight exposure can blur the junction between the vermilion border and the skin. Applying lip balm and sunscreen moisturizes the lips and protects them from sunlight. Surgery: A vermilionectomy (sometimes spelled vermillionectomy) is the surgical removal of the vermilion border. It is sometimes performed to treat carcinoma of the lip. Close attention is given when repairing any injury to the vermilion border. Even 1 mm of vermilion misalignment could be noticeable.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LSM5** LSM5: U6 snRNA-associated Sm-like protein LSm5 is a protein that in humans is encoded by the LSM5 gene. Sm-like proteins were identified in a variety of organisms based on sequence homology with the Sm protein family (see SNRPD2; MIM 601061). Sm-like proteins contain the Sm sequence motif, which consists of 2 regions separated by a linker of variable length that folds as a loop. The Sm-like proteins are thought to form a stable heteromer present in tri-snRNP particles, which are important for pre-mRNA splicing. [supplied by OMIM]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cutaneous B-cell lymphoma** Cutaneous B-cell lymphoma: Cutaneous B-cell lymphomas constitute a group of diseases that occur less commonly than cutaneous T-cell lymphoma, and are characterized histologically by B-cells that appear similar to those normally found in germinal centers of lymph nodes. Conditions included in this group are: primary cutaneous diffuse large B-cell lymphoma, leg type; primary cutaneous follicular lymphoma; primary cutaneous marginal zone lymphoma; intravascular large B-cell lymphoma; plasmacytoma; and plasmacytosis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Content engineering** Content engineering: Content engineering is a term applied to an engineering specialty dealing with the complexities around the use of content in computer-facilitated environments. Content engineering: Content authoring and production, content management, content modeling, content conversion, and content use and repurposing are all areas involving this practice. It is not a specialty with wide industry recognition and is often performed on an ad hoc basis by members of software development or content production or marketing staff, but is beginning to be recognized as a necessary function in any complex content-centric project involving both content production as well as software system development mainly involving content management systems (CMS) or digital experience platforms (DXP). Content engineering: Content engineering tends to bridge the gap between groups involved in the production of content (publishing and editorial staff, marketing, sales, human resources) and more technologically oriented departments such as software development, or IT that put this content to use in web or other software-based environments, and requires an understanding of the issues and processes of both sides. Typically, content engineering involves extensive use of embedded XML technologies, XML being the most widespread language for representing structured content. Content management systems are a key technology often used in the practice of content engineering. Definition: Content engineering is the practice of organizing the shape and structure of content by deploying content and metadata models, in authoring and publishing processes in a manner that meets the requirements of an organization’s Content Strategy, and its implementation through the use of technology such as CMS, XML, schema markup, artificial intelligence, APIs and others. Purpose and goal: In very general terms, content engineering practices aim to maximize the ROI of content through content reuse and improving efficiency of content marketing, content operations, content strategy. Purpose and goal: Content engineering can help address content challenges that fairly typical organizations face: Siloed content supply chains Duplicate content in a myriad of formats Inefficient content authoring workflows Chunky, unstructured content Outdated technology Technology in place does not match needs Inability to reuse content across channels (multi-channel content) Metadata and schema are not used Lack of standards for metadata Lack of findability of content for internal and external use Poor SEO performance Inability to implement personalization The role of a content engineer: Content engineers bridge the divide between content strategists and producers and the developers and content managers who publish and distribute content. But rather than simply wedging themselves between these players, content engineers help define and facilitate the content structure during the entire content strategy, production and distribution cycle from beginning to end. With equal parts business and technology savvy, the content engineer does not see content as a static and finished piece. Rather, he or she looks at the value of the content and how it can best be adapted and personalized to serve customers and emerging content platforms, technologies, and opportunities. 
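To make the idea of a content model concrete, here is a toy sketch using Python dataclasses. It only illustrates the general notion of structured, metadata-rich content described above; every field name is an invented example, not a standard schema or any particular CMS's data model.

```python
# A toy content model: content broken into addressable, typed parts with metadata,
# so each part can be reused or retargeted independently of presentation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metadata:
    author: str
    tags: List[str] = field(default_factory=list)
    locale: str = "en-US"

@dataclass
class Article:
    slug: str
    headline: str
    summary: str
    body_sections: List[str]   # modular body, not one opaque blob
    meta: Metadata

doc = Article(
    slug="why-structured-content",
    headline="Why structured content matters",
    summary="Short teaser for listings and search results.",
    body_sections=["Intro paragraph...", "Main argument...", "Call to action..."],
    meta=Metadata(author="Jane Doe", tags=["content-engineering", "cms"]),
)
print(doc.headline, "-", ", ".join(doc.meta.tags))
```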
The role of a content engineer: Create customer experience Content marketing suffers from two fundamental limitations that constrain the true power and potential that a great content marketing plan can bring to a business' bottom line: Content relevance: how to make content more relevant and personalized to their audiences. The marketer and content strategist direct the customer experience itself, and the content engineer makes it happen with content structure, schema, metadata, microdata, taxonomy, and CMS topology. The role of a content engineer: Content agility: Marketers who are burdened with one-size-fits-all content remain stuck managing their content rather than their customers' experience. Content engineers give marketers the "super powers" to move content-powered experiences across interfaces and personalization variants. Break down barriers Empower content strategists: Content engineers work with content strategists by helping them connect content not as a fixed message, but as a modular construct which can be channeled and manipulated. Enable content producers: A content engineer will work with a content producer by helping to find new sources of content and ways the content can be combined and presented. Guide and free developers: The content engineer helps translate marketing strategy into clear technical needs and functions developers can build into content management systems Enhance content management: Develop content structures that make it easier for content writers and content managers to author to a single, very usable, interface for even complex content types that might contain dozens of elements. Engineer content for success: Content engineers help all members of a marketing team work more smoothly, with the support and structures needed to get the most out of the content they produce. Sources: "What is Content Engineering?". www.simplea.com "Content Engineer Roles and Responsibilities". www.stc.org - Society of Technical Communication, 2020 "Is Your Content Plan Equipped for Content Engineering?". www.contentmarketinginstitute.com "John Collins: Content Engineering – Episode 106". wwwellessmedia.com "I am a Content Engineer". www.everypageispageone.com
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reaction inhibitor** Reaction inhibitor: A reaction inhibitor is a substance that decreases the rate of, or prevents, a chemical reaction. A catalyst, in contrast, is a substance that increases the rate of a chemical reaction. Examples: Added acetanilide slows the decomposition of drug-store hydrogen peroxide solution, inhibiting the reaction 2H2O2 → 2H2O + O2, which is catalyzed by heat, light, and impurities. Inhibition of a catalyst: An inhibitor can reduce the effectiveness of a catalyst in a catalysed reaction (either a non-biological catalyst or an enzyme). For example, if a compound is so similar to (one of) the reactants that it can bind to the active site of a catalyst but does not undergo a catalytic reaction, then that catalyst molecule cannot perform its job because the active site is occupied. When the inhibitor is released, the catalyst is again available for reaction. Inhibition and catalyst poisoning: Inhibition should be distinguished from catalyst poisoning. An inhibitor only hinders the working of a catalyst without changing it, whilst in catalyst poisoning the catalyst undergoes a chemical reaction that is irreversible in the environment in question (the active catalyst may only be regained by a separate process). Potency: Index inhibitors (often simply called inhibitors) predictably inhibit metabolism via a given pathway and are commonly used in prospective clinical drug-drug interaction studies. Inhibitors of CYP enzymes can be classified by their potency as follows: A strong inhibitor is one that causes at least a 5-fold increase in plasma AUC values, or more than an 80% decrease in the clearance of substrates (clearance more than 5 times slower than usual). Potency: A moderate inhibitor is one that causes at least a 2-fold increase in plasma AUC values, or a 50-80% decrease in the clearance of substrates (clearance 2 to 5 times slower than usual). A weak inhibitor is one that causes at least a 1.25-fold but less than 2-fold increase in plasma AUC values, or a 20-50% decrease in the clearance of substrates (clearance 1.25 to 2 times slower than usual).
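For a fixed dose, the plasma AUC of a substrate scales inversely with its clearance, so the fold-increase in AUC equals the factor by which clearance slows down. The small helper below applies the thresholds above; the function name and return strings are purely illustrative, not part of any regulatory toolkit.

```python
# Classify CYP-inhibitor potency from the fold-increase in a substrate's plasma AUC.
def classify_inhibitor(auc_fold_increase: float) -> str:
    if auc_fold_increase >= 5.0:
        return "strong"     # >= 5-fold AUC increase (> 80% drop in clearance)
    if auc_fold_increase >= 2.0:
        return "moderate"   # 2- to 5-fold AUC increase (50-80% drop in clearance)
    if auc_fold_increase >= 1.25:
        return "weak"       # 1.25- to 2-fold AUC increase (20-50% drop in clearance)
    return "not classified as an inhibitor"

for fold in (1.1, 1.5, 3.0, 8.0):
    print(f"{fold:>4}x AUC increase -> {classify_inhibitor(fold)}")
```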
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flexible barge** Flexible barge: A flexible barge is a non-rigid barge usually made of fabric. History: This test ended on April 29 when the fabric of one of the two bags under tow developed a tear. There are various reasons why it has been difficult to gain support for demonstrating the viability of waterbag technology in California and around the world. A novel, Water, War, and Peace, has been completed that details the solutions waterbag technology offers to the complex political problems surrounding water issues throughout the Middle East, the United States, and the world. History: The Norwegian company Nordic Water Supply developed a 10,800 m3 bag in 1997 under an agreement with the Turkish government to transport freshwater to Northern Cyprus. Within two years at least 7 million m3 of water had to be delivered annually at a cost of €2.7M per year, with volumes growing over time, but the actual transport only amounted to 4 million m3 in four years and the contract was discontinued by Turkish authorities. As a result, NWS went out of business and was de-listed from the Oslo Stock Exchange in 2003. NWS's waterbag technology was acquired by the Monohakobi Institute of Technology in Japan. History: The REFRESH waterbag was developed by a consortium of companies and research institutes from Greece, Spain, Italy, Turkey and the Czech Republic within two European FP7 projects, REFRESH (running from 2010 to 2012) and the follow-up XXL-REFRESH (running from 2013 to 2015). The first project was focused on validation of the modular waterbag concept; it developed a small-scale prototype of 200 m3 capacity, tested in Greece in 2012. The second project was focused on scale-up and partial redesign of the REFRESH system. At the end of the second project the REFRESH waterbag concept reached commercial scale, and a 2,500 m3 system made of five 500 m3 modules was tested offshore of Spain in 2015. The waterbag was 60 m long. History: The REFRESH concept differs from earlier waterbag concepts, which were based on huge monolithic containers (such as the one proposed by Nordic Water Supply) or "trains" of smaller containers, each one sealed in itself (as in the Spragg bag). The REFRESH waterbag is made of a series of modules, each one being a cylinder open at both bases, joined by watertight zippers. This makes it possible to perform all "dry" operations on the ground at the level of single modules, overcoming the handling problems of monolithic containers and improving the behaviour in navigation compared with the "trains" of connected bags. Technology: Zipper Zippers play an important part in extending the capacity of the waterbag beyond what is practically achievable with a single textile piece. The Spragg and REFRESH concepts both feature zippers prominently, albeit with a fundamental difference in their function. In the REFRESH design, the container itself is assembled on shore starting from planar cuts of fabric. Zippers run all along the perimeter of the fabric and make it possible to join an indefinite number of modules. Since each module is not closed by itself, the zippers need to be watertight in order to ensure that no seawater leaks in. Invention: The greater the volume of water that can be delivered per trip, the better the economics. The REFRESH scheme is enabled by a specialty zipper, again developed by Ziplast, that uses a completely different tooth engagement design able to keep the strength of the original "Spragg" zipper while adding watertightness. 
Tests performed by the Spanish research centre AIMPLAS have confirmed that the zipper is able to stay watertight even when in tension. Applications: Israeli President Shimon Peres has written a letter in support of implementing a demonstration of Spragg Bag technology in the Mediterranean Sea as a tool for helping to bring peace to the Middle East. In this letter President Peres states, "The draft of WATER, WAR AND PEACE written as a novel is in my view an original approach to highlight this grave problem and its solutions, that will pave the path to a better and more peaceful region. Your efforts to embark on a demonstration voyage to enlighten us all, both regarding the technological viability as well as cost, will surely contribute to meet the critical dilemma." This view is shared by the REFRESH consortium. Waterbags have been proposed for emergency use to link the Gulf Cooperation Council countries' desalination plants all along the Persian Gulf coast. Applications: Waterbags could be used to move water through the Sacramento River Delta following an earthquake and a catastrophic levee collapse that could cut off Southern California's water supply for up to two years or more. Sources: Barlow, Maude, Blue Gold: The Battle Against Corporate Theft of the World's Water, Earthscan, 2003, ISBN 1-84407-024-7 Fridell, Ron, Protecting Earth's Water Supply, Lerner Publications, 2008, ISBN 0-8225-7557-4 Gleick, Peter H.; The world's water: the biennial report on freshwater resources, Volume 1998, pp. 203-205, Spragg Waterbags, ISBN 1-55963-592-4 Lawrence Journal-World – April 27, 1996; Giant water bags proposed to quench a dry planet's thirst McCabe, Michael, San Francisco Chronicle, August 6, 1999; Full of Holes, or in the Bag Snitow, Alan, Thirst: fighting the corporate theft of our water, Publisher John Wiley and Sons, 2007, ISBN 0-7879-8458-2 Westneat, Danny, The San Diego Union – Tribune, San Diego, Calif.:Apr 28, 1996. p. A-3, He hopes water-bag idea will float. 'Fabric pipeline' could slake thirst worldwide, [1,2 Edition]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Screaming jelly babies** Screaming jelly babies: "Screaming Jelly Babies" (British English), also known as "Growling Gummy Bears" (American and Canadian English), is a classroom chemistry demonstration, variants of which are practised in schools around the world. It is often used at open evenings to demonstrate the more light-hearted side of secondary school science. The experiment shows the amount of energy there is in one piece of confectionery; jelly babies, or gummy bears, are often used for theatrics. Potassium chlorate, a strong oxidising agent, rapidly oxidises the sugar in the candy, causing it to burst into flames and producing a "screaming" sound as rapidly expanding gases are emitted from the test tube. The aroma of candy floss (cotton candy) is also given off. Researchers in Japan developed a new headset in December 2011 that triggers different sounds as wearers close their jaws when eating, which included the "heart-breaking" squeals of masticated jelly babies. Other carbohydrate- or hydrocarbon-containing substances can also be dropped into test tubes of molten chlorate, with similar results.
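Treating the confectionery's sugar as sucrose (a simplification; real sweets also contain glucose syrup, gelatin and water), one balanced overall equation for its oxidation by the molten chlorate is:

C12H22O11 + 8 KClO3 → 12 CO2 + 11 H2O + 8 KCl

The large volume of carbon dioxide and steam released per piece of sugar is what drives the rapid gas expansion responsible for the "screaming" sound.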
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ASCII Express** ASCII Express: ASCII Express is a telecommunications program, written for the Apple II series of computers. During the 1980s, when the use of bulletin board systems (BBS) and telecommunications in general was not as widespread as it is today, ASCII Express (or "AE" for short) was the program of choice for many telecommunication users. ASCII Express II: The first version of AE, known as ASCII Express II, was created by Bill Blue in 1980 and distributed by Southwestern Data Systems. AE II can be used on any Apple II that has DOS 3.x and one of a small number of modems available at the time, such as the Hayes Micromodem II. This version of the program was mostly used by telecommunicators to access paid BBSs like THE SOURCE and CompuServe, as well as free BBSs. The interface of AE II is menu-driven, with very few of the features that are now expected of a modern telecom program, such as terminal emulation and multi-file transfer protocols like YMODEM and ZMODEM. ASCII Express The Professional: By 1982, ASCII Express II had ceased development, and was replaced by a complete rewrite called ASCII Express "The Professional", also known as "ASCII Express Professional" or by its much shorter name "AE Pro". This version was a collaboration between Bill Blue and Mark Robbins. AE Pro was a command-line driven telecom program packed with many features lacking in its predecessor, including scripting, YMODEM and ZMODEM, terminal emulation, and support for Apple ProDOS 8. AE Pro can also be used as a pseudo-BBS when configured as a host, allowing a user to dial in and exchange files. This type of system came to be known as an AE line. ASCII Express The Professional: Earlier versions of AE Pro were distributed by Roger Wagner of Southwestern Data Systems, and later by United Software Industries (founded by Mark Robbins, Bill Blue and others). Greg Schaefer converted AE Pro from Apple DOS 3.3 to Apple ProDOS in an afternoon and received US$5000 for his efforts. ASCII Express The Professional: In 1984 Bill Blue and Joe Holt ported AE Pro to MS-DOS and 8086 assembly language. In 1985 Joe Holt and Greg Schaefer rewrote AE Pro for the Apple II taking advantage of the platform's new mouse and MouseText features. It also featured advanced scripting and a full-featured mouse-based text editor. This product was released as MouseTalk. AE Pro and MouseTalk were soon overshadowed by ProTERM, a telecom product that utilizes many of the advanced features of the Apple IIe and IIc, such as 65C02 opcodes, use of the mouse, and macros. Peer to peer file sharing: The early 1980s was the period when modding was becoming very active throughout the world. Hundreds of Apple II-based BBSs popped up, most of them used only as message boards. With the aid of free Apple II hacking software like Dalton's Disk Disintegrator (DDD), computer users were able to take an un-protected floppy disk, compress it into multiple files, then transmit those files to another user. This was actually one of several origins of what is known today as peer-to-peer file transfers. Peer to peer file sharing: While other Apple II-based telecom programs, such as DiskFur and CatFur, allowed for complete disk and file transfers, there was a need for a portal concept - one that is hosted using a BBS as its entry point. This way, a community including software enthusiasts and those who trade in unlicensed software could collaborate as well as exchange software. 
Peer to peer file sharing: AE Pro was at the time the only telecom program that was accessible, via an undocumented hack, from virtually any other BBS software, such as GBBS and Networks II, among other programs. This allowed sysops to control access to the AE lines via user accounts. With many of the users phreaking their way into AE lines, these portals allowed international warez communities to develop. AE knock-offs were also developed, including PAE (Pseudo Ascii Express--"Written by a Pirate for Pirates") and PAE ProDOS, both written as free add-ons to GBBS. Unlike AE, the source code was freely available for these add-ons. Celerity BBS, a popular MS-DOS-based BBS from the 1990s, had a "CAE" (Celerity Ascii Express) mode which dropped a caller into a no-user-record file transfer system. Reception: II Computing listed ASCII Express Professional tenth on the magazine's list of top Apple II non-game, non-educational software as of late 1985, based on sales and market-share data.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cavum Vergae** Cavum Vergae: The cavum Vergae is a posterior extension of the cavum septi pellucidi, an anomaly that is found in a small percentage of human brains. It was first described by Andrea Verga.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CDC Cyber** CDC Cyber: The CDC Cyber range of mainframe-class supercomputers were the primary products of Control Data Corporation (CDC) during the 1970s and 1980s. In their day, they were the computer architecture of choice for scientific and mathematically intensive computing. They were used for modeling fluid flow, material science stress analysis, electrochemical machining analysis, probabilistic analysis, energy and academic computing, radiation shielding modeling, and other applications. The lineup also included the Cyber 18 and Cyber 1000 minicomputers. Like their predecessor, the CDC 6600, they were unusual in using the ones' complement binary representation. Models: The Cyber line included five different series of computers: The 70 and 170 series based on the architecture of the CDC 6600 and CDC 7600 supercomputers, respectively The 200 series based on the CDC STAR-100 - released in the 1970s. Models: The 180 series developed by a team in Canada - released in the 1980s (after the 200 series) The Cyberplus or Advanced Flexible Processor (AFP) The Cyber 18 minicomputer based on the CDC 1700Primarily aimed at large office applications instead of the traditional supercomputer tasks, some of the Cyber machines nevertheless included basic vector instructions for added performance in traditional CDC roles. Models: Cyber 70 and 170 series The Cyber 70 and 170 architectures were successors to the earlier CDC 6600 and CDC 7600 series and therefore shared almost all of the earlier architecture's characteristics. The Cyber-70 series is a minor upgrade from the earlier systems. The Cyber-73 was largely the same hardware as the CDC 6400 - with the addition of a Compare and Move Unit (CMU). The CMU instructions speeded up comparison and moving of non-word aligned 6-bit character data. The Cyber-73 could be configured with either one or two CPUs. The dual CPU version replaced the CDC 6500. As was the case with the CDC 6200, CDC also offered a Cyber-72. The Cyber-72 had identical hardware to a Cyber-73, but added additional clock cycles to each instruction to slow it down. This allowed CDC to offer a lower performance version at a lower price point without the need to develop new hardware. It could also be delivered with dual CPUs. The Cyber 74 was an updated version of the CDC 6600. The Cyber 76 was essentially a renamed CDC 7600. Neither the Cyber-74 nor the Cyber-76 had CMU instructions. Models: The Cyber-170 series represented CDCs move from discrete electronic components and core memory to integrated circuits and semiconductor memory. The 172, 173, and 174 use integrated circuits and semiconductor memory whereas the 175 uses high-speed discrete transistors. The Cyber-170/700 series is a late-1970s refresh of the Cyber-170 line. Models: The central processor (CPU) and central memory (CM) operated in units of 60-bit words. In CDC lingo, the term "byte" referred to 12-bit entities (which coincided with the word size used by the peripheral processors). Characters were six bits, operation codes were six bits, and central memory addresses were 18 bits. Central processor instructions were either 15 bits or 30 bits. Models: The 18-bit addressing inherent to the Cyber 170 series imposed a limit of 262,144 (256K) words of main memory, which is semiconductor memory in this series. The central processor has no I/O instructions, relying upon the peripheral processor (PP) units to do I/O. 
Models: A Cyber 170-series system consists of one or two CPUs that run at either 25 or 40 MHz, and is equipped with 10, 14, 17, or 20 peripheral processors (PP), and up to 24 high-performance channels for high-speed I/O. Due to the relatively slow memory reference times of the CPU (in some models, memory reference instructions were slower than floating-point divides), the higher-end CPUs (e.g., Cyber-74, Cyber-76, Cyber-175, and Cyber-176) are equipped with eight or twelve words of high-speed memory used as an instruction cache. Any loop that fit into the cache (which is usually called in-stack) runs very fast, without referencing main memory for instruction fetch. The lower-end models do not contain an instruction stack. However, since up to four instructions are packed into each 60-bit word, some degree of prefetching is inherent in the design. Models: As with predecessor systems, the Cyber 170 series has eight 18-bit address registers (A0 through A7), eight 18-bit index registers (B0 through B7), and eight 60-bit operand registers (X0 through X7). Seven of the A registers are tied to their corresponding X register. Setting A1 through A5 reads that address and fetches it into the corresponding X1 through X5 register. Likewise, setting register A6 or A7 writes the corresponding X6 or X7 register to central memory at the address written to the A register. A0 is effectively a scratch register. Models: The higher-end CPUs consisted of multiple functional units (e.g., shift, increment, floating add) which allowed some degree of parallel execution of instructions. This parallelism allows assembly programmers to minimize the effects of the system's slow memory fetch time by pre-fetching data from central memory well before that data is needed. By interleaving independent instructions between the memory fetch instruction and the instructions manipulating the fetched operand, the time occupied by the memory fetch can be used for other computation. With this technique, coupled with the handcrafting of tight loops that fit within the instruction stack, a skilled Cyber assembly programmer can write extremely efficient code that makes the most of the power of the hardware. Models: The peripheral processor subsystem uses a technique known as barrel and slot to share the execution unit; each PP had its own memory and registers, but the processor (the slot) itself executed one instruction from each PP in turn (the barrel). This is a crude form of hardware multiprogramming. The peripheral processors have 4096 bytes of 12-bit memory words and an 18-bit accumulator register. Each PP has access to all I/O channels and all of the system's central memory (CM) in addition to the PP's own memory. The PP instruction set lacks, for example, extensive arithmetic capabilities and does not run user code; the peripheral processor subsystem's purpose is to process I/O and thereby free the more powerful central processor unit(s) to running user computations. Models: A feature of the lower Cyber CPUs is the Compare Move Unit (CMU). It provides four additional instructions intended to aid text processing applications. In an unusual departure from the rest of the 15- and 30-bit instructions, these are 60-bit instructions (three actually use all 60 bits, the other use 30 bits, but its alignment requires 60 bits to be used). The instructions are: move a short string, move a long string, compare strings, and compare a collated string. They operate on six-bit fields (numbered 1 through 10) in central memory. 
For example, a single instruction can specify "move the 72 character string starting at word 1000 character 3 to location 2000 character 9". The CMU hardware is not included in the higher-end Cyber CPUs, because hand coded loops could run as fast or faster than the CMU instructions. Models: Later systems typically run CDC's NOS (Network Operating System). Version 1 of NOS continued to be updated until about 1981; NOS version 2 was released early 1982. Besides NOS, the only other operating systems commonly used on the 170 series was NOS/BE or its predecessor SCOPE, a product of CDC's Sunnyvale division. These operating systems provide time-sharing of batch and interactive applications. The predecessor to NOS was Kronos which was in common use up until 1975 or so. Due to the strong dependency of developed applications on the particular installation's character set, many installations chose to run the older operating systems rather than convert their applications. Other installations would patch newer versions of the operating system to use the older character set to maintain application compatibility. Models: Cyber 180 series Cyber 180 development began in the Advanced Systems Laboratory, a joint CDC/NCR development venture started in 1973 and located in Escondido, California. The machine family was originally called Integrated Product Line (IPL) and was intended to be a virtual memory replacement for the NCR 6150 and CDC Cyber 70 product lines. The IPL system was also called the Cyber 80 in development documents. The Software Writer's Language (SWL), a high-level Pascal-like language, was developed for the project with the intent that all languages and the operating system (IPLOS) were going to be written in SWL. SWL was later renamed PASCAL-X and eventually became Cybil. The joint venture was abandoned in 1976, with CDC continuing system development and renaming the Cyber 80 as Cyber 180. The first machines of the series were announced in 1982 and the product announcement for the NOS/VE operating system occurred in 1983. Models: As the computing world standardized to an eight-bit byte size, CDC customers started pushing for the Cyber machines to do the same. The result was a new series of systems that could operate in both 60- and 64-bit modes. The 64-bit operating system was called NOS/VE, and supported the virtual memory capabilities of the hardware. The older 60-bit operating systems, NOS and NOS/BE, could run in a special address space for compatibility with the older systems. Models: The true 180-mode machines are microcoded processors that can support both instruction sets simultaneously. Their hardware is completely different from the earlier 6000/70/170 machines. The small 170-mode exchange package was mapped into the much larger 180-mode exchange package; within the 180-mode exchange package, there is a virtual machine identifier (VMID) that determines whether the 8/16/64-bit two's complement 180 instruction set or the 12/60-bit one's complement 170 instruction set is executed. Models: There were three true 180s in the initial lineup, codenamed P1, P2, P3. P2 and P3 were larger water-cooled designs. The P2 was designed in Mississauga, Ontario, by the same team that later designed the smaller P1, and the P3 was designed in Arden Hills, Minnesota. The P1 was a novel air-cooled, 60-board cabinet designed by a group in Mississauga; the P1 ran on 60 Hz current (no motor-generator sets needed). A fourth high-end 180 model 990 (codenamed THETA) was also under development in Arden Hills. 
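Returning to the CMU string-move example above, the sketch below models how 6-bit character positions map into 60-bit central-memory words. It is only an illustration of the addressing arithmetic: character position 1 is assumed to occupy the most significant 6 bits of a word, and no attempt is made to reproduce the actual CMU instruction encodings or timing.

```python
# Illustrative model of CMU-style 6-bit character addressing in 60-bit words.
CHARS_PER_WORD = 10

def get_char(memory: list[int], word: int, pos: int) -> int:
    """Read the 6-bit field at (word, pos), pos in 1..10 (1 = most significant, assumed)."""
    shift = 6 * (CHARS_PER_WORD - pos)
    return (memory[word] >> shift) & 0o77

def set_char(memory: list[int], word: int, pos: int, value: int) -> None:
    shift = 6 * (CHARS_PER_WORD - pos)
    memory[word] = (memory[word] & ~(0o77 << shift)) | ((value & 0o77) << shift)

def cmu_move(memory: list[int], src_word: int, src_pos: int,
             dst_word: int, dst_pos: int, length: int) -> None:
    """Move `length` 6-bit characters, like the 72-character move described above."""
    for i in range(length):
        sw, sp = divmod(src_pos - 1 + i, CHARS_PER_WORD)
        dw, dp = divmod(dst_pos - 1 + i, CHARS_PER_WORD)
        set_char(memory, dst_word + dw, dp + 1,
                 get_char(memory, src_word + sw, sp + 1))

# toy demonstration: fill a 72-character source string and move it
mem = [0] * 3000
for i in range(72):
    w, p = divmod(3 - 1 + i, CHARS_PER_WORD)
    set_char(mem, 1000 + w, p + 1, i & 0o77)
cmu_move(mem, 1000, 3, 2000, 9, 72)
print(get_char(mem, 2000, 9), get_char(mem, 2000, 10), get_char(mem, 2001, 1))  # 0 1 2
```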
Models: The 180s were initially marketed as 170/8xx machines with no mention of the new 8/64-bit system inside. However, the primary control program is a 180-mode program known as Environmental Interface (EI). The 170 operating system (NOS) used a single, large, fixed page within the main memory. There were a few clues that an alert user could pick up on, such as the "building page tables" message that flashed on the operator's console at startup and deadstart panels with 16 (instead of 12) toggle switches per PP word on the P2 and P3. Models: The peripheral processors in the true 180s are always 16-bit machines with the sign bit determining whether a 16/64 bit or 12/60 bit PP instruction is being executed. The single word I/O instructions in the PPs are always 16-bit instructions, so at deadstart the PPs can set up the proper environment to run both EI plus NOS and the customer's existing 170-mode software. To hide this process from the customer, earlier in the 1980s CDC had ceased distribution of the source code for its Deadstart Diagnostic Sequence (DDS) package and turned it into the proprietary Common Tests & Initialization (CTI) package. Models: The initial 170/800 lineup was: 170/825 (P1), 170/835 (P2), 170/855 (P3), 170/865 and 170/875. The 825 was released initially after some delay loops had been added to its microcode; it seemed the design folks in Toronto had done a little too well and it was too close to the P2 in performance. The 865 and 875 models were revamped 170/760 heads (one or two processors with 6600/7600-style parallel functional units) with larger memories. The 865 used normal 170 memory; the 875 took its faster main processor memory from the Cyber 205 line. Models: A year or two after the initial release, CDC announced the 800-series' true capabilities to its customers, and the true 180s were relabeled as the 180/825 (P1), 180/835 (P2), and 180/855 (P3). At some point, the model 815 was introduced with the delayed microcode and the faster microcode was restored to the model 825. Eventually the THETA was released as the Cyber 990. Models: Cyber 200 series In 1974, CDC introduced the STAR architecture. The STAR is an entirely new 64-bit design with virtual memory and vector processing instructions added for high performance on a certain class of math tasks. The STAR's vector pipeline is a memory to memory pipe, which supports vector lengths of up to 65,536 elements. The latencies of the vector pipeline are very long, so peak speed is approached only when very long vectors are used. The scalar processor was deliberately simplified to provide room for the vector processor and is relatively slow in comparison to the CDC 7600. As such, the original STAR proved to be a great disappointment when it was released (see Amdahl's Law). Best estimates claim that three STAR-100 systems were delivered. Models: It appeared that all of the problems in the STAR were solvable. In the late 1970s, CDC addressed some of these issues with the Cyber 203. The new name kept with their new branding, and perhaps to distance itself from the STAR's failure. The Cyber 203 contains redesigned scalar processing and loosely coupled I/O design, but retains the STAR's vector pipeline. Best estimates claim that two Cyber 203s were delivered or upgraded from STAR-100s. Models: In 1980, the successor to the Cyber 203, the Cyber 205 was announced. The UK Meteorological Office at Bracknell, England was the first customer and they received their Cyber 205 in 1981. 
The Cyber 205 replaces the STAR vector pipeline with redesigned vector pipelines: both scalar and vector units use ECL gate-array ICs and are cooled with Freon. Cyber 205 systems were available with two or four vector pipelines, with the four-pipe version theoretically delivering 400 64-bit MFLOPS and 800 32-bit MFLOPS. These speeds are rarely seen in practice except with hand-crafted assembly language. The ECL gate-array ICs contain 168 logic gates each, with the clock-tree networks tuned by hand-crafted coax length adjustments. The instruction set would be considered V-CISC (very complex instruction set) among modern processors. Many specialized operations facilitate hardware searches and matrix mathematics, and special instructions enable decryption. Models: The original Cyber 205 was renamed the Cyber 205 Series 400 in 1983 when the Cyber 205 Series 600 was introduced. The Series 600 differs in memory technology and packaging but is otherwise the same. A single four-pipe Cyber 205 was installed; all other sites appear to be two-pipe installations, with the final count undetermined. The Cyber 205 architecture evolved into the ETA10 as the design team spun off into ETA Systems in September 1983. A final development was the Cyber 250, which was scheduled for release in 1987, priced at $20 million; it was later renamed the ETA30 after ETA Systems was absorbed back into CDC. Models: CDC CYBER 205 Architecture: ECL/LSI logic; 20 ns cycle time (50 MHz); up to 800 MFLOPS FP32 and 400 MFLOPS FP64; 1, 2, 4, 8, or 16 million 64-bit words of memory at 25.6 or 51.2 gigabits/second; 8 I/O ports with up to 16 × 200 Mbit/s each. Cyberplus or Advanced Flexible Processor (AFP) Each Cyberplus (also known as the Advanced Flexible Processor, AFP) is a 16-bit processor with optional 64-bit floating-point capabilities and has 256 K or 512 K words of 64-bit memory. The AFP was the successor to the Flexible Processor (FP), whose design started in 1972 under black-project circumstances, targeted at processing radar and photo image data. The FP control unit had a hardware network for conditional microinstruction execution, with four mask registers and a condition-hold register; three bits in the microinstruction format selected among nearly 50 conditions for determining execution, including result sign and overflow, I/O conditions, and loop control. At least 21 Cyberplus multiprocessor installations were operational in 1986. These parallel processing systems include from 1 to 256 Cyberplus processors providing 250 MFLOPS each, which are connected to an existing Cyber system via a direct memory interconnect architecture (MIA); this was available under NOS 2.2 for the Cyber 170/835, 845, 855 and 180/990 models. Models: Physically, each Cyberplus processor unit was of typical mainframe module size, similar to the Cyber 180 systems, with the exact width dependent on whether the optional FPU was installed, and weighed approximately 1 tonne. Software bundled with the Cyberplus included system software, a FORTRAN cross compiler, MICA (Machine Instruction Cross Assembler), a Load File Builder utility, ECHOS (a simulator), a debug facility, a dump utility, a dump analyzer utility, and maintenance software. Some sites using the Cyberplus were the University of Georgia and the Gesellschaft für Trendanalysen (GfTA, Association for Trend Analyses) in Germany. A fully configured 256-processor Cyberplus system would have a theoretical peak performance of 64 GFLOPS and weigh around 256 tonnes.
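The quoted aggregate figures follow directly from the per-unit ratings above. As a quick back-of-the-envelope check (illustrative only, not part of the original article), a short Python calculation confirms the arithmetic:

```python
# Sanity check of the theoretical peak-performance figures quoted above.
# The per-pipe figure is simply derived from the stated four-pipe FP64 total.

pipes = 4
fp64_total_mflops = 400                    # Cyber 205, four-pipe FP64 peak
print("Cyber 205 per pipe (FP64):", fp64_total_mflops / pipes, "MFLOPS")   # 100.0

cyberplus_mflops = 250                     # each Cyberplus processor, per the text
processors = 256                           # maximum configuration
print("256-processor Cyberplus:", processors * cyberplus_mflops / 1000, "GFLOPS")  # 64.0
```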
A nine-unit system was reputedly capable of performing comparative analysis (including pre-processing convolutions) on 1 megapixel images at a rate of one image pair per second. Models: Cyber 18 The Cyber 18 is a 16-bit minicomputer which was a successor to the CDC 1700 minicomputer. It was mostly used in real-time environments. One noteworthy application is as the basis of the 2550, a communications processor used by CDC 6000 series and Cyber 70/Cyber 170 mainframes. The 2550 was a product of CDC's Communications Systems Division, in Santa Ana, California (STAOPS). STAOPS also produced another communication processor (CP), used in networks hosted by IBM mainframes. This M1000 CP, later renamed the C1000, came from an acquisition of Marshall MDM Communications. A three-board set was added to the Cyber 18 to create the 2550. Models: The Cyber 18 was generally programmed in Pascal and assembly language; FORTRAN, BASIC, and RPG II were also available. Operating systems included RTOS (Real-Time Operating System), MSOS 5 (Mass Storage Operating System), and TIMESHARE 3 (time-sharing system). Models: "Cyber 18-17" was just a new name for the System 17, based on the 1784 processor. Other Cyber 18s (Cyber 18-05, 18-10, 18-20, and 18-30) had microprogrammable processors with up to 128K words of memory, four additional general registers, and an enhanced instruction set. The Cyber 18-30 had dual processors. A special version of the Cyber 18, known as the MP32, which was 32-bit instead of 16-bit, was created for the National Security Agency for cryptanalysis work. The MP32 had the FORTRAN math runtime library package built into its microcode. The Soviet Union tried to buy several of these systems, and they were being built when the U.S. Government cancelled the order. The parts for the MP32 were absorbed into Cyber 18 production. One of the uses of the Cyber 18 was monitoring the Alaskan Pipeline. Models: Cyber 1000 The M1000 / C1000, later renamed the Cyber 1000, was a message store-and-forward system used by the Federal Reserve System. A version of the Cyber 1000 with its hard drive removed was used by Bell Telephone. This was a RISC (reduced instruction set computer) processor. An improved version, known as the Cyber 1000-2, added the Line Termination Sub-System with 256 Zilog Z80 microprocessors. The Bell Operating Companies purchased large numbers of these systems in the mid-to-late 1980s for data communications. In the late 1980s the XN10 was released with an improved processor (a direct memory access instruction was added) as well as a size reduction from two cabinets to one. The XN20 was an improved version of the XN10 with a much smaller footprint. The Line Termination Sub-System was redesigned to use the improved Z180 microprocessor (the Buffer Controller card, Programmable Line Controller card and two Communication Line Interface cards were incorporated onto a single card). The XN20 was in the pre-production stage when the Communication Systems Division was shut down in 1992. Models: Jack Ralph was the chief architect of the Cyber 1000-2, XN-10 and XN-20 systems. Dan Nay was the chief engineer of the XN-20. Cyber 2000
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Corner tower** Corner tower: Corner towers were defensive towers built at the corners of castles or fortresses. Purpose: Two ideas have been advanced about the purpose or value of corner towers in medieval fortresses: The corners of a medieval fortress were weak points because they were easier to attack and more difficult to defend than the rest of the walls. Not only this, but enemy combatants who reached the tops of walls at the corners were protected at the point where the walls met, making it more difficult to repulse them. Fortress corner towers were therefore constructed to make up for this vulnerability. These towers also made it possible to provide enfilade fire against attacking forces along adjacent walls. This would oblige attackers to concentrate some of their force on the corner towers themselves, where they could be dealt with more effectively. Towers constructed at fortress corners were larger and taller than other towers. At the bottom of these towers were defences, such as ditches, fences, and sometimes advanced forts or bastions. Corner towers may be seen on the Wall of Philip II Augustus (Tour du coin (Louvre), Tour de Nesle, Tour Barbeau), on the Wall of Charles V (Tour du bois), in the city of Carcassonne, in the Château de Pierrefonds and in the fortress of the Bastille. In Architecture: In the architecture of non-defensive structures, such as churches and theater buildings, a corner tower is any tower that protrudes upwards from the corner of two walls and usually has no walls of its own below the roof, while other towers are usually attached to the building by one wall.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Onsen** Onsen: In Japan, onsen (温泉) are hot springs and the bathing facilities and traditional inns around them. There are approximately 25,000 hot spring sources throughout Japan, and approximately 3,000 onsen establishments use naturally hot water from these geothermally heated springs. Onsen: Onsen may be either outdoor baths (露天風呂 or 野天風呂, roten-buro / noten-buro) or indoor baths (内湯, uchiyu). Traditionally, onsen were located outdoors, although many inns have now built indoor bathing facilities as well. Nowadays, as most households have their own baths, the number of traditional public baths has decreased, but the popularity of sightseeing hot spring towns has increased. Baths may be either publicly run by a municipality or privately, often connecting to a lodging establishment such as a hotel, ryokan, or minshuku. The presence of an onsen is often indicated on signs and maps by the symbol ♨, the kanji 湯 (yu, meaning "hot water"), or the simpler phonetic hiragana character ゆ (yu). Definition: According to the Japanese Hot Springs Act (温泉法, Onsen Hō), onsen is defined as "hot water, mineral water, and water vapor or other gas (excluding natural gas of which the principal component is hydrocarbon) gushing from underground". The law states that mineralized hot spring water that feeds an onsen must be at least 24 °C/ 75 °F originating at a depth of at least 1.5 kilometers, and contain specified amounts of minerals such as sulphur, sodium, iron, or magnesium.When onsen water contains distinctive minerals or chemicals, establishments often display what type of water it is, in part because the specific minerals found in the water have been thought to provide health benefits. Types include sulfur onsen (硫黄泉, iō-sen), sodium chloride onsen (ナトリウム泉, natoriumu-sen), hydrogen carbonate onsen (炭酸泉, tansan-sen), and iron onsen (鉄泉, tetsu-sen). Mixed bathing: Traditionally, men and women bathed together at both onsen and sentō communal bathhouses, but gender separation has been enforced at most institutions since the opening of Japan to the West during the Meiji Restoration. Mixed bathing (混浴, kon'yoku) persists at some onsen in rural areas of Japan, which usually also provide the option of separate "women-only" baths or different hours for the two sexes. Children are usually not limited by these rules. Mixed bathing: In some prefectures of Japan, including Tokyo, where nude mixed bathing is banned, people in mixed baths are required to wear swimsuits or yugi (湯着), which are specifically designed for bathing. Etiquette: Ensuring cleanliness As at a sentō, at an onsen, all guests are expected to wash and rinse themselves thoroughly before entering the hot water. Bathing stations are equipped with stools, faucets, wooden buckets, and toiletries such as soap and shampoo; nearly all onsen also provide removable shower heads for bathing convenience. Entering the onsen while still dirty or with traces of soap on the body is socially unacceptable. Etiquette: Swimsuits Guests are not normally allowed to wear swimsuits in the baths. However, some modern onsen will require their guests to wear a swimming suit in their mixed baths. Etiquette: Towel Onsen guests generally bring a small towel with them to use as a wash cloth. The towel can also provide a modicum of modesty when walking between the washing area and the baths. Some onsen allow one to wear the towel into the baths, while others have posted signs prohibiting this, saying that it makes it harder to clean the bath. 
It is against the rules to immerse or dip towels in the onsen bath water, since this can be considered unclean. People normally set their towels off to the side of the water when enjoying the baths, or place their folded towels on top of their heads. Etiquette: Tattoos By 2015, just over half (56%) of onsen operators had banned bathers with tattoos from using their facilities. The original reason for the tattoo ban was to keep out Yakuza and members of other crime gangs, who traditionally have elaborate full-body decoration. However, tattoo-friendly onsen do exist. A 2015 study by the Japan National Tourism Organisation found that more than 30% of onsen operators at hotels and inns across the country will not turn someone with a tattoo away; another 13% said they would grant access to a tattooed guest under certain conditions, such as having the tattoo covered up. Some towns have many tattoo-friendly onsen that do not require guests to cover them up. Two such towns are Kinosaki Onsen in Hyōgo and Beppu Onsen in Ōita. With the increase in foreign customers due to growing tourism, some onsen that previously banned tattoos are loosening their rules to allow guests with small tattoos to enter, provided they cover their tattoos with a patch or sticking plaster. Risks: Article 18, paragraph 1 of the Japanese Hot Springs Act provides guidance on contraindications and cautions for bathing in hot springs and drinking their respective waters. Although millions of Japanese bathe in onsen every year with few noticeable side effects, there are still potential side effects to onsen usage, such as aggravating high blood pressure or heart disease. Legionella bacteria have been found in some onsen with poor sanitation. For example, 295 people were infected with Legionella and 7 died at an onsen in Miyazaki Prefecture in 2002. Revelations of poor sanitary practices at some onsen have led to improved regulation by hot-spring communities to maintain their reputation. There have been reports of infectious disease found in hot bodies of water worldwide, such as various Naegleria species. While studies have found the presence of Naegleria in hot spring waters, Naegleria fowleri, responsible for numerous fatal cases of primary amoebic meningoencephalitis around the world, has not been found to be present in the water at onsen. Nevertheless, fewer than five cases have been seen historically in Japan, although not conclusively linked to onsen exposure. Many onsen display notices reminding anyone with open cuts, sores, or lesions not to bathe. Additionally, in recent years onsen are increasingly adding chlorine to their waters to prevent infection, although many onsen purists seek natural, unchlorinated onsen that do not recycle their water but instead clean the baths daily. These precautions, as well as proper onsen usage (i.e. not placing the head underwater, washing thoroughly before entering the bath), greatly reduce any overall risk to bathers. Risks: Voyeurism is reported at some onsen. This is mitigated in some prefectures of Japan where nude mixed bathing is not permitted and visitors must wear swimsuits.
Selected onsen: Akagi, Gunma Akayu, Yamagata Arima Onsen, Kobe, Hyōgo Asamushi Onsen, Aomori Prefecture Aso, Kumamoto, a famous onsen area alongside Mount Aso, an active volcano Atami Onsen, Atami, Shizuoka, major onsen resort town near Tokyo Awara Onsen, Awara, Fukui Prefecture Awazu Onsen, Komatsu, Ishikawa Beppu Onsen, Beppu, Ōita Prefecture, famous for its multi-coloured baths Dake Onsen, Nihonmatsu, Fukushima Dōgo Onsen, Ehime Prefecture Funaoka Onsen, Kyoto Gero Onsen, Gero, Gifu, famous for its free open bath on riverbank of Hida River Geto Onsen, Iwate Prefecture Ginzan Onsen, Obanazawa, Yamagata Hakone, Kanagawa, famous onsen resort town near Tokyo Hanamaki, Iwate Hirayu Onsen, Takayama, Gifu Hokkawa Onsen, Shizuoka Ibusuki Onsen, Kagoshima Prefecture Iizaka Onsen, Fukushima Ikaho Onsen, Ikaho, Gunma Itō, Shizuoka Iwaki Yumoto Onsen, Fukushima Prefecture Iwamuro, Niigata, famous for onsen since the Edo period Jigokudani, Nagano Prefecture Jōzankei Onsen, Hokkaido Kaike Onsen, Yonago, Tottori Kakeyu Onsen, Nagano Kanzanji Onsen, Shizuoka Katayamazu Onsen, Kaga, Ishikawa Kawayu Onsen, Tanabe, Wakayama Kindaichi Onsen, Iwate Kinosaki, Hyōgo Kinugawa Onsen, Tochigi Kusatsu Onsen, Gunma Prefecture Misasa Onsen, Misasa, Tottori Prefecture Nagaragawa Onsen, Gifu, Gifu Nanki-Katsuura Onsen, Nachikatsuura, Wakayama Nanki-Shirahama Onsen, Shirahama, Wakayama Prefecture Naoshima, Kagawa Prefecture Naruko, Miyagi Noboribetsu, Hokkaido Nuruyu Onsen, Kumamoto Prefecture Nyūtō Onsen, Akita Prefecture Obama Onsen, Nagasaki Prefecture, the hottest Japanese hot spring (105 °C or 221 °F) Onneyu Onsen, Hokkaido Ōfuka Onsen, Akita Ryujin Onsen, Tanabe, Wakayama, one of Japan's famous three beautifying onsen Sabakoyu Onsen, Fukushima Prefecture, the oldest community onsen in Japan Sakunami Onsen, Miyagi Sawatari, Gunma Prefecture Senami Onsen, Niigata Prefecture Shima Onsen, Gunma Prefecture Shimabara, Nagasaki Shimobe Onsen, Yamanashi Prefecture Shiobara Onsen, Tochigi Prefecture Shuzenji Onsen, Shizuoka Prefecture Sōunkyo Onsen, Hokkaido Sukayu Onsen, Aomori Prefecture Sumatakyō Onsen, Shizuoka Prefecture Suwa, Nagano Prefecture Takanoyu Onsen, Akita Prefecture Takaragawa, Gunma, one of the largest outdoor mixed baths in Japan Takarazuka, Hyōgo Tara, Saga Tōyako, Hokkaidō Tsubame Onsen, Niigata - famous for its free open mixed onsen Tsuchiyu Onsen, Fukushima Prefecture Tsukioka Onsen, Niigata, Niigata Prefecture Tsurumaki Onsen, Kanagawa Unazuki Onsen, Kurobe, Toyama Prefecture Wakura Onsen, Nanao, Ishikawa Prefecture Yamanaka Onsen, Kaga, Ishikawa Yamashiro Onsen, Kaga, Ishikawa Yubara Onsen, Okayama Prefecture, one of the largest mixed baths at the foot of Yubara dam Yudanaka Onsen, Nagano Prefecture Yufuin, Ōita Prefecture Yugawara, Kanagawa Prefecture Yumura Onsen, (Shin'onsen, Hyōgo) Yunogo Onsen, Okayama Prefecture Yunokawa Onsen, Hokkaido Yunomine Onsen, Tanabe, Wakayama, site of the UNESCO World Heritage Tsuboyu bath Yuzawa, Niigata Zaō Onsen, Yamagata Prefecture
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Non-biological complex drugs** Non-biological complex drugs: Non-biological Complex Drugs (NBCDs) are medical compounds that cannot be defined as small molecular, fully identifiable drugs with active pharmaceutical ingredients. They are highly complex and cannot be defined as biologicals, as they are not derived from living materials. NBCDs are synthetic complex compounds, and they contain non-homomolecular, closely related molecular structures with often nanoparticular properties. This is, for instance, the case with iron sucrose and its similars, but also with other drug products, e.g. polypeptides (glatiramoids), swelling polymers, and liposomes, as the NBCD class is growing. Hence, due to their complexity and specific composition mix, such colloidal iron carbohydrate drugs cannot be fully identified, characterized, quantitated and/or described by physicochemical means to define their pharmaceutical properties. Therefore, in contrast to the generic paradigm pathway, which relies on a full pharmaceutical identity and sameness in vitro evaluation exercise, they need additional (biological, in vivo) evaluation with a reference product to assess comparability, e.g. in tissue targeting in the body. This requires an appropriate, yet-to-be-defined and harmonized regulatory approach for this new class of medicinal products. The profile and the performance of NBCDs are defined by the multi-step manufacturing process, which is laborious, difficult to control, and, being proprietary intellectual property, not disclosed. Minimal changes in, for instance, the starting materials or the process conditions might result in significant clinical differences affecting therapeutic effects or safety. Clinical Data: Studies have shown differences in therapeutic and safety effects between originator NBCDs and approved similars, even though these compounds had shown high similarities in physicochemical character. The structures responsible for these therapeutic differences are unknown. The differences in efficacy and safety (in vivo profile) cannot be detected in in vitro testing, as it is impossible to isolate and fully characterize these compounds. Nor are there defined models for proper evaluation. Furthermore, it is unknown what to look for and what causes the differences, due to a lack of understanding of the exact composition. This calls for additional characterization in biological systems, including clinical head-to-head analysis, to define the extent of similarity and the place in therapy as therapeutic alternatives or interchangeable/substitutable medicinal products. Clear evidence for these observations comes from retrospective studies on iron sucrose and iron sucrose similars. Approval challenges for iron sucrose similars: NBCDs have been approved according to the classical generic paradigm based on pharmaceutical equivalence and bioequivalence, without realizing the nano-properties of this type of medicinal compound. Therefore, these compounds were seen as therapeutically equivalent. As mentioned above, such an approach is not valid for follow-on versions of NBCDs. The classical generic approach disregards the complexity of NBCD compounds (pharmaceutical and biodistribution aspects), as they cannot be fully characterized in vitro, which is a prerequisite for the generic approach to predict pharmaceutical (quality) equivalence. The reason is the complexity and the non-homologous composition of these synthetic large-molecular products.
Even slight differences in manufacturing might result in therapeutic or safety differences that cannot be attributed to a known or defined component. Approval challenges for iron sucrose similars: The existing biosimilar pathway, which takes into consideration the complexity of biologics and their follow-on products, is not applicable to NBCDs and their similars either, since NBCDs are by definition not biologicals but rather synthetic. However, basic principles can be used for an NBCD similar evaluation. Since NBCD follow-on versions are not identical but only similar to the originator product, they are never the same in the way that generic small-molecule products are. NBCDs and their similars containing nanoparticulates can also be referred to as nanosimilars. A stepwise quality, non-clinical and clinical approach is suggested for market approval of NBCD nanosimilars and to show comparability. There is a lack of non-clinical models to test such products; one example is the rodent approach addressed by the EMA in its reference paper for the NBCD iron sucrose and its similars. Recently, the hatching egg model was used as an alternative model to study time-dependent iron concentrations in heart and liver avian tissues for various intravenous iron complexes applied in equimolar doses. Such models need in-depth evaluation and validation to demonstrate robustness and to further define their potential use in evaluation and comparison testing. When a product is evaluated as similar enough, the challenge still remains to define whether to use it as a therapeutic alternative or as an equivalent product, and whether the follow-on version can ultimately substitute for, or be interchanged with, the reference product; this also requires head-to-head comparisons in patients to prove therapeutic equivalence and comparable safety. Approval challenges for iron sucrose similars: Both the European Medicines Agency (EMA) and the US Food and Drug Administration (US FDA) have drafted reference papers and guidances for the industry for several types of NBCDs, e.g. for iron nanoparticle products. Regulatory science initiatives have also addressed gaps in the necessary investigations. For the comparability exercise, the question remains how to evaluate the totality of evidence for sufficient similarity between such test drugs and the reference product in order to conclude on the extent of comparability and its impact on use. Currently, the FDA follows a case-by-case approach for the evaluation of NBCD follow-on products, which is iterative, adaptive and flexible but also more general. The EMA, on the other hand, supports a class-related approach including non-clinical testing. A harmonized approach is, however, still missing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jennifer Dunne** Jennifer Dunne: Jennifer Dunne is an American ecologist whose research focuses on the network structure of food webs. One of 14 scientists who led critical advances in food web research over the last century, according to the journal Food Webs, Dunne uses ecological network research to compare the varying ways humans interact with other species through space and time, providing a quantitative perspective on the sustainability of socio-ecological systems. Education: Dunne attended Harvard University, where she earned an A.B. Cum Laude degree in philosophy, and received an M.A. in ecology and systematic biology from San Francisco State University. She earned her Ph.D. in Energy and Resources from the University of California, Berkeley, in 2000 and was a National Science Foundation postdoctoral research fellow in biological informatics. Research and career: Dunne is recognized as a leader in ecological network research, having made significant contributions toward understanding the dynamics and function of ecological networks through modeling and analysis. Ecological networks capture the complex interactions among species that provide structure to biodiversity. She is the author of more than 70 scientific publications. In 1998, with co-author Neo Martinez, she published her first work on the roles of time, space, and other scales (e.g., species richness) in food web research. In 2002, she published highly cited articles on the network structure of food webs and on the robustness of trophic networks (food webs) in the face of biodiversity loss through extinction. Dunne and her co-authors have also published influential papers on the dynamics of adaptive feeding in ecological networks, cascading extinctions, paleo-ecological networks reconstructed from a 48-million-year-old deposit of Messel shale, and networks reconstructed from the Chengjiang and Burgess Shale assemblages, work which indicates that prehistoric food webs are very similar to modern webs in their network structures. In 2016, her team published the first highly detailed food web that included humans (the Sanak Island Aleut) in a complex food web with other species. It suggested that the role Sanak Islanders played in their food web, as supergeneralists, had a stabilizing effect on the ecosystem. Her current and ongoing research extends the analysis of pre-industrial humans' roles in their ecosystems beyond food webs, to include other interactions such as using other species for tools and clothing. It was presented during a Scientific Session at the 2019 annual conference of the American Association for the Advancement of Science. Dunne conducts her research at the Santa Fe Institute, where she is a resident professor and also serves as Vice President for Science. She was named a Fellow of The Ecological Society of America in 2017 for deep and central contributions to the theory of food web analyses, including its extension to paleo food webs, and in 2020 was named a Fellow of the Network Science Society (NetSci) for her "pioneering work elucidating the network structure of ecology, particularly food webs, highlighting the interplay of dynamics and structure of networks." She has served on the editorial boards of Theory in Biosciences and The SFI Press, and was one of the original senior-level editors at the Journal of Complex Networks, Oxford University Press.
Dunne also serves as an External Advisor to the National Socio-Environmental Synthesis Center (SESYNC), on the steering committee for ASU-SFI Center for Biosocial Complex Systems, and on the Board of Advisors for the science/culture magazine Nautilus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thermogenic plant** Thermogenic plant: Thermogenic plants have the ability to raise their temperature above that of the surrounding air. Heat is generated in the mitochondria, as a secondary process of cellular respiration called thermogenesis. Alternative oxidase and uncoupling proteins similar to those found in mammals enable the process, which is still poorly understood. The role of thermogenesis: Botanists are not completely sure why thermogenic plants generate large amounts of excess heat, but most agree that it has something to do with increasing pollination rates. The most widely accepted theory states that the endogenous heat helps in spreading chemicals that attract pollinators to the plant. For example, the Voodoo lily uses heat to help spread its smell of rotting meat. This smell draws in flies which begin to search for the source of the smell. As they search the entire plant for the dead carcass, they pollinate the plant.Other theories state that the heat may provide a heat reward for the pollinator: pollinators are drawn to the flower for its warmth. This theory has less support because most thermogenic plants are found in tropical climates. The role of thermogenesis: Yet another theory is that the heat helps protect against frost damage, allowing the plant to germinate and sprout earlier than otherwise. For example, the skunk cabbage generates heat, which allows it to melt its way through a layer of snow in early spring. The heat, however, is mostly used to help spread its pungent odor and attract pollinators. Characteristics of thermogenic plants: Most thermogenic plants tend to be rather large. This is because the smaller plants do not have enough volume to create a considerable amount of heat. Large plants, on the other hand, have a lot of mass to create and retain heat.Thermogenic plants are also protogynous, meaning that the female part of the plant matures before the male part of the same plant. This reduces inbreeding considerably, as such a plant can be fertilized only by pollen from a different plant. This is why thermogenic plants release pungent odors to attract pollinating insects. Examples: Thermogenic plants are found in a variety of families, but Araceae in particular contains many such species. Examples from this family include the dead-horse arum (Helicodiceros muscivorus), the eastern skunk cabbage (Symplocarpus foetidus), the elephant foot yam (Amorphophallus paeoniifolius), elephant ear (Philodendron selloum), lords-and-ladies (Arum maculatum), and voodoo lily (Typhonium venosum). The titan arum (Amorphophallus titanum) uses thermogenically created water vapor to disperse its scent—that of rotting meat—above the cold air that settles over it at night in its natural habitat. Contrary to popular belief, the western skunk cabbage (Lysichiton americanus), a close relative from the family Araceae, is not thermogenic. Outside Araceae, the sacred lotus (Nelumbo nucifera) is thermogenic and endothermic, able to regulate its flower temperature to a certain range, an ability shared by at least one species in the non-photosynthetic parasitic genus Rhizanthes, Rhizanthes lowii. Heat production: Many endothermy plant species rely on alternative oxidase (AOX), which is an enzyme in the mitochondria organelle and is a part of the electron transport chain. The reduction of mitochondrial redox potential by alternative oxidase increases unproductive respiration. This metabolic process creates an excess of heat which warms thermogenic tissue or organs. 
Plants containing this alternative oxidase are unaffected by cyanide because AOX acts as an electron acceptor, collecting electrons from ubiquinol while bypassing the third complex of the electron transport chain. The AOX enzyme then reduces oxygen to water without generating a proton gradient, which is very inefficient: the drop in free energy from ubiquinol to oxygen is released as heat.
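To give a sense of the size of that free-energy drop, a rough, illustrative estimate can be made from standard textbook midpoint potentials (these approximate values are general biochemistry figures, not data from this article). With $n = 2$ electrons per ubiquinol oxidized, the Faraday constant $F \approx 96.5\ \mathrm{kJ\,V^{-1}\,mol^{-1}}$, and midpoint potentials of roughly $+0.04\ \mathrm{V}$ for ubiquinone/ubiquinol and $+0.82\ \mathrm{V}$ for $\mathrm{O_2/H_2O}$:

$$\Delta G^{\circ\prime} = -nF\,\Delta E^{\circ\prime} \approx -2 \times 96.5 \times (0.82 - 0.04)\ \mathrm{kJ\,mol^{-1}} \approx -150\ \mathrm{kJ\,mol^{-1}}$$

Because the AOX route conserves none of this energy as a proton gradient, on the order of 150 kJ per mole of ubiquinol oxidized is dissipated as heat in thermogenic tissue.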
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PixMob** PixMob: PixMob is a wireless lighting technology of Eski Inc. that controls wearable LED devices: by using the wearable objects as pixels, an event's audience itself can become a display. The light effects produced by these LED devices can be controlled to match a light show, pulsate in sync with the music, react to the body movement, etc. PixMob was developed by the Montreal-based company Eski Inc. in 2010. The technology comes in different versions providing different ways to wirelessly control any of the objects. The latest version, PixMob VIDEO, debuted during the Super Bowl XLVIII Halftime Show. Technology: PixMob technology uses infrared to light up RGB LEDs that are embedded in different objects such as balls or wristbands. These wearable objects are given to an audience, transforming each individual into a pixel during the show. To light up each pixel (i.e. each LED), commands are sent from computers to transmitters that emit invisible light (infrared). The infrared signal is picked up by infrared receivers in each object and goes through a tiny 8-bit microprocessor to light up the LEDs. The type of transmitter involved differs depending on the selected version of the technology. Wash fixtures or lekos, typically seen in the live entertainment industry, are usually used. For the PixMob video technology, VT transmitters beam video instructions onto the audience, almost like a matrix creating a virtual map. With this technology, the infrared receiver decodes the signal differently depending on each pixel/person's location. This enables the creation of animated video effects and transforms the audience into a display screen. Despite the low-resolution result due to a low number of pixels, quite detailed video effects can be achieved on a large canvas, using bright colors and bold movements. Uses: Microsoft Kinect Launch The technology debuted at the Microsoft Kinect Launch in June 2010, where white satin ponchos embedded with wirelessly-controlled LEDs were used to integrate the audience into the show. Uses: Arcade Fire – Coachella 2011 In 2011, Montreal's Arcade Fire used PixMob balls during their encore performance of "Wake Up" at Coachella Festival. The project, entitled Summer Into Dust, was sponsored by The Creators Project and produced by Radical Media. This was made possible due to Arcade Fire, Chris Milk, Moment Factory and Tangible Interaction. More than 1,250 glowing balls were dropped from the stage onto the audience. They contained battery-powered circuit boards studded with full-color LEDs that changed colors in unison, thanks to built-in infrared receivers and microphones. Uses: Tiësto Tiësto used PixMob wristbands for his 2014 residency at Hakkasan. During the Super Bowl XLVIII Halftime Show, he tweeted that he would use the video technology for the February 28 show at Hakkasan Las Vegas Restaurant and Nightclub. Uses: Super Bowl XLVIII Halftime Show PixMob launched their video version of the technology at the 2014 Super Bowl XLVIII halftime show. Each spectator received a black knitted hat called a "video ski hat" embedded with 3 LEDs and an infrared receiver. Just before the show, spectators were asked to put on their hats and remain seated to form a huge display. Wearing the video ski hats, each spectator became a pixel in a giant human screen composed of 80,000 pixels. Touchdown Entertainment, the company that produced the event, claimed it was "the largest ever LED screen". 
The spectators saw different kinds of visual effects, including a Pepsi logo moving around the stadium as well as images of the live Red Hot Chili Peppers performance and a fireworks display. Inspirations: Several sources of inspiration for the technology have been given by its inventors, David Parent and Vincent Leclerc, in interviews: the use of lighters in concerts, the Burning Man festival, fire rituals, as well as the large human screens made from crowd members holding placards in Korea. The co-founders explain that their goal is to augment the collective experience of being part of a show.
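The Technology section above describes the basic pipeline: transmitters beam a low-resolution video frame over infrared, and each wearable decodes only the command aimed at its own location before driving its RGB LED. The Python sketch below is a purely conceptual simulation of that idea; the frame format, class names, and decoding details are hypothetical and do not represent PixMob's actual, proprietary protocol.

```python
# Conceptual simulation of location-addressed lighting: a transmitter "beams"
# a low-resolution RGB frame, and each wearable picks out the colour for the
# grid cell it happens to sit in. All names and formats are hypothetical.

from dataclasses import dataclass
from typing import List, Tuple

Color = Tuple[int, int, int]  # (red, green, blue), 0-255 each

@dataclass
class Wearable:
    row: int  # which cell of the beamed frame this wearable sits in
    col: int

    def receive(self, frame: List[List[Color]]) -> Color:
        """Decode only the pixel aimed at this wearable's location."""
        r, g, b = frame[self.row][self.col]
        # On real hardware, this is where the microcontroller would drive the LED.
        print(f"wearable({self.row},{self.col}) -> LED set to ({r},{g},{b})")
        return (r, g, b)

def make_frame(rows: int, cols: int, color: Color) -> List[List[Color]]:
    """A trivially simple 'video frame': every cell the same colour."""
    return [[color for _ in range(cols)] for _ in range(rows)]

if __name__ == "__main__":
    crowd = [Wearable(r, c) for r in range(2) for c in range(3)]  # a tiny 2x3 audience
    frame = make_frame(2, 3, (255, 0, 0))   # beam "all red"
    frame[0][1] = (0, 0, 255)               # one seat gets blue, like a moving logo
    for person in crowd:
        person.receive(frame)
```

The point of the sketch is the addressing scheme: the same broadcast frame reaches everyone, but each device only ever acts on the cell corresponding to where it sits, which is what lets a crowd behave like a display.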
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multitheoretical psychotherapy** Multitheoretical psychotherapy: Multitheoretical psychotherapy (MTP) is a new approach to integrative psychotherapy developed by Jeff E. Brooks-Harris and his colleagues at the University of Hawaii at Manoa. MTP is organized around five principles for integration: intentional, multidimensional, multitheoretical, strategy-based, and relational. Being intentional involves making informed choices about the focus of treatment, theoretical conceptualization, intervention strategies, and relational stances. MTP encourages counselors to think in a multidimensional manner, recognizing the rich interaction between thoughts, actions, and feelings within the context of biology, interpersonal patterns, social systems, and cultural contexts. MTP uses a multitheoretical framework to organize training and treatment. Psychotherapists can use a combination of theories to formulate a multitheoretical conceptualization to understand clients and guide interventions. The combination of theoretical ideas and interventions is based on the individual needs of clients. MTP encourages therapists to work interactively with thoughts, actions, and feelings: cognitive strategies are used to encourage functional thoughts; behavioral interventions promote effective actions; and experiential-humanistic skills can be used to explore adaptive feelings and personal experiences. Counselors are also encouraged to use theories that explore the contextual dimensions that shape thinking, acting, and feeling: biopsychosocial strategies focus on biology and result in adaptive health practices; psychodynamic-interpersonal interventions are used to understand and modify interpersonal patterns; systemic-constructivist skills are used to explore family and social systems and encourage adaptive personal narratives; and multicultural-feminist strategies encourage clients to adapt to cultural contexts and overcome oppression. MTP training involves building a repertoire of key strategies drawn from different theoretical approaches. Key strategies have been described using strategy markers (suggesting when a particular skill will be most useful) and expected consequences (predicting the likely outcome of a specific intervention). Training also involves learning how to combine ideas and strategies from different theories based on the individual needs of clients. Integrative treatment planning involves conducting a multidimensional survey, establishing an interactive focus on two or three dimensions, formulating a multitheoretical conceptualization, and choosing intervention strategies corresponding to focal dimensions. The Brooks-Harris (2008) text describes applications of MTP to depression, anxiety, substance abuse, and health problems. Multitheoretical psychotherapy: As a second-generation model of integrative psychotherapy, MTP combines features of earlier approaches. Like Arnold Lazarus' multimodal therapy, MTP encourages attention to the interaction of different dimensions. Like Prochaska and DiClemente's transtheoretical model, MTP describes the relationship between several different theories. Like Larry E. Beutler's systematic treatment selection, MTP predicts when particular strategies will be most useful.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Arachidic acid** Arachidic acid: Arachidic acid, also known as icosanoic acid, is a saturated fatty acid with a 20-carbon chain. It is a minor constituent of cupuaçu butter (7%), perilla oil (0–1%), peanut oil (1.1–1.7%), corn oil (3%), and cocoa butter (1%). The salts and esters of arachidic acid are known as arachidates. Its name derives from the Latin arachis—peanut. It can be formed by the hydrogenation of arachidonic acid. Reduction of arachidic acid yields arachidyl alcohol. Arachidic acid is used for the production of detergents, photographic materials and lubricants.
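As a quick structural cross-check of the 20-carbon saturated chain described above, the snippet below derives the molecular formula from the structure. It is illustrative only and assumes the RDKit cheminformatics library is available; it is not part of the original article.

```python
# Verify the molecular formula of arachidic (icosanoic) acid from its structure.
# Requires RDKit (https://www.rdkit.org/); the SMILES encodes CH3-(CH2)18-COOH.
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

smiles = "C" * 19 + "C(=O)O"                  # 19 chain carbons + 1 carboxyl carbon = C20
mol = Chem.MolFromSmiles(smiles)
print(rdMolDescriptors.CalcMolFormula(mol))   # expected: C20H40O2
print(round(Descriptors.MolWt(mol), 2))       # roughly 312.5 g/mol
```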
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Conjunctivitis** Conjunctivitis: Conjunctivitis, also known as pink eye, is inflammation of the outermost layer of the white part of the eye and the inner surface of the eyelid. It makes the eye appear pink or reddish. Pain, burning, scratchiness, or itchiness may occur. The affected eye may have increased tears or be "stuck shut" in the morning. Swelling of the white part of the eye may also occur. Itching is more common in cases due to allergies. Conjunctivitis can affect one or both eyes.The most common infectious causes are viral followed by bacterial. The viral infection may occur along with other symptoms of a common cold. Both viral and bacterial cases are easily spread between people. Allergies to pollen or animal hair are also a common cause. Diagnosis is often based on signs and symptoms. Occasionally, a sample of the discharge is sent for culture.Prevention is partly by handwashing. Treatment depends on the underlying cause. In the majority of viral cases, there is no specific treatment. Most cases due to a bacterial infection also resolve without treatment; however, antibiotics can shorten the illness. People who wear contact lenses and those whose infection is caused by gonorrhea or chlamydia should be treated. Allergic cases can be treated with antihistamines or mast cell inhibitor drops.About 3 to 6 million people get acute conjunctivitis each year in the United States. In adults, viral causes are more common, while in children, bacterial causes are more common. Typically, people get better in one or two weeks. If visual loss, significant pain, sensitivity to light or signs of herpes occur, or if symptoms do not improve after a week, further diagnosis and treatment may be required. Conjunctivitis in a newborn, known as neonatal conjunctivitis, may also require specific treatment. Signs and symptoms: Red eye, swelling of the conjunctiva, and watering of the eyes are symptoms common to all forms of conjunctivitis. However, the pupils should be normally reactive, and the visual acuity normal.Conjunctivitis is identified by inflammation of the conjunctiva, manifested by irritation and redness. Examination using a slit lamp (biomicroscope) may improve diagnostic accuracy. Examination of the palpebral conjunctiva, that overlying the inner aspects of the eyelids, is usually more diagnostic than examination of the bulbal conjunctiva, that overlying the sclera. Signs and symptoms: Viral Between 65% and 90% of cases of viral conjunctivitis are caused by adenoviruses. Signs and symptoms: Viral conjunctivitis is often associated with an infection of the upper respiratory tract, a common cold, or a sore throat. Its symptoms include excessive watering and itching. The infection usually begins in one eye but may spread easily to the other eye.Viral conjunctivitis manifests as a fine, diffuse pinkness of the conjunctiva which may be mistaken for iritis, but corroborative signs on microscopy, particularly numerous lymphoid follicles on the tarsal conjunctiva, and sometimes a punctate keratitis are seen. Signs and symptoms: Allergic Allergic conjunctivitis is inflammation of the conjunctiva due to allergy. The specific allergens may differ among patients. Symptoms result from the release of histamine and other active substances by mast cells, and consist of redness (mainly due to vasodilation of the peripheral small blood vessels), swelling of the conjunctiva, itching, and increased production of tears. 
Signs and symptoms: Bacterial Bacterial conjunctivitis causes the rapid onset of conjunctival redness, swelling of the eyelid, and a sticky discharge. Typically, symptoms develop first in one eye, but may spread to the other eye within 2–5 days. Conjunctivitis due to common pus-producing bacteria causes marked grittiness or irritation and a stringy, opaque, greyish or yellowish discharge that may cause the lids to stick together, especially after sleep. Severe crusting of the infected eye and the surrounding skin may also occur. The gritty or scratchy feeling is sometimes localized enough that patients may insist that they have a foreign body in the eye. Common bacteria responsible for nonacute bacterial conjunctivitis are Staphylococcus, Streptococcus, and Haemophilus species. Less commonly, Chlamydia spp. may be the cause. Signs and symptoms: Bacteria such as Chlamydia trachomatis or Moraxella spp. can cause a nonexudative but persistent conjunctivitis without much redness. Bacterial conjunctivitis may cause the production of membranes or pseudomembranes that cover the conjunctiva. Pseudomembranes consist of a combination of inflammatory cells and exudates and adhere loosely to the conjunctiva, while true membranes are more tightly adherent and cannot be easily peeled away. Cases of bacterial conjunctivitis that involve the production of membranes or pseudomembranes are associated with Neisseria gonorrhoeae, β-hemolytic streptococci, and Corynebacterium diphtheriae. C. diphtheriae causes membrane formation in the conjunctiva of unimmunized children. Signs and symptoms: Chemical Chemical eye injury may result when an acidic or alkaline substance gets in the eye. Alkali burns are typically worse than acidic burns. Mild burns produce conjunctivitis, while more severe burns may cause the cornea to turn white. Litmus paper may be used to test for chemical causes. When a chemical cause has been confirmed, the eye or eyes should be flushed until the pH is in the range 6–8. Anaesthetic eye drops can be used to decrease the pain. Irritant or toxic conjunctivitis is primarily marked by redness. If due to a chemical splash, it is often present in only the lower conjunctival sac. With some chemicals, above all with caustic alkalis such as sodium hydroxide, necrosis of the conjunctiva, marked by a deceptively white eye due to vascular closure, may occur, followed by sloughing off of the dead epithelium. A slit lamp examination is likely to show evidence of anterior uveitis. Signs and symptoms: Biomarkers Omics technologies have been used to identify biomarkers that inform on the emergence and progression of conjunctivitis. For example, in chronic inflammatory cicatrizing conjunctivitis, alterations in active oxylipins, lysophospholipids, fatty acids, and endocannabinoids have been found, from which potential biomarkers linked to inflammatory processes were identified. Other Inclusion conjunctivitis of the newborn is a conjunctivitis that may be caused by the bacterium Chlamydia trachomatis, and may lead to acute, purulent conjunctivitis. However, it is usually self-healing. Causes: Infective conjunctivitis is most commonly caused by a virus. Bacterial infections, allergies, other irritants, and dryness are also common causes. Both bacterial and viral infections are contagious, passing from person to person or spread through contaminated objects or water. Contact with contaminated fingers is a common cause of conjunctivitis.
Bacteria may also reach the conjunctiva from the edges of the eyelids and the surrounding skin, from the nasopharynx, from infected eye drops or contact lenses, from the genitals or the bloodstream. Infection by human adenovirus accounts for 65% to 90% of cases of viral conjunctivitis. Causes: Viral Adenoviruses are the most common cause of viral conjunctivitis (adenoviral keratoconjunctivitis). Herpetic keratoconjunctivitis, caused by herpes simplex viruses, can be serious and requires treatment with aciclovir. Acute hemorrhagic conjunctivitis is a highly contagious disease caused by one of two enteroviruses, enterovirus 70 and coxsackievirus A24. These were first identified in an outbreak in Ghana in 1969, and have spread worldwide since then, causing several epidemics. Causes: Bacterial The most common causes of acute bacterial conjunctivitis are Staphylococcus aureus, Streptococcus pneumoniae, and Haemophilus influenzae. Though very rare, hyperacute cases are usually caused by Neisseria gonorrhoeae or Neisseria meningitidis. Chronic cases of bacterial conjunctivitis are those lasting longer than 3 weeks, and are typically caused by S. aureus, Moraxella lacunata, or Gram-negative enteric flora. Causes: Allergic Conjunctivitis may also be caused by allergens such as pollen, perfumes, cosmetics, smoke, dust mites, Balsam of Peru, or eye drops. The most frequent cause of conjunctivitis is allergic conjunctivitis, and it affects 15% to 40% of the population. Allergic conjunctivitis accounts for 15% of eye-related primary care consultations; most cases involve seasonal exposures in the spring and summer or perennial conditions. Causes: Other Computer vision syndrome, dry eye syndrome, and reactive arthritis are other causes. Conjunctivitis is part of the triad of reactive arthritis, which is thought to be caused by autoimmune cross-reactivity following certain bacterial infections. Reactive arthritis is highly associated with HLA-B27. Conjunctivitis is also associated with the autoimmune disease relapsing polychondritis. Diagnosis: Cultures are not often taken or needed as most cases resolve either with time or typical antibiotics. If bacterial conjunctivitis is suspected, but no response to topical antibiotics is seen, swabs for bacterial culture should be taken and tested. Viral culture may be appropriate in epidemic case clusters. A patch test is used to identify the causative allergen in allergic conjunctivitis. Although conjunctival scrapes for cytology can be useful in detecting chlamydial and fungal infections, allergy, and dysplasia, they are rarely done because of the cost and the general dearth of laboratory staff experienced in handling ocular specimens. Conjunctival incisional biopsy is occasionally done when granulomatous diseases (e.g., sarcoidosis) or dysplasia are suspected. Diagnosis: Classification Conjunctivitis may be classified either by cause or by extent of the inflamed area. Causes include allergy, bacteria, viruses, chemicals, and autoimmune disease. Neonatal conjunctivitis is often grouped separately from bacterial conjunctivitis because it is caused by different bacteria than the more common cases of bacterial conjunctivitis. By extent of involvement: Blepharoconjunctivitis is the dual combination of conjunctivitis with blepharitis (inflammation of the eyelids). Keratoconjunctivitis is the combination of conjunctivitis and keratitis (corneal inflammation). Blepharokeratoconjunctivitis is the combination of conjunctivitis with blepharitis and keratitis.
It is clinically defined by changes of the lid margin, meibomian gland dysfunction, redness of the eye, conjunctival chemosis and inflammation of the cornea. Diagnosis: Differential diagnosis Some more serious conditions can present with a red eye, such as infectious keratitis, angle-closure glaucoma, or iritis. These conditions require the urgent attention of an ophthalmologist. Signs of such conditions include decreased vision, significantly increased sensitivity to light, inability to keep the eye open, a pupil that does not respond to light, or a severe headache with nausea. Fluctuating blurring is common, due to tearing and mucoid discharge. Mild photophobia is common. However, if any of these symptoms is prominent, considering other diseases such as glaucoma, uveitis, keratitis, and even meningitis or carotico-cavernous fistula is important.A more comprehensive differential diagnosis for the red or painful eye includes: Corneal abrasion Subconjunctival hemorrhage Pinguecula Blepharitis Dacryocystitis Keratoconjunctivitis sicca (dry eye) Keratitis Herpes simplex Herpes zoster Episcleritis - an inflammatory condition that produces a similar appearance to conjunctivitis, but without discharge or tearing Uveitis Acute angle-closure glaucoma Endophthalmitis Orbital cellulitis Prevention: The most effective prevention is good hygiene, especially avoiding rubbing the eyes with infected hands. Vaccination against adenovirus, Haemophilus influenzae, pneumococcus, and Neisseria meningitidis is also effective.Povidone-iodine eye solution has been found to prevent neonatal conjunctivitis. It is becoming more commonly used globally because of its low cost. Management: Conjunctivitis resolves in 65% of cases without treatment, within 2–5 days. The prescription of antibiotics is not necessary in most cases. Viral Viral conjunctivitis usually resolves on its own and does not require any specific treatment. Antihistamines (e.g., diphenhydramine) or mast cell stabilizers (e.g., cromolyn) may be used to help with the symptoms. Povidone-iodine has been suggested as a treatment, but as of 2008, evidence to support it was poor. Allergic For allergic conjunctivitis, cool water poured over the face with the head inclined downward constricts capillaries, and artificial tears sometimes relieve discomfort in mild cases. In more severe cases, nonsteroidal anti-inflammatory medications and antihistamines may be prescribed. Persistent allergic conjunctivitis may also require topical steroid drops. Management: Bacterial Bacterial conjunctivitis usually resolves without treatment. Topical antibiotics may be needed only if no improvement is observed after 3 days. No serious effects were noted either with or without treatment. Because antibiotics do speed healing in bacterial conjunctivitis, their use may be considered. Antibiotics are also recommended for those who wear contact lenses, are immunocompromised, have disease which is thought to be due to chlamydia or gonorrhea, have a fair bit of pain, or have copious discharge. Gonorrheal or chlamydial infections require both oral and topical antibiotics.The choice of antibiotic varies based on the strain or suspected strain of bacteria causing the infection. Fluoroquinolones, sodium sulfacetamide, or trimethoprim/polymyxin may be used, typically for 7–10 days. 
Cases of meningococcal conjunctivitis can also be treated with systemic penicillin, as long as the strain is sensitive to penicillin. When investigated as a treatment, povidone-iodine ophthalmic solution has also been observed to have some effectiveness against bacterial and chlamydial conjunctivitis, with a possible role suggested in locations where topical antibiotics are unavailable or costly. Management: Chemical Conjunctivitis due to chemicals is treated via irrigation with Ringer's lactate or saline solution. Chemical injuries, particularly alkali burns, are medical emergencies, as they can lead to severe scarring and intraocular damage. People with chemically induced conjunctivitis should not touch their eyes to avoid spreading the chemical. Epidemiology: Conjunctivitis is the most common eye disease. Rates of disease are related to the underlying cause, which varies by age as well as the time of year. Acute conjunctivitis is most frequently found in infants, school-age children and the elderly. The most common cause of infectious conjunctivitis is viral conjunctivitis. It is estimated that acute conjunctivitis affects 6 million people annually in the United States. Some seasonal trends have been observed for the occurrence of different forms of conjunctivitis. In the northern hemisphere, the occurrence of bacterial conjunctivitis peaks from December to April, viral conjunctivitis peaks in the summer months, and allergic conjunctivitis is more prevalent throughout the spring and summer. History: An adenovirus was first isolated by Rowe et al. in 1953. Two years later, Jawetz et al. published on epidemic keratoconjunctivitis. "Madras eye" is a colloquial term that has been used in India for the disease. Society and culture: Conjunctivitis imposes economic and social burdens. The cost of treating bacterial conjunctivitis in the United States was estimated to be $377 million to $857 million per year. Approximately 1% of all primary care office visits in the United States are related to conjunctivitis. Approximately 70% of all people with acute conjunctivitis present to primary care and urgent care.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sound post** Sound post: In a string instrument, the sound post or soundpost is a dowel inside the instrument under the treble end of the bridge, spanning the space between the top and back plates and held in place by friction. It serves as a structural support for an archtop instrument, transfers sound from the top plate to the back plate and alters the tone of the instrument by changing the vibrational modes of the plates. Sound post: The sound post is sometimes referred to as the âme, a French word meaning "soul". The bow has also been referred to as the soul of these instruments. The Italians use the same term, anima, for this.Sound posts are used: In all members of the violin family In some members of the viol family In some archtop guitars In other string instruments Sound post adjustment: The position of the sound post inside a violin is critical, and moving it by very small amounts (as little as 0.5mm or 0.25mm, or less) can make a big difference in the sound quality and loudness of an instrument. Specialized tools for standing up or moving a sound post are commercially available. Often the pointed end of an S-shaped setter is sharpened with a file and left rough, to grip the post a bit better. Sound post adjustment: Soundpost adjustment is as much art as science, depending on the ears, experience, structural sense, and sensitive touch of the luthier. The rough guidelines in the following section outline the effects of various moves, but the interaction of all the factors involved keeps it from being a simple process. Moving the sound post has very complex consequences on the sound. In the end, it is the ear of the person doing the adjusting that determines the desired location of the post. Effect of position on the instrument: Moving the sound post towards the fingerboard tends to increase brilliance and loudness. Moving the sound post towards the tail piece decreases the loudness and adds a richness or hollowness to the tonal quality of the instrument. Moving it towards the outside of the instrument increases brightness and moving in towards the middle of the instrument increases the lower frequencies. There is very little room to move the post from side to side without fitting a new post (or shortening the existing one) since tension (how firmly the post is wedged between top and back) plays an important role in tone adjustment. Perfect wood-to-wood fit at both ends of the post is critical to getting the desired sound.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fuel pump** Fuel pump: A fuel pump is a component used in many liquid-fuelled engines (such as petrol/gasoline or diesel engines) to transfer the fuel from the fuel tank to the device where it is mixed with the intake air (such as the carburetor or fuel injector). Carbureted engines often use low-pressure mechanical pumps that are mounted on the engine. Fuel injected engines use either electric fuel pumps mounted inside the fuel tank (for lower pressure manifold injection systems) or high-pressure mechanical pumps mounted on the engine (for high-pressure direct injection systems). Some engines do not use any fuel pump at all. A low-pressure fuel supply used by a carbureted engine can be achieved through a gravity feed system, i.e. by simply mounting the tank higher than the carburetor. This method is commonly used in carbureted motorcycles, where the tank is usually directly above the engine. Low-pressure mechanical pumps: On engines that use a carburetor (e.g. in older cars, lawnmowers and power tools), a mechanical fuel pump is typically used in order to transfer fuel from the fuel tank into the carburetor. These fuel pumps operate at a relatively low fuel pressure of 10–15 psi (0.7–1.0 bar). The two most widely used types of mechanical pumps are diaphragm pumps and plunger pumps. High-pressure mechanical pumps: Pumps for modern direct-injection engines operate at a much higher pressure, up to 30,000 psi (2,100 bar) and have configurations such as common rail radial piston, common rail two piston radial, inline, port and helix, and metering unit. Injection pumps are fuel lubricated which prevents oil from contaminating the fuel. High-pressure mechanical pumps: Port and Helix pumps Port and Helix pumps are most commonly used in marine diesel engines because of their simplicity, reliability, and its ability to be scaled up in proportion to the engine size. The pump is similar to that of a radial piston-type pump, but instead of a piston it has a machined plunger that has no seals. When the plunger is at top dead center, the injection to the cylinder is finished and it is returned on its downward stroke by a compression spring.Due to the fixed height of a cam lobe, the amount of fuel being pumped to the injector is controlled by a rack and pinion device that rotates the plunger, thus allowing variable amounts of fuel to the area above the plunger. The fuel is then forced through a check valve and into the fuel injector nozzle. High-pressure mechanical pumps: Plunger-type pumps Plunger-type pumps are a type of positive-displacement pump used by diesel engines. These pumps contain a chamber whose volume is increased and/or decreased by a moving plunger, along with check valves at the inlet and discharge ports. It is similar to that of a piston pump, but the high-pressure seal is stationary while the smooth cylindrical plunger slides through the seal. Plunger-type pumps are often mounted on the side of the injection pump and driven by the camshaft. These pumps usually run at a fuel pressure of 3,600–26,100 psi (250–1,800 bar). Electric pumps: In fuel-injected petrol engines, an electric fuel pump is typically located inside the fuel tank. For older port injection and throttle-body injection systems, this "in-tank" fuel pump transports the fuel from the fuel tank to the engine, as well as pressurising the fuel to typically 40–60 psi (3–4 bar). 
For direct-injection systems, in contrast, the in-tank fuel pump transports the fuel to the engine, where a separate fuel pump pressurises the fuel to a much higher pressure. Electric pumps: Since the electric pump does not require mechanical power from the engine, it is feasible to locate the pump anywhere between the engine and the fuel tank. The reasons that the fuel pump is typically located in the fuel tank are: by submerging the pump in fuel at the bottom of the tank, the pump is cooled by the surrounding fuel; and liquid fuel by itself (i.e. without oxygen present) is not flammable, so surrounding the fuel pump with fuel reduces the risk of fire. In-tank fuel pumps are often part of an assembly consisting of the fuel pump, fuel strainer and fuel level sensor (the latter used for the fuel gauge). Turbopumps: Rocket engines use a turbopump to supply the fuel and oxidizer into the combustion chamber.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BODIPY** BODIPY: BODIPY is the technical common name of a chemical compound with formula C9H7BN2F2, whose molecule consists of a boron difluoride group BF2 joined to a dipyrromethene group C9H7N2; specifically, the compound 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene in the IUPAC nomenclature. The common name is an abbreviation for "boron-dipyrromethene". It is a red crystalline solid, stable at ambient temperature, soluble in methanol.The compound itself was isolated only in 2009, but many derivatives—formally obtained by replacing one or more hydrogen atoms by other functional groups—have been known since 1968, and comprise the important class of BODIPY dyes. These organoboron compounds have attracted much interest as fluorescent dyes and markers in biological research. Structure: In its crystalline solid form, the core BODIPY is almost, but not entirely, planar and symmetrical; except for the two fluorine atoms, that lie on the perpendicular bisecting plane. Its bonding can be explained by assuming a formal negative charge on the boron atom, and a formal positive charge on one of the nitrogen atoms. Synthesis: BODIPY and its derivatives can be obtained by reacting the corresponding 2,2'-dipyrromethene derivatives with boron trifluoride-diethyl ether complex (BF3·(C2H5)2O) in the presence of triethylamine or 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU). The difficulty of the synthesis was due to instability of the usual dipyrromethene precursor, rather than of BODIPY itself.The dipyrromethene precursors are accessed from a suitable pyrrole derivatives by several methods. Normally, one alpha-position in employed pyrroles is substituted and the other is free. Condensation of such pyrrole, often available from Knorr pyrrole synthesis, with an aromatic aldehyde in the presence of TFA gives dipyrromethane, which is oxidized to dipyrromethene using a quinone oxidant such as DDQ or p-chloranil.Alternatively, dipyrromethenes are prepared by treating a pyrrole with an activated carboxylic acid derivative, usually an acyl chloride. Unsymmetrical dipyrromethenes can be obtained by condensing pyrroles with 2-acylpyrroles. Intermediate dipyrromethanes may be isolated and purified, but isolation of dipyrromethenes is usually compromised by their instability. Derivatives: The BODIPY core has a rich derivative chemistry due to the high tolerance for substitutions in the pyrrole and aldehyde (or acyl chloride) starting materials.Hydrogen atoms at the 2 and 6 positions of the cyclic core can be displaced by halogen atoms using succinimide reagents such as NCS, NBS and NIS - which allows for further post-functionalisation through palladium coupling reactions with boronate esters, tin reagents etc.The two fluorine atoms on the boron atom can be replaced, during or after synthesis, by other strong nucleophilic reagents, such as lithiated alkyne or aryl species, chlorine, methoxy, or a divalent "strap". The reaction is catalysed by BBr3 or SnCl4. Fluorescence: BODIPY and many of its derivatives have received attention recently for being fluorescent dyes with unique properties. They strongly absorb UV-radiation and re-emit it in very narrow frequency spreads, with high quantum yields, mostly at wavelengths below 600 nm. They are relatively insensitive to the polarity and pH of their environment and are reasonably stable to physiological conditions. Small modifications to their structures enable tuning of their fluorescence characteristics. BODIPY dyes are relatively chemically inert. 
Fluorescence is quenched in a solution, which limits application. This problem has been handled by synthesizing asymmetric boron complexes and replacing the fluorine groups with phenyl groups. Fluorescence: The unsubstituted BODIPY has a broad absorption band, from about 420 to 520 nm (peaking at 503 nm) and a broad emission band from about 480 to 580 nm (peaking at 512 nm), with a fluorescence lifetime of 7.2 ns. Its fluorescence quantum yield is near 1, greater than that of substituted BODIPY dyes and comparable to those of rhodamine and fluorescein, but fluorescence is lost above 50 °C.BODIPY dyes are notable for their uniquely small Stokes shift, high, environment-independent fluorescence quantum yields, often approaching 100% even in water, sharp excitation and emission peaks contributing to overall brightness, and high solubility in many organic solvents. The combination of these qualities makes BODIPY fluorophores promising for imaging applications. The position of the absorption and emission bands remain almost unchanged in solvents of different polarity as the dipole moment and transition dipole are mutually orthogonal. Potential applications: BODIPY conjugates are widely studied as potential sensors and for labelling by exploiting its highly tunable optoelectronic properties.Numerous BODIPY derivatives are being investigated as electroactive species for single-substance redox flow batteries. In recent years, BODIPY derivatives are also being explored as photosensitizers for applications in photodynamic therapy and photocatalysis.
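A quick way to read the quoted peaks (absorption maximum at 503 nm, emission maximum at 512 nm) is to convert the Stokes shift into wavenumbers. The short snippet below is only a reading aid based on those two numbers from the text, not part of the source material.

```python
# Convert the quoted absorption/emission maxima of unsubstituted BODIPY
# (503 nm and 512 nm) into a Stokes shift in wavenumbers (cm^-1).
absorption_nm, emission_nm = 503.0, 512.0

def wavenumber_cm(nm):            # 1 nm = 1e-7 cm, so wavenumber = 1e7 / lambda[nm]
    return 1e7 / nm

stokes_shift = wavenumber_cm(absorption_nm) - wavenumber_cm(emission_nm)
print(f"{stokes_shift:.0f} cm^-1")   # ~350 cm^-1, an unusually small Stokes shift
```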
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Frédéric Gagey** Frédéric Gagey: Frédéric Gagey (born 29 June 1956) is a French businessman, the current CFO of Air France–KLM. Gagey is a graduate of the Ecole Polytechnique and the ENSAE School of Economics, Statistics and Finance. He also holds a master's degree in economics from the Université de Paris I. His career started at the French Bureau of Statistics (INSEE) and in the Ministry of Finance. From September 1994 to April 1997, he held major positions at Air Inter. Following the merger between that airline and Air France in April 1997, Gagey was appointed Vice President for privatization and financial communication at Air France. He then assumed the position of financial director in June 1999. He joined KLM on January 1, 2005, before becoming Executive Vice President Financial Affairs. In 2012, he was appointed Chief Financial Officer at Air France. Frédéric Gagey: In 2016, Gagey was appointed Executive Vice President Finance of the Air France-KLM group.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Endothoracic fascia** Endothoracic fascia: The endothoracic fascia is the layer of loose connective tissue deep to the intercostal spaces and ribs, separating these structures from the underlying pleura. This fascial layer is the outermost membrane of the thoracic cavity. The endothoracic fascia contains variable amounts of fat. It becomes more fibrous over the apices of the lungs as the suprapleural membrane. It separates the internal thoracic artery from the parietal pleura.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kolmogorov equations (continuous-time Markov chains)** Kolmogorov equations (continuous-time Markov chains): In mathematics and statistics, in the context of Markov processes, the Kolmogorov equations, including Kolmogorov forward equations and Kolmogorov backward equations, are a pair of systems of differential equations that describe the time evolution of the process's distribution. This article, as opposed to the article titled Kolmogorov equations, focuses on the scenario where we have a continuous-time Markov chain (so the state space Ω is countable). In this case, we can treat the Kolmogorov equations as a way to describe the probability P(x,s;y,t), where x,y ∈ Ω (the state space) and t > s, with t,s ∈ ℝ≥0 the final and initial times, respectively. The equations: For the case of a countable state space we put i,j in place of x,y. The Kolmogorov forward equations read ∂Pij(s;t)/∂t = ∑k Pik(s;t) Akj(t), where A(t) is the transition rate matrix (also known as the generator matrix), while the Kolmogorov backward equations are ∂Pij(s;t)/∂s = −∑k Pkj(s;t) Aik(s). The functions Pij(s;t) are continuous and differentiable in both time arguments. They represent the probability that the system that was in state i at time s jumps to state j at some later time t > s. The continuous quantities Aij(t) satisfy Aij(t) ≥ 0 for j ≠ i, and each row of the generator sums to zero, ∑j Aij(t) = 0. Background: The original derivation of the equations by Kolmogorov starts with the Chapman–Kolmogorov equation (Kolmogorov called it fundamental equation) for time-continuous and differentiable Markov processes on a finite, discrete state space. In this formulation, it is assumed that the probabilities P(i,s;j,t) are continuous and differentiable functions of t > s. Also, adequate limit properties for the derivatives are assumed. Feller derives the equations under slightly different conditions, starting with the concept of purely discontinuous Markov process and then formulating them for more general state spaces. Feller proves the existence of solutions of probabilistic character to the Kolmogorov forward equations and Kolmogorov backward equations under natural conditions. Relation with the generating function: Still in the discrete state case, letting s = 0 and assuming that the system initially is found in state i, the Kolmogorov forward equations describe an initial-value problem for finding the probabilities of the process, given the quantities Ajk(t). We write pk(t) = Pik(0;t), where ∑k pk(t) = 1; then dpk(t)/dt = ∑j Ajk(t) pj(t), with pk(0) = δik, k = 0,1,…. For the case of a pure death process with constant rates, the only nonzero coefficients are Aj,j−1 = μj, j ≥ 1. Letting Ψ(x,t) = ∑k x^k pk(t), the system of equations can in this case be recast as a partial differential equation for Ψ(x,t) with initial condition Ψ(x,0) = x^i. After some manipulation, the system reduces to a single first-order partial differential equation for Ψ(x,t). History: A brief historical note can be found at Kolmogorov equations.
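As a concrete numerical illustration (not part of the original article), the forward equations dpk/dt = ∑j Ajk pj for a small time-homogeneous chain can be integrated directly and checked against the exact solution p(t) = p(0)·exp(tA). The two-state generator below is an arbitrary example chosen only to show the mechanics.

```python
# Minimal sketch (not from the article): integrate the Kolmogorov forward
# equations dp_k/dt = sum_j A_jk p_j for a two-state chain with an arbitrary,
# time-independent generator A, and compare with the exact matrix exponential.
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 2.0],     # rows sum to zero; off-diagonal entries >= 0
              [ 1.0, -1.0]])

p = np.array([1.0, 0.0])       # start in state 0: p_k(0) = delta_{0k}
dt, T = 1e-4, 2.0

for _ in range(int(T / dt)):   # simple forward-Euler integration of dp/dt = p A
    p = p + dt * (p @ A)

exact = np.array([1.0, 0.0]) @ expm(A * T)
print(p, exact)                # the two distributions agree to ~1e-4
```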
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**National Clinical Guideline Centre** National Clinical Guideline Centre: The National Guideline Centre (NGC), formerly known as the National Clinical Guideline Centre, is hosted by the Royal College of Physicians. The guidelines it produces provide recommendations for good practice by healthcare professionals. The guidelines are also intended to help patients make informed decisions, to improve communication between the patient and healthcare professional, and to raise the profile of research work. They are generally provided in a full-length version and in various simplified formats for different purposes and audiences. Examples of guidelines produced by the NCGC include: Patient experience (Guidance and Quality Standard), Epilepsy, Hypertension, Stable angina, Hip fracture, Anaemia management in chronic kidney disease, Sedation in children and young people, Nocturnal enuresis in children, Transient loss of consciousness, and Chronic heart failure.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dual EC DRBG** Dual EC DRBG: Dual_EC_DRBG (Dual Elliptic Curve Deterministic Random Bit Generator) is an algorithm that was presented as a cryptographically secure pseudorandom number generator (CSPRNG) using methods in elliptic curve cryptography. Despite wide public criticism, including the public identification of the possibility that the National Security Agency put a backdoor into a recommended implementation, it was for seven years one of four CSPRNGs standardized in NIST SP 800-90A as originally published circa June 2006, until it was withdrawn in 2014. Weakness: a potential backdoor: Weaknesses in the cryptographic security of the algorithm were known and publicly criticised well before the algorithm became part of a formal standard endorsed by the ANSI, ISO, and formerly by the National Institute of Standards and Technology (NIST). One of the weaknesses publicly identified was the potential of the algorithm to harbour a kleptographic backdoor advantageous to those who know about it—the United States government's National Security Agency (NSA)—and no one else. In 2013, The New York Times reported that documents in their possession but never released to the public "appear to confirm" that the backdoor was real, and had been deliberately inserted by the NSA as part of its Bullrun decryption program. In December 2013, a Reuters news article alleged that in 2004, before NIST standardized Dual_EC_DRBG, NSA paid RSA Security $10 million in a secret deal to use Dual_EC_DRBG as the default in the RSA BSAFE cryptography library, which resulted in RSA Security becoming the most important distributor of the insecure algorithm. RSA responded that they "categorically deny" that they had ever knowingly colluded with the NSA to adopt an algorithm that was known to be flawed, but also stated "we have never kept [our] relationship [with the NSA] a secret".Sometime before its first known publication in 2004, a possible kleptographic backdoor was discovered with the Dual_EC_DRBG's design, with the design of Dual_EC_DRBG having the unusual property that it was theoretically impossible for anyone but Dual_EC_DRBG's designers (NSA) to confirm the backdoor's existence. Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG. The backdoor would allow NSA to decrypt for example SSL/TLS encryption which used Dual_EC_DRBG as a CSPRNG.Members of the ANSI standard group to which Dual_EC_DRBG was first submitted were aware of the exact mechanism of the potential backdoor and how to disable it, but did not elect to disable or publicize the backdoor. The general cryptographic community was initially not aware of the potential backdoor, until Dan Shumow and Niels Ferguson's publication, or of Certicom's Daniel R. L. Brown and Scott Vanstone's 2005 patent application describing the backdoor mechanism. Weakness: a potential backdoor: In September 2013, The New York Times reported that internal NSA memos leaked by Edward Snowden indicated that the NSA had worked during the standardization process to eventually become the sole editor of the Dual_EC_DRBG standard, and concluded that the Dual_EC_DRBG standard did indeed contain a backdoor for the NSA. In response, NIST stated that "NIST would not deliberately weaken a cryptographic standard", but according to the New York Times story, the NSA had been spending $250 million per year to insert backdoors in software and hardware as part of the Bullrun program. 
A Presidential advisory committee subsequently set up to examine NSA's conduct recommended among other things that the US government "fully support and not undermine efforts to create encryption standards". On April 21, 2014, NIST withdrew Dual_EC_DRBG from its draft guidance on random number generators, recommending that "current users of Dual_EC_DRBG transition to one of the three remaining approved algorithms as quickly as possible." Description: Overview The algorithm uses a single integer s as state. Whenever a new random number is requested, this integer is updated. The k-th state is given by sk = gP(sk−1). The returned random integer r is a function of the state; the k-th random number is rk = gQ(sk). The function gP(x) depends on the fixed elliptic curve point P. gQ(x) is similar except that it uses the point Q. The points P and Q stay constant for a particular implementation of the algorithm. Description: Details The algorithm allows for different constants, variable output length and other customization. For simplicity, the one described here will use the constants from curve P-256 (one of the 3 sets of constants available) and have fixed output length. The algorithm operates exclusively over a prime finite field Fp (Z/pZ) where p is prime. The state, the seed and the random numbers are all elements of this field. The field size is ffffffff 00000000 ffffffff ffffffff bce6faad a7179e84 f3b9cac2 fc632551 (base 16). An elliptic curve over Fp is given by y² = x³ − 3x + b, where the constant b is 5ac635d8 aa3a93e7 b3ebbd55 769886bc 651d06b0 cc53b0f6 3bce3c3e 27d2604b (base 16). The points on the curve are E(Fp). Two of these points are given as the fixed points P and Q, with P,Q ∈ E(Fp). Their coordinates (base 16) are P = (6b17d1f2 e12c4247 f8bce6e5 63a440f2 77037d81 2deb33a0 f4a13945 d898c296, 4fe342e2 fe1a7f9b 8ee7eb4a 7c0f9e16 2bce3357 6b315ece cbb64068 37bf51f5) and Q = (c97445f4 5cdef9f0 d3e05e1e 585fc297 235b82b5 be8ff3ef ca67c598 52018192, b28ef557 ba31dfcb dd21ac46 e2a91e3c 304f44cb 87058ada 2cb81515 1e610046). A function to extract the x-coordinate is used. It "converts" from elliptic curve points to elements of the field. Description: The extraction function is X(x,y) = x. Output integers are truncated before being output: with these constants the 16 most significant bits of each 256-bit value are discarded, so the truncation function is t(x) = x mod 2^240. The functions gP and gQ raise the fixed points to a power. "Raising to a power" in this context means using the special operation defined for points on elliptic curves (scalar multiplication): gP(x) = X(x·P) and gQ(x) = t(X(x·Q)). The generator is seeded with an element from Fp: s1 = gP(seed). The k-th state and random number are sk = gP(sk−1) and rk = gQ(sk), and the random numbers r1, r2, … are the successive outputs. Security: The stated purpose of including the Dual_EC_DRBG in NIST SP 800-90A is that its security is based on computational hardness assumptions from number theory. A mathematical security reduction proof can then prove that as long as the number theoretical problems are hard, the random number generator itself is secure. However, the makers of Dual_EC_DRBG did not publish a security reduction for Dual_EC_DRBG, and it was shown soon after the NIST draft was published that Dual_EC_DRBG was indeed not secure, because it output too many bits per round. The output of too many bits (along with carefully chosen elliptic curve points P and Q) is what makes the NSA backdoor possible, because it enables the attacker to revert the truncation by brute force guessing. The output of too many bits was not corrected in the final published standard, leaving Dual_EC_DRBG both insecure and backdoored. In many other standards, constants that are meant to be arbitrary are chosen by the nothing up my sleeve number principle, where they are derived from pi or similar mathematical constants in a way that leaves little room for adjustment.
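To make the update/output structure described above (sk = gP(sk−1), rk = gQ(sk), followed by truncation) easier to follow, here is a deliberately simplified sketch that mimics only the shape of the algorithm: a multiplicative group modulo a small prime stands in for the elliptic curve group, modular exponentiation stands in for scalar multiplication by P and Q, and an arbitrary truncation keeps only the low bits of each output. None of the constants below come from the standard; they are placeholders for illustration.

```python
# Toy sketch of the Dual_EC_DRBG *structure* only -- NOT the real algorithm.
# A multiplicative group mod a small prime replaces the elliptic curve group,
# exponentiation replaces scalar multiplication, and constants are arbitrary.

p = 2**61 - 1          # arbitrary prime standing in for the curve/field order
P, Q = 5, 7            # stand-ins for the two fixed points P and Q

def g_P(s):            # state update: "X(s*P)" becomes P**s mod p
    return pow(P, s, p)

def g_Q(s):            # output: "t(X(s*Q))" becomes a truncated Q**s mod p
    return pow(Q, s, p) & ((1 << 48) - 1)   # keep low bits, drop the top

s = g_P(123456789)     # "seeding" the generator
outputs = []
for _ in range(4):     # each round: advance the state, then emit a truncated output
    s = g_P(s)
    outputs.append(g_Q(s))
print(outputs)
```

The essential point is that each output is a truncated one-way image of the hidden state, while the state itself is advanced by a separate one-way function.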
However, Dual_EC_DRBG did not specify how the default P and Q constants were chosen, possibly because they were constructed by NSA to be backdoored. Because the standard committee were aware of the potential for a backdoor, a way for an implementer to choose their own secure P and Q was included. But the exact formulation in the standard was written such that use of the alleged backdoored P and Q was required for FIPS 140-2 validation, so the OpenSSL project chose to implement the backdoored P and Q, even though they were aware of the potential backdoor and would have preferred generating their own secure P and Q. The New York Times would later write that NSA had worked during the standardization process to eventually become the sole editor of the standard. A security proof was later published for Dual_EC_DRBG by Daniel R.L. Brown and Kristian Gjøsteen, showing that the generated elliptic curve points would be indistinguishable from uniformly random elliptic curve points, and that if fewer bits were output in the final output truncation, and if the two elliptic curve points P and Q were independent, then Dual_EC_DRBG is secure. The proof relied on the assumption that three problems were hard: the decisional Diffie–Hellman problem (which is generally accepted to be hard), and two newer less-known problems which are not generally accepted to be hard: the truncated point problem, and the x-logarithm problem. Dual_EC_DRBG was quite slow compared to many alternative CSPRNGs (which don't have security reductions), but Daniel R.L. Brown argues that the security reduction makes the slow Dual_EC_DRBG a valid alternative (assuming implementors disable the obvious backdoor). Note that Daniel R.L. Brown works for Certicom, the main owner of elliptic curve cryptography patents, so there may be a conflict of interest in promoting an EC CSPRNG. Security: The alleged NSA backdoor would allow the attacker to determine the internal state of the random number generator from looking at the output from a single round (32 bytes); all future output of the random number generator can then easily be calculated, until the CSPRNG is reseeded with an external source of randomness. This makes for example SSL/TLS vulnerable, since the setup of a TLS connection includes the sending of a randomly generated cryptographic nonce in the clear. NSA's alleged backdoor would depend on their knowledge of the single e such that eQ = P. This is a hard problem if P and Q are set ahead of time, but it is easy if P and Q are chosen with knowledge of e. e is a secret key presumably known only by NSA, and the alleged backdoor is a kleptographic asymmetric hidden backdoor. Matthew Green's blog post The Many Flaws of Dual_EC_DRBG has a simplified explanation of how the alleged NSA backdoor works by employing the discrete-log kleptogram introduced in Crypto 1997; a toy numerical sketch of the same idea appears below. Standardization and implementations: NSA first introduced Dual_EC_DRBG in the ANSI X9.82 DRBG in the early 2000s, including the same parameters which created the alleged backdoor, and Dual_EC_DRBG was published in a draft ANSI standard. Dual_EC_DRBG also exists in the ISO 18031 standard. According to John Kelsey (who together with Elaine Barker was listed as author of NIST SP 800-90A), the possibility of the backdoor by carefully chosen P and Q was brought up at an ANSI X9F1 Tool Standards and Guidelines Group meeting.
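Returning to the backdoor mechanism sketched in the Security paragraph above: an attacker who knows e with eQ = P can recover the generator's next internal state from a single raw output. The same toy multiplicative-group analogue used earlier can mimic this, provided the output is left untruncated (brute-forcing the truncation is exactly the extra work the real attack needs). All constants below are illustrative assumptions, not values from the standard.

```python
# Toy illustration of the kleptographic relationship P = e*Q -- NOT the real attack.
# A multiplicative group mod a prime stands in for the curve; no truncation is
# applied, which is precisely the simplification that makes this demo trivial.

p = 2**61 - 1                  # arbitrary prime modulus (illustrative only)
Q = 7                          # stand-in for the point Q
e = 1020304050607              # the secret relationship: P = Q**e  ("P = e*Q")
P = pow(Q, e, p)

def g_P(s): return pow(P, s, p)          # state update
def g_Q(s): return pow(Q, s, p)          # output (untruncated in this toy)

s = g_P(42)                    # some internal state unknown to the attacker
r = g_Q(s)                     # one published output

# An attacker who knows e recovers the *next* state from the single output r:
predicted_next_state = pow(r, e, p)      # r**e = Q**(s*e) = P**s = g_P(s)
actual_next_state = g_P(s)
assert predicted_next_state == actual_next_state
print("next state recovered:", hex(predicted_next_state))
```

With truncation restored, the attacker would additionally have to enumerate the discarded high bits of each output, which is the brute-force guessing step mentioned for the real parameters.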
When Kelsey asked Don Johnson of Cygnacom about the origin of Q, Johnson answered in a 27 October 2004 email to Kelsey that NSA had prohibited the public discussion of generation of an alternative Q to the NSA-supplied one. At least two members of the ANSI X9F1 Tool Standards and Guidelines Group which wrote ANSI X9.82, Daniel R. L. Brown and Scott Vanstone from Certicom, were aware of the exact circumstances and mechanism in which a backdoor could occur, since they filed a patent application in January 2005 on exactly how to insert or prevent the backdoor in DUAL_EC_DRBG. The working of the "trap door" mentioned in the patent is identical to the one later confirmed in Dual_EC_DRBG. Writing about the patent in 2014, commentator Matthew Green describes the patent as a "passive aggressive" way of spiting NSA by publicizing the backdoor, while still criticizing everybody on the committee for not actually disabling the backdoor they obviously were aware of. Brown and Vanstone's patent lists two necessary conditions for the backdoor to exist: 1) Chosen Q An elliptic curve random number generator avoids escrow keys by choosing a point Q on the elliptic curve as verifiably random. Intentional use of escrow keys can provide for backup functionality. The relationship between P and Q is used as an escrow key and stored for a security domain. The administrator logs the output of the generator to reconstruct the random number with the escrow key. Standardization and implementations: 2) Small output truncation [0041] Another alternative method for preventing a key escrow attack on the output of an ECRNG, shown in Figures 3 and 4, is to add a truncation function to ECRNG to truncate the ECRNG output to approximately half the length of a compressed elliptic curve point. Preferably, this operation is done in addition to the preferred method of Figure 1 and 2, however, it will be appreciated that it may be performed as a primary measure for preventing a key escrow attack. The benefit of truncation is that the list of R values associated with a single ECRNG output r is typically infeasible to search. For example, for a 160-bit elliptic curve group, the number of potential points R in the list is about 2^80, and searching the list would be about as hard as solving the discrete logarithm problem. The cost of this method is that the ECRNG is made half as efficient, because the output length is effectively halved. Standardization and implementations: According to John Kelsey, the option in the standard to choose a verifiably random Q was added in response to the suspected backdoor, though in such a way that FIPS 140-2 validation could only be attained by using the possibly backdoored Q. Steve Marquess (who helped implement NIST SP 800-90A for OpenSSL) speculated that this requirement to use the potentially backdoored points could be evidence of NIST complicity. It is not clear why the standard did not specify the default Q in the standard as a verifiably generated nothing up my sleeve number, or why the standard did not use greater truncation, which Brown's patent said could be used as the "primary measure for preventing a key escrow attack". The small truncation was unusual compared to previous EC PRGs, which according to Matthew Green had only output 1/2 to 2/3 of the bits in the output function. The low truncation was in 2006 shown by Gjøsteen to make the RNG predictable and therefore unusable as a CSPRNG, even if Q had not been chosen to contain a backdoor.
The standard says that implementations "should" use the small max_outlen provided, but gives the option of outputting a multiple of 8 fewer bits. Appendix C of the standard gives a loose argument that outputting fewer bits will make the output less uniformly distributed. Brown's 2006 security proof relies on outlen being much smaller than the default max_outlen value in the standard. Standardization and implementations: The ANSI X9F1 Tool Standards and Guidelines Group which discussed the backdoor also included three employees from the prominent security company RSA Security. In 2004, RSA Security made an implementation of Dual_EC_DRBG which contained the NSA backdoor the default CSPRNG in their RSA BSAFE library, as a result of a secret $10 million deal with NSA. In 2013, after the New York Times reported that Dual_EC_DRBG contained a backdoor by the NSA, RSA Security said they had not been aware of any backdoor when they made the deal with NSA, and told their customers to switch CSPRNGs. In the 2014 RSA Conference keynote, RSA Security Executive Chairman Art Coviello explained that RSA had seen declining revenue from encryption, and had decided to stop being "drivers" of independent encryption research, but instead to "put their trust behind" the standards and guidance from standards organizations such as NIST. A draft of NIST SP 800-90A including the Dual_EC_DRBG was published in December 2005. The final NIST SP 800-90A including Dual_EC_DRBG was published in June 2006. Documents leaked by Snowden have been interpreted as suggesting that the NSA backdoored Dual_EC_DRBG, with those making the allegation citing the NSA's work during the standardization process to eventually become the sole editor of the standard. The early usage of Dual_EC_DRBG by RSA Security (for which NSA was later reported to have secretly paid $10 million) was cited by the NSA as an argument for Dual_EC_DRBG's acceptance into the NIST SP 800-90A standard. RSA Security subsequently cited Dual_EC_DRBG's acceptance into the NIST standard as a reason they used Dual_EC_DRBG. Daniel R. L. Brown's March 2006 paper on the security reduction of Dual_EC_DRBG mentions the need for more output truncation and a randomly chosen Q, but mostly in passing, and does not mention his conclusions from his patent that these two defects in Dual_EC_DRBG together can be used as a backdoor. Brown writes in the conclusion: "Therefore, the ECRNG should be a serious consideration, and its high efficiency makes it suitable even for constrained environments." Note that others have criticised Dual_EC_DRBG as being extremely slow, with Bruce Schneier concluding "It's too slow for anyone to willingly use it", and Matthew Green saying Dual_EC_DRBG is "Up to a thousand times slower" than the alternatives. The potential for a backdoor in Dual_EC_DRBG was not widely publicised outside of internal standard group meetings. It was only after Dan Shumow and Niels Ferguson's 2007 presentation that the potential for a backdoor became widely known. Shumow and Ferguson had been tasked with implementing Dual_EC_DRBG for Microsoft, and at least Ferguson had discussed the possible backdoor in a 2005 X9 meeting. Bruce Schneier wrote in a 2007 Wired article that the Dual_EC_DRBG's flaws were so obvious that nobody would use Dual_EC_DRBG: "It makes no sense as a trap door: It's public, and rather obvious. It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it."
Schneier was apparently unaware that RSA Security had used Dual_EC_DRBG as the default in BSAFE since 2004. Standardization and implementations: OpenSSL implemented all of NIST SP 800-90A including Dual_EC_DRBG at the request of a client. The OpenSSL developers were aware of the potential backdoor because of Shumow and Ferguson's presentation, and wanted to use the method included in the standard to choose a guaranteed non-backdoored P and Q, but were told that to get FIPS 140-2 validation they would have to use the default P and Q. OpenSSL chose to implement Dual_EC_DRBG, despite its dubious reputation, for completeness, noting that OpenSSL tried to be complete and implements many other insecure algorithms. OpenSSL did not use Dual_EC_DRBG as the default CSPRNG, and it was discovered in 2013 that a bug made the OpenSSL implementation of Dual_EC_DRBG non-functional, meaning that no one could have been using it. Bruce Schneier reported in December 2007 that Microsoft added Dual_EC_DRBG support to Windows Vista, though not enabled by default, and Schneier warned against the known potential backdoor. Windows 10 and later will silently replace calls to Dual_EC_DRBG with calls to CTR_DRBG based on AES. On September 9, 2013, following the Snowden leak, and the New York Times report on the backdoor in Dual_EC_DRBG, the National Institute of Standards and Technology (NIST) ITL announced that in light of community security concerns, it was reissuing SP 800-90A as a draft standard, and re-opening SP800-90B/C for public comment. NIST now "strongly recommends" against the use of Dual_EC_DRBG, as specified in the January 2012 version of SP 800-90A. The discovery of a backdoor in a NIST standard has been a major embarrassment for NIST. RSA Security had kept Dual_EC_DRBG as the default CSPRNG in BSAFE even after the wider cryptographic community became aware of the potential backdoor in 2007, but there does not seem to have been general awareness in the community of BSAFE's usage of Dual_EC_DRBG as a user option. Only after widespread concern about the backdoor was there an effort to find software which used Dual_EC_DRBG, of which BSAFE was by far the most prominent found. After the 2013 revelations, RSA Security Chief of Technology Sam Curry provided Ars Technica with a rationale for originally choosing the flawed Dual EC DRBG standard as default over the alternative random number generators. The technical accuracy of the statement was widely criticized by cryptographers, including Matthew Green and Matt Blaze. On December 20, 2013, it was reported by Reuters that RSA had accepted a secret payment of $10 million from the NSA to set the Dual_EC_DRBG random number generator as the default in two of its encryption products. On December 22, 2013, RSA posted a statement to its corporate blog "categorically" denying a secret deal with the NSA to insert a "known flawed random number generator" into its BSAFE toolkit. Following the New York Times story asserting that Dual_EC_DRBG contained a backdoor, Brown (who had applied for the backdoor patent and published the security reduction) wrote an email to an IETF mailing list defending the Dual_EC_DRBG standard process: 1. Dual_EC_DRBG, as specified in NIST SP 800-90A and ANSI X9.82-3, allows an alternative choice of constants P and Q. As far as I know, the alternatives do not admit a known feasible backdoor. In my view, it is incorrect to imply that Dual_EC_DRBG always has a backdoor, though I admit a wording to qualify the affected cases may be awkward.
Standardization and implementations: 2. Many things are obvious in hindsight. I'm not sure if this was obvious. [...] 8. All considered, I don't see how the ANSI and NIST standards for Dual_EC_DRBG can be viewed as a subverted standard, per se. But maybe that's just because I'm biased or naive. Software and hardware which contained the possible backdoor: Implementations which used Dual_EC_DRBG would usually have gotten it via a library. At least RSA Security (BSAFE library), OpenSSL, Microsoft, and Cisco have libraries which included Dual_EC_DRBG, but only BSAFE used it by default. According to the Reuters article which revealed the secret $10 million deal between RSA Security and NSA, RSA Security's BSAFE was the most important distributor of the algorithm. There was a flaw in OpenSSL's implementation of Dual_EC_DRBG that made it non-working outside test mode, from which OpenSSL's Steve Marquess concludes that nobody used OpenSSL's Dual_EC_DRBG implementation.A list of products which have had their CSPRNG-implementation FIPS 140-2 validated is available at the NIST. The validated CSPRNGs are listed in the Description/Notes field. Note that even if Dual_EC_DRBG is listed as validated, it may not have been enabled by default. Many implementations come from a renamed copy of a library implementation.The BlackBerry software is an example of non-default use. It includes support for Dual_EC_DRBG, but not as default. BlackBerry Ltd has however not issued an advisory to any of its customers who may have used it, because they do not consider the probable backdoor a vulnerability. Jeffrey Carr quotes a letter from Blackberry: The Dual EC DRBG algorithm is only available to third party developers via the Cryptographic APIs on the [Blackberry] platform. In the case of the Cryptographic API, it is available if a 3rd party developer wished to use the functionality and explicitly designed and developed a system that requested the use of the API. Software and hardware which contained the possible backdoor: Bruce Schneier has pointed out that even if not enabled by default, having a backdoored CSPRNG implemented as an option can make it easier for NSA to spy on targets which have a software-controlled command-line switch to select the encryption algorithm, or a "registry" system, like most Microsoft products, such as Windows Vista: A Trojan is really, really big. You can’t say that was a mistake. It’s a massive piece of code collecting keystrokes. But changing a bit-one to a bit-two [in the registry to change the default random number generator on the machine] is probably going to be undetected. It is a low conspiracy, highly deniable way of getting a backdoor. So there’s a benefit to getting it into the library and into the product. Software and hardware which contained the possible backdoor: In December 2013, a proof of concept backdoor was published that uses the leaked internal state to predict subsequent random numbers, an attack viable until the next reseed. Software and hardware which contained the possible backdoor: In December 2015, Juniper Networks announced that some revisions of their ScreenOS firmware used Dual_EC_DRBG with the suspect P and Q points, creating a backdoor in their firewall. Originally it was supposed to use a Q point chosen by Juniper which may or may not have been generated in provably safe way. Dual_EC_DRBG was then used to seed ANSI X9.17 PRNG. This would have obfuscated the Dual_EC_DRBG output thus killing the backdoor. 
However, a "bug" in the code exposed the raw output of the Dual_EC_DRBG, hence compromising the security of the system. This backdoor was then backdoored itself by an unknown party which changed the Q point and some test vectors. Allegations that the NSA had persistent backdoor access through Juniper firewalls had already been published in 2013 by Der Spiegel.The kleptographic backdoor is an example of NSA's NOBUS policy, of having security holes that only they can exploit.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Palladin** Palladin: Palladin is a protein that in humans is encoded by the PALLD gene. Palladin is a component of actin-containing microfilaments that control cell shape, adhesion, and contraction. Discovery: Palladin was characterised independently by two research groups, first in the lab of Carol Otey (in 2000) and then in the lab of Olli Carpén (in 2001). It is a part of the myotilin-myopalladin-palladin family and may play an important role in modulating the actin cytoskeleton. Palladin, in contrast to myotilin and myopalladin, which are expressed only in striated muscle, is expressed ubiquitously in cells of mesenchymal origin. Discovery: Palladin was named after the Italian Renaissance architect Andrea Palladio, reflecting its localization to architectural elements of the cell. Isoforms: In humans, it appears that seven different isoforms exist, some of which arise through alternative splicing. In mice, three major isoforms of palladin arise from a single gene. These isoforms contain between three and five copies (depending on the isoform) of an Ig-like domain and between one and two copies of a polyproline domain. Function: Palladin's precise biological role is poorly understood, but it has been shown to play a role in cytoskeletal organization, embryonic development, cell motility, scar formation in the skin, and nerve cell development. Disease linkage: Recently, it has been demonstrated that palladin RNA is overexpressed in patients with pancreatic neoplasia, and that palladin is both overexpressed and mutated in an inherited form of pancreatic cancer. The palladin mutation identified in familial pancreatic cancer may be unique to a single North American family, as this same mutation has not been found in European or North American populations examined in two other genetic studies. Further, Salaria et al. have shown that palladin is overexpressed in the non-neoplastic stroma of pancreatic cancer, but only rarely in the cancer cells per se, suggesting that palladin's role in this disease may involve changes in the tumor microenvironment. More research is clearly required before this protein and its role in neoplasia can be fully understood. Disease linkage: Disease-causing mutations have also been identified in the two other members of this gene family. Myotilin mutations cause a form of limb-girdle muscular dystrophy, and mutations in myopalladin cause an inherited form of heart disease (dilated cardiomyopathy). Interactions: PALLD has been shown to interact with EZR.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Computational cybernetics** Computational cybernetics: Computational cybernetics is the integration of cybernetics and computational intelligence techniques. Though the term Cybernetics entered the technical lexicon in the 1940s and 1950s, it was first used informally as a popular noun in the 1960s, when it became associated with computers, robotics, Artificial Intelligence and Science fiction. The initial promise of cybernetics was that it would revolutionise the mathematical biologies (a blanket term that includes some kinds of AI) by its use of closed loop semantics rather than open loop mathematics to describe and control living systems and biological process behaviours. It is fair to say that this idealistic program goal remains generally unrealised. Computational cybernetics: While ‘philosophical’ treatments of cybernetics are common, especially in the biosciences, computational cybernetics has failed to gain traction in mainstream engineering and graduate education. This makes its specific achievements all the more remarkable. Feldman and Dyer (independently) discovered the true mechanism of somatic motor governance. This theory, called ‘equilibrium point theory’ by Feldman [1], and ‘neocybernetics’ by Dyer [2] debunks the concept of efference copy completely. While Cybernetics is primarily concerned with the study of control systems, computational cybernetics focuses on their automatic (complex, autonomic, flexible, adaptive) operation. Furthermore, computational cybernetics covers not only mechanical, but biological (living), social and economical systems. To achieve this goal, it uses research from the fields of communication theory, signal processing, information technology, control theory, the theory of adaptive systems, the theory of complex systems (game theory, and operational research). IEEE, a professional organization for the advancement of technology, has organized two international conferences focusing on computational cybernetics in 2008 and 2013.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wire** Wire: A wire is a flexible strand of metal. Wire is commonly formed by drawing the metal through a hole in a die or draw plate. Wire gauges come in various standard sizes, as expressed in terms of a gauge number or cross-sectional area. Wires are used to bear mechanical loads, often in the form of wire rope. In electricity and telecommunications signals, a "wire" can refer to an electrical cable, which can contain a "solid core" of a single wire or separate strands in stranded or braided forms. Usually cylindrical in geometry, wire can also be made in square, hexagonal, flattened rectangular, or other cross-sections, either for decorative purposes, or for technical purposes such as high-efficiency voice coils in loudspeakers. Edge-wound coil springs, such as the Slinky toy, are made of special flattened wire. History: In antiquity, jewelry often contains large amounts of wire in the form of chains and applied decoration that is accurately made and which must have been produced by some efficient, if not technically advanced, means. In some cases, strips cut from metal sheet were made into wire by pulling them through perforations in stone beads. This causes the strips to fold round on themselves to form thin tubes. This strip drawing technique was in use in Egypt by the 2nd Dynasty (c. 2890 – c. 2686 BCE). From the middle of the 2nd millennium BCE most of the gold wires in jewellery are characterised by seam lines that follow a spiral path along the wire. Such twisted strips can be converted into solid round wires by rolling them between flat surfaces or the strip wire drawing method. The strip twist wire manufacturing method was superseded by drawing in the ancient Old World sometime between about the 8th and 10th centuries AD. There is some evidence for the use of drawing further East prior to this period.Square and hexagonal wires were possibly made using a swaging technique. In this method a metal rod was struck between grooved metal blocks, or between a grooved punch and a grooved metal anvil. Swaging is of great antiquity, possibly dating to the beginning of the 2nd millennium BCE in Egypt and in the Bronze and Iron Ages in Europe for torcs and fibulae. Twisted square-section wires are a very common filigree decoration in early Etruscan jewelry. History: In about the middle of the 2nd millennium BCE, a new category of decorative tube was introduced which imitated a line of granules. True beaded wire, produced by mechanically distorting a round-section wire, appeared in the Eastern Mediterranean and Italy in the seventh century BCE, perhaps disseminated by the Phoenicians. Beaded wire continued to be used in jewellery into modern times, although it largely fell out of favour in about the tenth century CE when two drawn round wires, twisted together to form what are termed 'ropes', provided a simpler-to-make alternative. A forerunner to beaded wire may be the notched strips and wires which first occur from around 2000 BCE in Anatolia. History: Wire was drawn in England from the medieval period. The wire was used to make wool cards and pins, manufactured goods whose import was prohibited by Edward IV in 1463. The first wire mill in Great Britain was established at Tintern in about 1568 by the founders of the Company of Mineral and Battery Works, who had a monopoly on this. Apart from their second wire mill at nearby Whitebrook, there were no other wire mills before the second half of the 17th century. 
Despite the existence of mills, the drawing of wire down to fine sizes continued to be done manually. History: According to a description in the early 20th century, "[w]ire is usually drawn of cylindrical form; but it may be made of any desired section by varying the outline of the holes in the draw-plate through which it is passed in the process of manufacture. The draw-plate or die is a piece of hard cast-iron or hard steel, or for fine work it may be a diamond or a ruby. The object of utilising precious stones is to enable the dies to be used for a considerable period without losing their size, and so producing wire of incorrect diameter. Diamond dies must be rebored when they have lost their original diameter of hole, but metal dies are brought down to size again by hammering up the hole and then drifting it out to correct diameter with a punch." Production: Wire is often reduced to the desired diameter and properties by repeated drawing through progressively smaller dies, or traditionally holes in draw plates. After a number of passes the wire may be annealed to facilitate more drawing or, if it is a finished product, to maximise ductility and conductivity. Electrical wires are usually covered with insulating materials, such as plastic, rubber-like polymers, or varnish. Insulating and jacketing of wires and cables is nowadays done by passing them through an extruder. Formerly, materials used for insulation included treated cloth or paper and various oil-based products. Since the mid-1960s, plastic and polymers exhibiting properties similar to rubber have predominated. Production: Two or more wires may be wrapped concentrically, separated by insulation, to form coaxial cable. The wire or cable may be further protected with substances like paraffin, some kind of preservative compound, bitumen, lead, aluminum sheathing, or steel taping. Stranding or covering machines wind material onto wire which passes through quickly. Some of the smallest machines for cotton covering have a large drum, which grips the wire and moves it through toothed gears; the wire passes through the centre of disks mounted above a long bed, and the disks carry each a number of bobbins varying from six to twelve or more in different machines. A supply of covering material is wound on each bobbin, and the end is led on to the wire, which occupies a central position relatively to the bobbins; the latter being revolved at a suitable speed bodily with their disks, the cotton is consequently served on to the wire, winding in spiral fashion so as to overlap. If many strands are required the disks are duplicated, so that as many as sixty spools may be carried, the second set of strands being laid over the first.For heavier cables that are used for electric light and power as well as submarine cables, the machines are somewhat different in construction. The wire is still carried through a hollow shaft, but the bobbins or spools of covering material are set with their spindles at right angles to the axis of the wire, and they lie in a circular cage which rotates on rollers below. The various strands coming from the spools at various parts of the circumference of the cage all lead to a disk at the end of the hollow shaft. This disk has perforations through which each of the strands pass, thence being immediately wrapped on the cable, which slides through a bearing at this point. 
Toothed gears having certain definite ratios are used to cause the winding drum for the cable and the cage for the spools to rotate at suitable relative speeds which do not vary. The cages are multiplied for stranding with many tapes or strands, so that a machine may have six bobbins on one cage and twelve on the other. Forms: Solid Solid wire, also called solid-core or single-strand wire, consists of one piece of metal wire. Solid wire is useful for wiring breadboards. Solid wire is cheaper to manufacture than stranded wire and is used where there is little need for flexibility in the wire. Solid wire also provides mechanical ruggedness; and, because it has relatively less surface area which is exposed to attack by corrosives, protection against the environment. Forms: Stranded Stranded wire is composed of a number of small wires bundled or wrapped together to form a larger conductor. Stranded wire is more flexible than solid wire of the same total cross-sectional area. Stranded wire is used when higher resistance to metal fatigue is required. Such situations include connections between circuit boards in multi-printed-circuit-board devices, where the rigidity of solid wire would produce too much stress as a result of movement during assembly or servicing; A.C. line cords for appliances; musical instrument cables; computer mouse cables; welding electrode cables; control cables connecting moving machine parts; mining machine cables; trailing machine cables; and numerous others. At high frequencies, current travels near the surface of the wire because of the skin effect, resulting in increased power loss in the wire. Stranded wire might seem to reduce this effect, since the total surface area of the strands is greater than the surface area of the equivalent solid wire, but ordinary stranded wire does not reduce the skin effect because all the strands are short-circuited together and behave as a single conductor. A stranded wire will have higher resistance than a solid wire of the same diameter because the cross-section of the stranded wire is not all copper; there are unavoidable gaps between the strands (this is the circle packing problem for circles within a circle). A stranded wire with the same cross-section of conductor as a solid wire is said to have the same equivalent gauge and is always a larger diameter. However, for many high-frequency applications, proximity effect is more severe than skin effect, and in some limited cases, simple stranded wire can reduce proximity effect. For better performance at high frequencies, litz wire, which has the individual strands insulated and twisted in special patterns, may be used. Forms: The more individual wire strands in a wire bundle, the more flexible, kink-resistant, break-resistant, and stronger the wire becomes. However, more strands increases manufacturing complexity and cost. For geometrical reasons, the lowest number of strands usually seen is 7: one in the middle, with 6 surrounding it in close contact. The next level up is 19, which is another layer of 12 strands on top of the 7. After that the number varies, but 37 and 49 are common, then in the 70 to 100 range (the number is no longer exact). Larger numbers than that are typically found only in very large cables. For application where the wire moves, 19 is the lowest that should be used (7 should only be used in applications where the wire is placed and then does not move), and 49 is much better. 
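As a rough worked illustration of why the skin effect matters (the resistivity and permeability are standard textbook values assumed for this sketch, not figures from the article), the depth at which current density falls to 1/e of its surface value can be estimated from the resistivity of copper and the signal frequency:

```python
# Rough skin-depth estimate for copper wire: delta = sqrt(rho / (pi * f * mu)).
# Values are common textbook constants, assumed here for illustration only.
import math

rho_copper = 1.68e-8        # resistivity of copper, ohm*m (approximate)
mu = 4 * math.pi * 1e-7     # permeability of copper ~ mu_0, H/m

def skin_depth_mm(freq_hz):
    return math.sqrt(rho_copper / (math.pi * freq_hz * mu)) * 1e3

for f in (60, 1e4, 1e6):
    print(f"{f:>10.0f} Hz -> skin depth ~ {skin_depth_mm(f):.3f} mm")
# ~8.4 mm at 60 Hz, ~0.65 mm at 10 kHz, ~0.065 mm at 1 MHz: at high frequencies
# only a thin outer shell of a solid conductor carries most of the current.
```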
For applications with constant repeated movement, such as assembly robots and headphone wires, 70 to 100 is mandatory. For applications that need even more flexibility, even more strands are used (welding cables are the usual example, but also any application that needs to move wire in tight areas). One example is a 2/0 wire made from 5,292 strands of No. 36 gauge wire. The strands are organized by first creating a bundle of 7 strands. Then 7 of these bundles are put together into super bundles. Finally 108 super bundles are used to make the final cable. Each group of wires is wound in a helix so that when the wire is flexed, the part of a bundle that is stretched moves around the helix to a part that is compressed to allow the wire to have less stress. Forms: Prefused wire is stranded wire made up of strands that are heavily tinned, then fused together. Prefused wire has many of the properties of solid wire, except it is less likely to break. Braided A braided wire consists of a number of small strands of wire braided together. Braided wires do not break easily when flexed. Braided wires are often suitable as an electromagnetic shield in noise-reduction cables. Uses: Wire has many uses. It forms the raw material of many important manufacturers, such as the wire netting industry, engineered springs, wire-cloth making and wire rope spinning, in which it occupies a place analogous to a textile fiber. Wire-cloth of all degrees of strength and fineness of mesh is used for sifting and screening machinery, for draining paper pulp, for window screens, and for many other purposes. Vast quantities of aluminium, copper, nickel and steel wire are employed for telephone and data cables, and as conductors in electric power transmission, and heating. It is in no less demand for fencing, and much is consumed in the construction of suspension bridges, and cages, etc. In the manufacture of stringed musical instruments and scientific instruments, wire is again largely used. Carbon and stainless spring steel wire have significant applications in engineered springs for critical automotive or industrial manufactured parts/components. Pin and hairpin making; the needle and fish-hook industries; nail, peg, and rivet making; and carding machinery consume large amounts of wire as feedstock.Not all metals and metallic alloys possess the physical properties necessary to make useful wire. The metals must in the first place be ductile and strong in tension, the quality on which the utility of wire principally depends. The principal metals suitable for wire, possessing almost equal ductility, are platinum, silver, iron, copper, aluminium, and gold; and it is only from these and certain of their alloys with other metals, principally brass and bronze, that wire is prepared.By careful treatment, extremely thin wire can be produced. Special purpose wire is however made from other metals (e.g. tungsten wire for light bulb and vacuum tube filaments, because of its high melting temperature). Copper wires are also plated with other metals, such as tin, nickel, and silver to handle different temperatures, provide lubrication, and provide easier stripping of rubber insulation from copper. Uses: Metallic wires are often used for the lower-pitched sound-producing "strings" in stringed instruments, such as violins, cellos, and guitars, and percussive string instruments such as pianos, dulcimers, dobros, and cimbaloms. 
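The layer counts quoted above (7, 19, 37, …) follow directly from hexagonal packing, and the 5,292-strand example is simply 7 × 7 × 108. A short check, offered only as a reading aid:

```python
# The "geometrical reasons" behind common strand counts: each new layer packed
# around a central strand adds 6 more strands than the previous layer, giving
# the centred hexagonal numbers 1, 7, 19, 37, 61, ...
def hex_packed(layers):
    return 1 + sum(6 * i for i in range(1, layers + 1))

print([hex_packed(n) for n in range(4)])   # [1, 7, 19, 37]

# 49 is instead a bundle of 7 bundles of 7, and the 2/0 example in the text is
# 108 "super bundles" of 7 bundles of 7 strands:
print(7 * 7, 7 * 7 * 108)                  # 49 and 5292
```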
To increase the mass per unit length (and thus lower the pitch of the sound even further), the main wire may sometimes be helically wrapped with another, finer strand of wire. Such musical strings are said to be "overspun"; the added wire may be circular in cross-section ("round-wound") or flattened before winding ("flat-wound"). Uses: Examples include: Hook-up wire is small-to-medium gauge, solid or stranded, insulated wire, used for making internal connections inside electrical or electronic devices. It is often tin-plated to improve solderability. Wire bonding is the application of microscopic wires for making electrical connections inside semiconductor components and integrated circuits. Uses: Magnet wire is solid wire, usually copper, which, to allow closer winding when making electromagnetic coils, is insulated only with varnish rather than the thicker plastic or other insulation commonly used on electrical wire. It is used for the winding of motors, transformers, inductors, generators, speaker coils, etc. (For further information about copper magnet wire, see Copper wire and cable#Magnet wire (Winding wire).) Uses: Coaxial cable is a cable consisting of an inner conductor, surrounded by a tubular insulating layer typically made from a flexible material with a high dielectric constant, all of which is then surrounded by another conductive layer (typically of fine woven wire for flexibility, or of a thin metallic foil), and then finally covered again with a thin insulating layer on the outside. The term coaxial comes from the inner conductor and the outer shield sharing the same geometric axis. Coaxial cables are often used as a transmission line for radio frequency signals. In a hypothetical ideal coaxial cable, the electromagnetic field carrying the signal exists only in the space between the inner and outer conductors. Practical cables achieve this objective to a high degree. A coaxial cable therefore provides extra protection of signals from external electromagnetic interference and effectively guides signals, with low emission, along the length of the cable. Uses: Speaker wire is used to make a low-resistance electrical connection between loudspeakers and audio amplifiers. Some high-end modern speaker wire consists of multiple electrical conductors individually insulated by plastic, similar to Litz wire. Resistance wire is wire with higher-than-normal resistivity, often used for heating elements or for making wire-wound resistors. Nichrome wire is the most common type.
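Since resistance wire is selected by resistivity, the basic sizing calculation is simply R = ρL/A. The C sketch below applies it to a hypothetical nichrome heating element; the resistivity figure, wire dimensions and supply voltage are assumed example values, not data from the article.

```c
/* Sizing a heating element from resistance wire: R = rho * L / A.
 * Sketch only: the nichrome resistivity, wire dimensions and mains
 * voltage below are assumed example values, not data from the article. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    double rho = 1.10e-6;      /* nichrome resistivity, ohm*m (typical) */
    double d   = 0.30e-3;      /* wire diameter, m                      */
    double len = 3.4;          /* wire length, m                        */

    double area = M_PI * d * d / 4.0;      /* cross-sectional area, m^2 */
    double r    = rho * len / area;        /* element resistance, ohms  */

    double v = 230.0;                      /* assumed supply voltage    */
    double p = v * v / r;                  /* dissipated power, watts   */

    printf("Element resistance: %.1f ohm\n", r);
    printf("Power at %.0f V:    %.0f W\n", v, p);
    return 0;
}
```

With these assumed values, roughly 3.4 m of 0.3 mm nichrome gives about 53 Ω and therefore a ~1 kW element at 230 V, which is why heating elements are wound from long coils of fine resistance wire rather than from short lengths of low-resistivity copper.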
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stargate (asterism)**
Stargate (asterism): The Stargate Asterism or Stargate Cluster is an asterism in the constellation Corvus consisting of six stars, also known as STF 1659. The stars form the vertices of two nested triangles, resembling a portal device featured in the Buck Rogers science fiction TV series.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**C to HDL**
C to HDL: C to HDL tools convert C language or C-like computer code into a hardware description language (HDL) such as VHDL or Verilog. The converted code can then be synthesized and translated into a hardware device such as a field-programmable gate array. Compared to software, equivalent designs in hardware consume less power (yielding higher performance per watt) and execute faster, with lower latency, more parallelism and higher throughput. However, system design and functional verification in a hardware description language can be tedious and time-consuming, so systems engineers often write critical modules in HDL and other modules in a high-level language, synthesizing the latter into HDL through C to HDL or high-level synthesis tools. C to RTL is another name for this methodology; RTL refers to the register-transfer-level representation of a program necessary to implement it in logic. History: Early development on C to HDL was done by Ian Page, Charles Sweeney and colleagues at Oxford University in the 1990s, who developed the Handel-C language. They commercialized their research by forming Embedded Solutions Limited (ESL) in 1999, which was renamed Celoxica in September 2000. In 2008, the embedded systems division of Celoxica was sold to Catalytic for $3 million, which later merged to become Agility Computing. In January 2009, Mentor Graphics acquired Agility's C synthesis assets. Celoxica continues to trade, concentrating on hardware acceleration in the financial and other industries. Applications: C to HDL techniques are most commonly applied to applications that have unacceptably high execution times on existing general-purpose supercomputer architectures. Examples include bioinformatics, computational fluid dynamics (CFD), financial processing, and oil and gas survey data analysis. Embedded applications requiring high performance or real-time data processing are also an area of use. System-on-chip (SoC) design may also take advantage of C to HDL techniques. Applications: C-to-VHDL compilers are very useful for large designs or for implementing code that might change in the future. Designing a large application entirely in HDL may be very difficult and time-consuming; the abstraction of a high-level language for such a large application will often reduce total development time. Furthermore, an application coded in HDL will almost certainly be more difficult to modify than one coded in a higher-level language. If the designer needs to add new functionality to the application, adding a few lines of C code will almost always be easier than remodeling the equivalent HDL code. Applications: Flow to HDL tools have a similar aim, but with flow-based rather than C-based design. Example tools: LegUp, an open-source ANSI C to Verilog tool based on the LLVM compiler; LegUp, a commercial variant of the open-source LegUp; VHDP, simplified VHDL with support for procedural programming; bambu, a free and open-source ANSI C to Verilog tool based on the GCC compiler, from the PandA website; CBG CtoV, a tool developed 1995-99 by DJ Greaves (Univ Cambridge) that instantiated RAMs and interpreted various SystemC constructs and datatypes. Example tools: C-to-Verilog tool (NISC) from University of California, Irvine; Altium Designer 6.9 and 7.0 (a.k.a.
Summer 08) from Altium; Nios II C-to-Hardware Acceleration Compiler from Altera; Catapult C tool from Mentor Graphics; Cynthesizer from Forte Design Systems; SystemC from Celoxica (defunct); Handel-C from Celoxica (defunct); DIME-C from Nallatech; Impulse C from Impulse Accelerated Technologies; FpgaC, an open-source initiative; SA-C programming language; Cascade (C to RTL synthesizer) from CriticalBlue; Mitrion-C from Mitrionics; C2R Compiler from Cebatech; PICO Express from Synfora; SPARK (a C-to-VHDL tool) from University of California, San Diego; Hardware Compile Environment (HCE) from Accelize (formerly HARWEST Compiling Environment from Ylichron); HercuLeS (C/assembly-to-VHDL) tool; VLSI/VHDL CAD Group Index of Useful Tools from the CWRU homepage; DWARV, part of the research project 'Delft Work Bench' and used in the 'hArtes tool chain'; MyHDL, a Python-subset compiler and simulator targeting VHDL and Verilog; Trident (C to VHDL) from trident.sourceforge.net; Vsyn (C to Verilog, Russian project); Instant SoC by FPGA Cores, which generates a SoC with a RISC-V core, peripherals and memories directly from C++. Example tools: PipelineC, a C-like hardware description language adding high-level-synthesis-like automatic pipelining as a language construct/compiler feature.
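To make the input side of these tools concrete, here is a small, hypothetical C function in the style most C-to-HDL / high-level synthesis flows accept: statically sized arrays, a fixed loop bound, no pointers or recursion. The function name, the sizes and the accompanying software test are illustrative assumptions; each real tool (Catapult C, LegUp, bambu, etc.) imposes its own coding rules and directives.

```c
/* Hypothetical C input for a C-to-HDL / high-level synthesis flow.
 * A fixed-bound loop over statically sized arrays is the style such
 * tools handle best: the loop can be unrolled or pipelined into a
 * chain of multiply-accumulate units, and the arrays mapped to
 * on-chip RAM or registers. Names and sizes here are illustrative. */
#include <stdio.h>

#define TAPS 8

int fir_filter(const int sample[TAPS], const int coeff[TAPS]) {
    int acc = 0;
    /* Each iteration corresponds to one multiply-accumulate stage in
     * the generated hardware; a synthesizer may pipeline these so a
     * new result is produced every clock cycle. */
    for (int i = 0; i < TAPS; i++) {
        acc += sample[i] * coeff[i];
    }
    return acc;
}

/* Plain software run of the same function, as one would use to verify
 * behaviour before synthesis. */
int main(void) {
    int x[TAPS] = {1, 2, 3, 4, 5, 6, 7, 8};
    int h[TAPS] = {1, 1, 1, 1, 1, 1, 1, 1};
    printf("fir output: %d\n", fir_filter(x, h)); /* expected: 36 */
    return 0;
}
```

An equivalent hand-written Verilog or VHDL module would describe the same multiply-accumulate datapath register by register; avoiding that manual translation, while keeping the C version as the verification model, is precisely the productivity argument made above.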
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded