**Preferred walking speed**
Preferred walking speed:
The preferred walking speed is the speed at which humans or animals choose to walk. Many people tend to walk at about 1.42 metres per second (5.1 km/h; 3.2 mph; 4.7 ft/s). Individuals find slower or faster speeds uncomfortable.
Preferred walking speed:
Horses have also demonstrated normal, narrow distributions of preferred walking speed within a given gait, which suggests that the process of speed selection may follow similar patterns across species. Preferred walking speed has important clinical applications as an indicator of mobility and independence. For example, elderly people or people suffering from osteoarthritis must walk more slowly, and improving (increasing) preferred walking speed is a significant clinical goal in these populations. Mechanical, energetic, physiological and psychological factors have all been suggested as contributors to speed selection. Individuals likely face a trade-off between the numerous costs associated with different walking speeds and select a speed that minimizes these costs: for example, they may trade off time to destination, which is minimized at fast walking speeds, against metabolic rate, muscle force and joint stress, which are minimized at slower walking speeds. Broadly, increasing value of time, motivation, or metabolic efficiency may cause people to walk more quickly; conversely, aging, joint pain, instability, incline, metabolic rate and visual decline cause people to walk more slowly.
Value of time:
Commonly, individuals place some value on their time. Economic theory therefore predicts that value-of-time is a key factor influencing preferred walking speed.
Value of time:
Levine and Norenzayan (1999) measured preferred walking speeds of urban pedestrians in 31 countries and found that walking speed is positively correlated with the country's per capita GDP and purchasing power parity, as well as with a measure of individualism in the country's society. Affluence plausibly correlates with the value people actually place on time spent walking, which may explain why people in affluent countries tend to walk more quickly.
Value of time:
This idea is broadly consistent with common intuition. Everyday situations often change the value of time. For example, when walking to catch a bus, the one minute immediately before the bus departs may be worth 30 minutes of time (the time saved by not waiting for the next bus). Supporting this idea, Darley and Batson showed that individuals who are hurried under experimental conditions are less likely to stop in response to a distraction, and so arrive at their destination sooner.
Energetics:
Energy minimization is widely considered a primary goal of the central nervous system. The rate at which an organism expends metabolic energy while walking (gross metabolic rate) increases nonlinearly with increasing speed. However, organisms also require a continuous basal metabolic rate to maintain normal function. The energetic cost of walking itself is therefore best understood by subtracting basal metabolic rate from total metabolic rate, yielding net metabolic rate. In human walking, net metabolic rate also increases nonlinearly with speed. These measures of walking energetics are based on how much oxygen people consume per unit time. Many locomotion tasks, however, require walking a fixed distance rather than for a set time. Dividing gross metabolic rate by walking speed results in gross cost of transport. For human walking, gross cost of transport is U-shaped when plotted against speed. Similarly, dividing net metabolic rate by walking speed yields a U-shaped net cost of transport. These curves reflect the cost of moving a given distance at a given speed and may better reflect the energetic cost associated with walking.
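To make the relationship between these measures concrete, the sketch below computes gross and net cost of transport from a metabolic rate and a walking speed. The numerical values are illustrative assumptions, not measured data.

```python
# Illustrative sketch: cost of transport = metabolic rate / walking speed.
# All numbers below are assumed example values, not measurements.

def cost_of_transport(metabolic_rate_w_per_kg, speed_m_per_s):
    """Energy per unit body mass per unit distance, in J/(kg*m)."""
    return metabolic_rate_w_per_kg / speed_m_per_s

speed = 1.4                          # m/s, a typical preferred walking speed
gross_rate = 4.0                     # W/kg, assumed gross metabolic rate at this speed
basal_rate = 1.2                     # W/kg, assumed basal (resting) metabolic rate
net_rate = gross_rate - basal_rate   # net metabolic rate excludes the basal component

print(cost_of_transport(gross_rate, speed))  # gross cost of transport
print(cost_of_transport(net_rate, speed))    # net cost of transport
```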
Energetics:
Ralston (1958) showed that humans tend to walk at or near the speed that minimizes gross cost of transport. He showed that gross cost of transport is minimized at about 1.23 m/s (4.4 km/h; 2.8 mph), which corresponded to the preferred speed of his subjects. Supporting this, Wickler et al. (2000) showed that the preferred speed of horses, both uphill and on the level, corresponds closely to the speed that minimizes their gross cost of transport. Together with other gait costs that walkers appear to minimize, this observation has led many to suggest that people minimize cost and maximize efficiency during locomotion. Because gross cost of transport incorporates walking speed, it embeds an inherent value of time. Subsequent research suggests that individuals may walk marginally faster than the speed that minimizes gross cost of transport under some experimental setups, although this may be due to how preferred walking speed was measured. In contrast, other researchers have suggested that gross cost of transport may not represent the metabolic cost of walking: people must expend their basal metabolic rate regardless of whether they are walking, so the metabolic cost of walking arguably should not include it. Some researchers have therefore used net metabolic rate instead of gross metabolic rate to characterize the cost of locomotion. Net cost of transport reaches a minimum at about 1.05 m/s (3.8 km/h; 2.3 mph); healthy pedestrians walk faster than this in many situations.
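The following sketch illustrates why the two optima differ. It assumes a simple quadratic model of walking metabolic rate (the coefficients are placeholders, not fitted values) and scans speeds for the lowest cost of transport; under any such model the gross optimum falls at a higher speed than the net optimum, qualitatively matching the figures quoted above.

```python
# Minimal sketch using an assumed (not fitted) rate model of the form
#   gross rate = basal + overhead + k * v**2   [W/kg]
# to show that the speed minimizing gross cost of transport exceeds the
# speed minimizing net cost of transport.

BASAL = 1.2      # W/kg, assumed basal metabolic rate
OVERHEAD = 2.2   # W/kg, assumed speed-independent cost of walking itself
K = 2.0          # W/kg per (m/s)^2, assumed speed-dependent coefficient

def gross_rate(v):
    return BASAL + OVERHEAD + K * v ** 2

def best_speed(subtract_basal):
    speeds = [0.3 + 0.01 * i for i in range(250)]            # 0.3 to 2.8 m/s
    baseline = BASAL if subtract_basal else 0.0
    cot = [(gross_rate(v) - baseline) / v for v in speeds]   # cost of transport, J/(kg*m)
    return round(speeds[cot.index(min(cot))], 2)

print("gross optimum:", best_speed(subtract_basal=False))   # higher speed
print("net optimum:  ", best_speed(subtract_basal=True))    # lower speed
```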
Energetics:
Metabolic input rate may also directly limit preferred walking speed. Aging is associated with reduced aerobic capacity (reduced VO2 max). Malatesta et al. (2004) suggest that walking speed in elderly individuals is limited by aerobic capacity: elderly individuals are unable to walk faster because they cannot sustain that level of activity. For example, 80-year-old individuals walk at about 60% of their VO2 max even at speeds significantly slower than those observed in younger individuals.
Biomechanics:
Biomechanical factors such as mechanical work, stability, and joint or muscle forces may also influence human walking speed. Walking faster requires additional external mechanical work per step. Similarly, swinging the legs relative to the center of mass requires some internal mechanical work. As faster walking is accomplished with both longer and faster steps, internal mechanical work also increases with increasing walking speed. Therefore, both internal and external mechanical work per step increase with increasing speed. Individuals may try to reduce either external or internal mechanical work by walking more slowly, or may select a speed at which mechanical energy recovery is at a maximum. Stability may be another factor influencing speed selection. Hunter et al. (2010) showed that individuals use energetically suboptimal gaits when walking downhill, and suggest that people may instead be choosing gait parameters that maximize stability while walking downhill. This suggests that under adverse conditions such as downhill slopes, gait patterns may favor stability over speed. Individual joint and muscle biomechanics also directly affect walking speed. Norris showed that elderly individuals walked faster when their ankle extensors were augmented by an external pneumatic muscle. Muscle force, specifically in the gastrocnemius and/or soleus, may limit walking speed in certain populations and lead to slower preferred speeds. Similarly, patients with ankle osteoarthritis walked faster after a complete ankle replacement than before. This suggests that reducing joint reaction forces or joint pain may factor into speed selection.
Visual flow:
The rate at which the environment flows past the eyes seems to be a mechanism for regulating walking speed. In virtual environments, the gain in visual flow can be decoupled from a person's actual walking speed, much as one might experience when walking on a conveyor belt. There, the environment flows past an individual more quickly than their walking speed would predict (higher than normal visual gain). At higher than normal visual gains, individuals prefer to walk more slowly, while at lower than normal visual gains, individuals prefer to walk more quickly. This behavior is consistent with returning the visually observed speed back toward the preferred speed and suggests that vision is used correctively to maintain walking speed at a value that is perceived to be optimal. Moreover, the dynamics of this visual influence on preferred walking speed are rapid—when visual gains are changed suddenly, individuals adjust their speed within a few seconds. The timing and direction of these responses strongly indicate that a rapid predictive process informed by visual feedback helps select preferred speed, perhaps to complement a slower optimization process that directly senses metabolic rate and iteratively adapts gait to minimize it.
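A minimal sketch of that corrective idea, assuming perceived speed is simply the product of visual gain and actual speed: the walker picks the speed at which the perceived speed equals the preferred value, so higher gains produce slower walking and lower gains produce faster walking.

```python
# Hypothetical sketch of speed correction under altered visual gain.
# The preferred speed and the linear perception model are assumptions.

PREFERRED = 1.4   # m/s, speed the walker perceives as optimal

def corrected_speed(visual_gain):
    """Speed at which perceived optic-flow speed matches the preferred speed."""
    # perceived speed = visual_gain * actual speed, so actual = preferred / gain
    return PREFERRED / visual_gain

print(corrected_speed(1.0))   # normal gain       -> 1.4 m/s
print(corrected_speed(1.5))   # fast visual flow  -> walk slower (~0.93 m/s)
print(corrected_speed(0.7))   # slow visual flow  -> walk faster (2.0 m/s)
```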
As exercise:
With the wide availability of inexpensive pedometers, medical professionals recommend walking as an exercise for cardiac health and/or weight loss. The NIH gives the following guidelines: Based on currently available evidence, we propose the following preliminary indices be used to classify pedometer-determined physical activity in healthy adults: (i) <5,000 steps/day may be used as a 'sedentary lifestyle index'; (ii) 5,000-7,499 steps/day is typical of daily activity excluding sports/exercise and might be considered 'low active'; (iii) 7,500-9,999 likely includes some volitional activities (and/or elevated occupational activity demands) and might be considered 'somewhat active'; and (iv) ≥10,000 steps/day indicates the point that should be used to classify individuals as 'active'. Individuals who take >12,500 steps/day are likely to be classified as 'highly active'.
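The quoted thresholds translate directly into a classification function; a minimal sketch:

```python
def activity_index(steps_per_day: int) -> str:
    """Classify pedometer-determined activity using the thresholds quoted above."""
    if steps_per_day < 5000:
        return "sedentary lifestyle index"
    if steps_per_day < 7500:
        return "low active"
    if steps_per_day < 10000:
        return "somewhat active"
    if steps_per_day <= 12500:
        return "active"
    return "highly active"

print(activity_index(6200))    # low active
print(activity_index(13000))   # highly active
```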
As exercise:
The situation becomes slightly more complex when preferred walking speed is introduced: the faster the pace, the more calories are burned if weight loss is the goal. Maximum heart rate for exercise (commonly estimated as 220 minus age), compared against charts of "fat burning" target zones, supports many of the references that give an average of 1.4 m/s (3.1 mph) as falling within this target range. Pedometers register about 100 steps per minute in this range (depending on individual stride), so a daily total of 10,000 or more steps takes roughly one and a half to two hours (100 minutes at 100 steps per minute would be 10,000 steps).
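The arithmetic behind those figures, using the rule-of-thumb heart rate estimate and cadence mentioned above (the age chosen here is an arbitrary example):

```python
AGE = 40
max_heart_rate = 220 - AGE             # common rule-of-thumb estimate: 180 bpm

cadence = 100                          # steps per minute at roughly 1.4 m/s (3.1 mph)
daily_goal = 10000                     # steps
minutes_needed = daily_goal / cadence  # 100 minutes, i.e. about 1 h 40 min

print(max_heart_rate, minutes_needed)
```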
In urban design:
The typical walking speed of 1.4 metres per second (5.0 km/h; 3.1 mph; 4.6 ft/s) is recommended by design guides including the Design Manual for Roads and Bridges. Transport for London recommends 1.33 metres per second (4.8 km/h; 3.0 mph; 4.4 ft/s) in the PTAL methodology.
**Shaper**
Shaper:
In machining, a shaper is a type of machine tool that uses linear relative motion between the workpiece and a single-point cutting tool to machine a linear toolpath. Its cut is analogous to that of a lathe, except that it is (archetypally) linear instead of helical. A wood shaper is a functionally different woodworking tool, typically with a powered rotating cutting head and manually fed workpiece, usually known simply as a shaper in North America and spindle moulder in the UK.
Shaper:
A metalworking shaper is somewhat analogous to a metalworking planer, with the cutter riding a ram that moves relative to a stationary workpiece, rather than the workpiece moving beneath the cutter. The ram is typically actuated by a mechanical crank inside the column, though hydraulically actuated shapers are increasingly used. Adding axes of motion to a shaper can yield helical tool paths, as also done in helical planing.
Process:
A single-point cutting tool is rigidly held in the tool holder, which is mounted on the ram. The workpiece is rigidly held in a vise or clamped directly on the table. The table may be supported at the outer end. The ram reciprocates, so the cutting tool held in the tool holder moves forwards and backwards over the workpiece. In a standard shaper, cutting takes place during the forward stroke of the ram while the return stroke is idle; the return is governed by a quick return mechanism. The depth of cut is incremented by moving the workpiece, and the workpiece is fed by a pawl and ratchet mechanism.
Types:
Shapers are mainly classified as standard, draw-cut, horizontal, universal, vertical, geared, crank, hydraulic, contour and traveling head, with a horizontal arrangement most common. Vertical shapers are generally fitted with a rotary table to enable curved surfaces to be machined (the same idea as in helical planing). The vertical shaper is essentially the same thing as a slotter (slotting machine), although technically a distinction can be made if one defines a true vertical shaper as a machine whose slide can be moved from the vertical, whereas a slotter is fixed in the vertical plane.
Operation:
The workpiece mounts on a rigid, box-shaped table in front of the machine. The height of the table can be adjusted to suit the workpiece, and the table can traverse sideways underneath the reciprocating tool, which is mounted on the ram. Table motion may be controlled manually, but is usually advanced by an automatic feed mechanism acting on the feedscrew. The ram slides back and forth above the work. At the front end of the ram is a vertical tool slide that may be adjusted to either side of the vertical plane along the stroke axis. This tool slide holds the clapper box and tool post, from which the tool can be positioned to cut a straight, flat surface on the top of the workpiece. The tool slide permits feeding the tool downwards to deepen a cut. This flexibility, coupled with the use of specialized cutters and tool holders, enables the operator to cut internal and external gear teeth.
Operation:
The ram is adjustable for stroke and, due to the geometry of the linkage, it moves faster on the return (non-cutting) stroke than on the forward, cutting stroke. This return stroke is governed by a quick return mechanism.
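As a rough illustration of what a quick-return ratio means for cycle time, the sketch below splits one ram cycle into cutting and return phases. The ratio and stroke rate are assumed examples; actual values depend on the crank and linkage geometry of the particular machine.

```python
# Illustrative sketch only; the ratio and stroke rate are assumptions.

strokes_per_minute = 60
quick_return_ratio = 1.5   # assumed: cutting stroke takes 1.5x as long as the return

cycle_time = 60.0 / strokes_per_minute                            # seconds per full cycle
cutting_time = cycle_time * quick_return_ratio / (1 + quick_return_ratio)
return_time = cycle_time - cutting_time

print(round(cutting_time, 2), round(return_time, 2))   # e.g. 0.6 s cutting, 0.4 s return
```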
Uses:
The most common use is to machine straight, flat surfaces, but with ingenuity and some accessories a wide range of work can be done. Other examples of its use are:
- Keyways in the boss of a pulley or gear, which can be machined without resorting to a dedicated broaching setup
- Dovetail slides
- Internal splines and gear teeth
Uses:
- Keyway, spline, and gear tooth cutting in blind holes
- Cam drums with toolpaths of the type that, in CNC milling terms, would require 4- or 5-axis contouring or turn-mill cylindrical interpolation
It is even possible to obviate wire EDM work in some cases. Starting from a drilled or cored hole, a shaper with a boring-bar type tool can cut internal features that do not lend themselves to milling or boring (such as irregularly shaped holes with tight corners).
History:
Samuel Bentham developed a shaper between 1791 and 1793. However, Roe (1916) credits James Nasmyth with the invention of the shaper in 1836. Shapers were very common in industrial production from the mid-19th century through the mid-20th. In current industrial practice, shapers have been largely superseded by other machine tools (especially of the CNC type), including milling machines, grinding machines, and broaching machines. But the basic function of a shaper is still sound; tooling for them is minimal and very cheap to reproduce; and they are simple and robust in construction, making their repair and upkeep easily achievable. Thus they are still popular in many machine shops, from jobbing shops or repair shops to tool and die shops, where only one or a few pieces are required to be produced and the alternative methods are cost- or tooling-intensive. They also have considerable retro appeal to many hobbyist machinists, who are happy to obtain a used shaper or, in some cases, even to build a new one from scratch.
**Lightbot**
Lightbot:
Lightbot is an educational video game for learning software programming concepts, developed by Danny Yaroslavski. It has been played 7 million times and is highly rated on the iTunes and Google Play stores. Lightbot is available as an online Flash game and as an application for Android and iOS mobile phones, and has been built with Flash and OpenFL.
Lightbot:
The goal of Lightbot is to command a little robot to navigate a maze and turn on lights. Players arrange symbols on the screen to command the robot to walk, turn, jump, switch on a light and so on. The maze and the list of symbols become more complicated as the lessons progress. While using such commands, players learn programming concepts like loops, procedures and more, without entering code in any programming language. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
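A hypothetical sketch of how such a command list might be interpreted, including a named procedure that is reused to act like a loop; this is for illustration only and is not the game's actual implementation.

```python
# Hypothetical Lightbot-style interpreter; command names and world model are assumptions.

def run(commands, procedures, robot):
    for cmd in commands:
        if cmd == "walk":
            robot["pos"] += 1                        # move one square forward
        elif cmd == "light":
            robot["lit"].add(robot["pos"])           # switch on the light at this square
        elif cmd in procedures:
            run(procedures[cmd], procedures, robot)  # call a named procedure

procedures = {"P1": ["walk", "walk", "light"]}
robot = {"pos": 0, "lit": set()}
run(["P1", "P1"], procedures, robot)                 # repeating P1 behaves like a loop
print(robot)                                         # {'pos': 4, 'lit': {2, 4}}
```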
**Consolidated Tape System**
Consolidated Tape System:
The Consolidated Tape System (CTS) is the electronic service, introduced in April 1976, that provides last sale and trade data for issues admitted to dealings on the American Stock Exchange, New York Stock Exchange, and U.S. regional stock exchanges. The Consolidated Tape Association (CTA) is the operating authority for both the Consolidated Quotation System (CQS) and the Consolidated Tape System (CTS). In July 2023, the UK's Financial Conduct Authority (FCA) announced it was setting up a 'consolidated tape' system for City traders. As well as cutting costs and improving the quality of data, the FCA said the reforms would "increase transparency and access to trading data". The consolidated tape system is to be set up initially for the UK's bonds market, followed by equities. A competitive tender process is to be opened that would see a single firm providing the CT for bonds.
**Doug Madory**
Doug Madory:
Doug Madory is an American Internet routing infrastructure expert, who specializes in analyzing Internet Border Gateway Protocol (BGP) routing data to diagnose Internet routing disruptions, such as those caused by communications fiber cable cuts, routing equipment failures, and governmental censorship. His academic background is in computer engineering, and he was a signals specialist in the U.S. Air Force, before arriving at his present specialty, which has occupied his professional career.
Education:
Madory received a bachelor's degree in computer engineering from the University of Virginia in 1999. He received a master's degree in computer engineering from Dartmouth College in 2006.
Career:
Madory joined Internet intelligence and technical analysis firm Renesys in 2009. Renesys was sold to DynDNS in May 2014, which in turn was sold to Oracle in April 2017. Madory remained in the same Director of Internet Analysis position throughout each of these transitions, before leaving Oracle to join Kentik in November 2020, in much the same role.
Discoveries:
Madory is best known for the discoveries that are the product of his Internet routing analysis: sometimes of interesting new phenomena on the Internet and sometimes of malfeasance online.
Discoveries:
ALBA-1 cable activation: In 2013, Madory observed that Internet connection speeds in Cuba had suddenly improved. His investigation revealed that the ALBA-1 undersea fiber cable, which had been run from Venezuela to Cuba by the Venezuelan government in 2010 and 2011, had been activated following an unexplained dormancy of two years. This cable, linking the Cuban domestic network to the Internet via Telefonica, was Cuba's first non-satellite international connection, and was a major milestone in Cuba's liberalization. Uncharacteristically, the Cuban state organ Granma issued a confirmation two days later.
Discoveries:
National Internet shutdowns to prevent exam cheating: Madory observed daily nationwide Internet shutdowns in Iraq for three hours each morning for several consecutive days, on the same dates in 2014 and 2015, and discovered that the government had mandated the shutdowns to coincide with grade-school final examinations, in order to hamper cheating on the tests. He has subsequently observed the same events in Syria.
Discoveries:
BackConnect IP address and BGP route hijacking: In 2016, Madory collaborated with cybersecurity journalist Brian Krebs in an investigation of the Mirai botnet and DDoS attacks. In the course of that investigation, they discovered that DDoS mitigation firm BackConnect was engaging in "hack back" cyber-attacks against alleged DDoS perpetrators, engaging in the BGP hijacking of IP prefixes and routes, specifically those of vDOS, an Israeli "booter" DDoS-for-hire service hosted by Cloudflare. In the wake of publication, both Krebs and Madory's employer Dyn suffered retaliatory DDoS attacks.
Discoveries:
Global Resource Systems IP address hijacking: On January 20, 2021, Madory observed a previously unknown Delaware shell company launching a process which would ultimately BGP advertise more than 175 million IPv4 addresses. Worth $5.6 billion at February 2021 prices, this was by far the largest aggregate block on the Internet, more than twice the size of Comcast. The addresses belonged to the US Department of Defense, so this initially appeared to be the largest IP address hijacking in history. Madory's analysis identified a stranger situation, though: the shell company, "Global Resource Systems," was in fact contracted to the DoD, but was one of a family of shell companies controlled by Rodney Joffe which were exposed by the indictment of Michael Sussmann and depositions conducted by Alfa-Bank, ongoing in parallel at the time of the apparent hijacking. What appeared to be a simple, if vast, IP address hijacking turned out to instead be a DoD contracting scandal linked to an election disinformation scandal.
Patents:
- US patent 2020389535, "Methods, systems, and apparatus for geographic location using trace routes", published 2019-01-03
- WO patent 2017147166, "Methods and apparatus for finding global routing hijacks", published 2017-08-31
- US patent 11025553, "Methods and apparatus for real-time traffic steering using real-time user monitoring data", published 2021-06-01
**Faking (music)**
Faking (music):
In instrumental music, "faking" is the process by which a musician gives the "...impression of playing every note as written" in the printed music part, typically for a very challenging passage that is very high in pitch and/or very rapid, while not actually playing all of the notes in the part. Faking may be done by an orchestra musician, a concerto soloist or a chamber musician; however, faking tends to be more associated with orchestra playing, because the presence of such a large music ensemble (as many as 100 musicians) makes it easier for musicians who "fake" to do so without being detected. A concerto soloist or chamber musician who faked passages would be much easier for audience members and other musicians to detect. Orchestra musicians at every level, from amateur orchestras and youth orchestras to professional orchestra players will occasionally "fake" a hard passage.
Views:
In Chinese culture, there is a folktale about a man named Mr. Nanguo who fakes playing the yu in an ensemble, but runs into trouble when unexpectedly asked to produce solos. This idiom is known as 滥竽充数.
Views:
Faking is considered controversial in orchestral playing; The Strad magazine calls it one of the "great unmentionable [topics] of orchestral playing". A professional cellist states that all orchestral musicians, even those in the top orchestras, occasionally "fake" certain passages. Professional players who were interviewed agreed that faking because a part is not written well for the instrument may be acceptable, but faking "just because you haven't practised" the music is not acceptable. A musician from the Canadian Music Centre stated that "...when I hear someone [a musician] say 'I can just fake that' [music] is akin to nails on a chalkboard." The CMC musician states that as a "...performer I feel obligated to make sure I can play the music as well as I can. If that means I have to woodshed [(practice) a] lick up until the day of the concert that is what I will do, I can't personally accept 'faking' it as an answer for any kind of music." The classical music comedy YouTube channel duo TwoSetViolin has made several videos reacting to and criticizing fake classical music portrayals.
Explanations:
One reason that musicians "fake" is that there are not enough rehearsals or enough time to learn the pieces. Another factor is the extreme difficulty of contemporary pieces; professionals interviewed by the magazine said "faking" was "...necessary in anything from ten to almost ninety per cent of some modern works." Youth orchestra members and players in amateur or community orchestras may fake because the parts in professional orchestral repertoire are beyond their technical level. Gigging musicians playing in "one-off" pickup groups and local pit orchestras may fake because they do not have time to practice or prepare the music.
Other meanings:
In jazz, the term "fake" does not have the same meaning as in Classical music, and as well, it does not carry negative connotations. In jazz, when a jazz quartet "fakes" accompaniment parts to a song with a singer, this is a synonym for improvising their backup parts. Improvising backup lines (chord voicings for piano/guitar, basslines for bass, and drum parts for drum set) is an essential skill for jazz musicians. The use of the term "fake" in the jazz scene is illustrated by the expression "fake book", a collection of lead sheets and chord progressions for jazz standards (commonly-played jazz tunes). The reason the book is called a "fake book" is because trained jazz performers are able to improvise accompaniment parts and solos from the chord charts contained therein.
Comparison with miming:
There is some overlap between faking and miming in instrumental performance. The distinction is that with miming, the instrumentalists pretend to play while a pre-recorded backing track sounds over the PA system or, for a broadcast performance, on the audience's TV or radio; with faking, there is no backing track or use of technology. As well, with faking, the performer often plays some portion of the notated music. For example, with a fast scale run, an orchestral musician who is faking may play the first note and the last note. In contrast, a musician who is miming while the recording is playing over speakers does not need to make any sounds at all. They only need move their body, arms and fingers to give the appearance of playing. Indeed, in some miming contexts, the instrumentalists are instructed not to make any sounds at all, as these might be picked up by live vocal mics on the stage.
Comparison with miming:
While miming in instrumental performance is most often associated with popular music, due to the widespread use of lip-synching and miming instrumental playing on TV shows such as Top of the Pops (while the recording plays on the viewer's TV speakers), there are examples where producers have hired an orchestra or chamber musicians to appear on a stage and pretend to play, while the spectators (if in a live venue) or viewers (if a broadcast event) hear a previously recorded tape of that orchestra/ensemble (or a different orchestra or ensemble) playing.
**Push processing**
Push processing:
Push processing in photography, sometimes called uprating, refers to a film developing technique that increases the effective sensitivity of the film being processed. Push processing involves developing the film for more time, possibly in combination with a higher temperature, than the manufacturer's recommendations. This technique results in effective overdevelopment of the film, compensating for underexposure in the camera.
Visual characteristics:
Push processing allows relatively insensitive films to be used under lighting conditions that would ordinarily be too low for adequate exposure at the required shutter speed and aperture combination. The technique alters the visual characteristics of the film, producing higher contrast, increased grain and lower resolution; saturated and distorted colours are often visible on colour film that has been push processed. Pull processing involves overexposure and underdevelopment, effectively decreasing the sensitivity of the processed film. It is achieved by developing the film for a shorter time, and possibly at a lower temperature. Film that has been pull processed will display the opposite change in visual characteristics. This may be deliberately exploited for artistic effect.
Exposure index:
When a film's effective sensitivity has been varied, the resulting sensitivity is called the exposure index; the film's speed remains at the manufacturer's indication. For example, an ISO 200/24° film could be push processed to EI 400/27° or pull processed to EI 100/21°.
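The relationship is a simple power of two per stop, as this sketch shows:

```python
# Pushing or pulling by whole stops scales the exposure index by powers of two;
# the film's ISO speed itself is unchanged.

def exposure_index(iso_speed: int, stops: int) -> int:
    """Positive stops = push, negative stops = pull."""
    return int(iso_speed * 2 ** stops)

print(exposure_index(200, +1))   # ISO 200 pushed one stop  -> EI 400
print(exposure_index(200, -1))   # ISO 200 pulled one stop  -> EI 100
```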
In cinema:
John Alcott won an Oscar "for his gorgeous use of natural lighting" in Stanley Kubrick's 1975 period film Barry Lyndon, set in the 18th century, where he succeeded in filming scenes lit only by candlelight through the use of special wide-aperture Carl Zeiss Planar 50mm f/0.7 lenses designed for NASA for low-light shooting on Moon landings, and by push-processing the film stock. Larry Smith, the cinematographer for Kubrick's 1999 film Eyes Wide Shut, used push-processing of the film reels to bring out the intensity of the color. Paul Thomas Anderson and Michael Bauman used this technique on their 35mm film stock for the 2017 film Phantom Thread, also filling its frames with "theatrical haze" to "dirty up" the look of the film.
**Machine translation of sign languages**
Machine translation of sign languages:
The machine translation of sign languages has been possible, albeit in a limited fashion, since 1977, when a research project successfully matched English letters from a keyboard to ASL manual alphabet letters simulated on a robotic hand. These technologies translate signed languages into written or spoken language, and written or spoken language to sign language, without the use of a human interpreter. Sign languages possess different phonological features than spoken languages, which has created obstacles for developers. Developers use computer vision and machine learning to recognize specific phonological parameters and epentheses unique to sign languages, and speech recognition and natural language processing allow interactive communication between hearing and deaf people.
Limitations:
Sign language translation technologies are limited in the same way as spoken language translation: none can translate with 100% accuracy. In fact, sign language translation technologies are far behind their spoken language counterparts. This is largely because signed languages have multiple articulators. Where spoken languages are articulated through the vocal tract, signed languages are articulated through the hands, arms, head, shoulders, torso, and parts of the face. This multi-channel articulation makes translating sign languages very difficult. An additional challenge for sign language MT is that there is no formal written format for signed languages. There are notation systems, but no writing system has been adopted widely enough by the international Deaf community that it could be considered the 'written form' of a given sign language. Sign languages are instead recorded in various video formats. There is, for example, no gold-standard parallel corpus large enough for SMT.
History:
The history of automatic sign language translation started with the development of hardware such as finger-spelling robotic hands. In 1977, a finger-spelling hand project called RALPH (short for "Robotic Alphabet") created a robotic hand that can translate alphabets into finger-spellings. Later, the use of gloves with motion sensors became mainstream, and projects such as the CyberGlove and VPL Data Glove were born. The wearable hardware made it possible to capture the signers' hand shapes and movements with the help of computer software. However, with the development of computer vision, wearable devices were replaced by cameras due to their efficiency and fewer physical restrictions on signers. To process the data collected through the devices, researchers implemented neural networks such as the Stuttgart Neural Network Simulator for pattern recognition in projects such as the CyberGlove. Researchers also use many other approaches for sign recognition: for example, Hidden Markov Models are used to analyze data statistically, and GRASP and other machine learning programs use training sets to improve the accuracy of sign recognition. Fusion of non-wearable technologies such as cameras and Leap Motion controllers has been shown to increase the ability of automatic sign language recognition and translation software.
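A heavily simplified, hypothetical sketch of the recognition step these systems share: a feature vector (for example, joint angles from a glove or keypoints from a camera) is compared against labelled training examples. Real systems use Hidden Markov Models or neural networks trained on far richer spatio-temporal data; the vectors and labels below are invented for illustration.

```python
# Hypothetical nearest-neighbour sign recognizer; all data here is made up.

def nearest_sign(sample, training_set):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))   # squared Euclidean distance
    return min(training_set, key=lambda item: dist(sample, item[1]))[0]

training_set = [
    ("HELLO", [0.9, 0.1, 0.8]),    # hypothetical averaged feature vectors
    ("THANKS", [0.2, 0.7, 0.3]),
]
print(nearest_sign([0.85, 0.15, 0.75], training_set))    # -> HELLO
```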
Technologies:
- VISICAST: http://www.visicast.cmp.uea.ac.uk/Visicast_index.html
- eSIGN project: http://www.visicast.cmp.uea.ac.uk/eSIGN/index.html
- The American Sign Language Avatar Project at DePaul University: http://asl.cs.depaul.edu/
- Spanish to LSE: López-Ludeña, Verónica; San-Segundo, Rubén; González, Carlos; López, Juan Carlos; Pardo, José M. (2012). "Methodology for developing a Speech into Sign Language Translation System in a New Semantic Domain" (PDF). CiteSeerX 10.1.1.1065.5265. S2CID 2724186.
SignAloud: SignAloud is a technology that incorporates a pair of gloves, made by a group of students at the University of Washington, that transliterate American Sign Language (ASL) into English. In February 2015 Thomas Pryor, a hearing student from the University of Washington, created the first prototype for this device at Hack Arizona, a hackathon at the University of Arizona. Pryor continued to develop the invention, and in October 2015 he brought Navid Azodi onto the SignAloud project for marketing and help with public relations. Azodi has a background in business administration, while Pryor has experience in engineering. In May 2016, the duo told NPR that they were working more closely with people who use ASL so that they could better understand their audience and tailor their product to the needs of these people rather than to assumed needs. However, no further versions have been released since then. The invention was one of seven to win the Lemelson-MIT Student Prize, which seeks to award and applaud young inventors. Their invention fell under the "Use it!" category of the award, which includes technological advances to existing products. They were awarded $10,000. The gloves have sensors that track the user's hand movements and then send the data to a computer system via Bluetooth. The computer system analyzes the data and matches it to English words, which are then spoken aloud by a digital voice. The gloves do not have capability for written-English input to glove-movement output or the ability to hear language and then sign it to a deaf person, which means they do not provide reciprocal communication. The device also does not incorporate facial expressions and other nonmanual markers of sign languages, which may alter the actual interpretation from ASL.
Technologies:
ProDeaf: ProDeaf (WebLibras) is computer software that can translate both text and voice into Portuguese Libras (Portuguese Sign Language) "with the goal of improving communication between the deaf and hearing." There is currently a beta edition in production for American Sign Language as well. The original team began the project in 2010 with a combination of experts including linguists, designers, programmers, and translators, both hearing and deaf. The team originated at the Federal University of Pernambuco (UFPE) from a group of students involved in a computer science project. The group had a deaf team member who had difficulty communicating with the rest of the group. In order to complete the project and help the teammate communicate, the group created Proativa Soluções and has been moving forward ever since. The current beta version in American Sign Language is very limited. For example, there is a dictionary section and the only word under the letter 'j' is 'jump'. If the device has not been programmed with the word, then the digital avatar must fingerspell the word. The last update of the app was in June 2016, but ProDeaf has been featured in over 400 stories across the country's most popular media outlets. The application cannot read sign language and turn it into words or text, so it only serves as one-way communication. Additionally, the user cannot sign to the app and receive an English translation in any form, as the English version is still in beta.
Technologies:
Kinect Sign Language Translator: Since 2012, researchers from the Chinese Academy of Sciences and specialists in deaf education from Beijing Union University in China have been collaborating with the Microsoft Research Asia team to create the Kinect Sign Language Translator. The translator consists of two modes: translator mode and communication mode. The translator mode is capable of translating single words from sign into written words and vice versa. The communication mode can translate full sentences, and the conversation can be automatically translated with the use of a 3D avatar. The translator mode can also detect the postures and hand shapes of a signer as well as the movement trajectory using the technologies of machine learning, pattern recognition, and computer vision. The device also allows for reciprocal communication because the speech recognition technology allows the spoken language to be translated into the sign language and the 3D modeling avatar can sign back to the deaf people. The original project was started in China based on translating Chinese Sign Language. In 2013, the project was presented at the Microsoft Research Faculty Summit and a Microsoft company meeting. Currently, this project is also being worked on by researchers in the United States to implement American Sign Language translation. As of now, the device is still a prototype, and the accuracy of translation in the communication mode is still not perfect.
Technologies:
SignAll: SignAll is an automatic sign language translation system provided by Dolphio Technologies in Hungary. The team is "pioneering the first automated sign language translation solution, based on computer vision and natural language processing (NLP), to enable everyday communication between individuals with hearing who use spoken English and deaf or hard of hearing individuals who use ASL." The system of SignAll uses Kinect from Microsoft and other web cameras with depth sensors connected to a computer. The computer vision technology can recognize the handshape and the movement of a signer, and the system of natural language processing converts the collected data from computer vision into a simple English phrase. The developer of the device is deaf and the rest of the project team consists of many engineers and linguist specialists from deaf and hearing communities. The technology has the capability of incorporating all five parameters of ASL, which help the device accurately interpret the signer. SignAll has been endorsed by many companies including Deloitte and LT-innovate and has created partnerships with Microsoft Bizspark and Hungary's Renewal. This technology is currently being used at Fort Bend Christian Academy in Sugar Land, Texas and at Sam Houston State University.
Technologies:
MotionSavvy: MotionSavvy was the first sign-language-to-voice system. The device was created in 2012 by a group from the Rochester Institute of Technology / National Technical Institute for the Deaf and "emerged from the Leap Motion accelerator AXLR8R." The team used a tablet case that leverages the power of the Leap Motion controller. The entire six-person team was created by deaf students from the school's deaf-education branch. The device is currently one of only two reciprocal communication devices solely for American Sign Language. It allows deaf individuals to sign to the device, which is then interpreted, or vice versa, taking spoken English and interpreting it into American Sign Language. The device ships for $198. Some other features include the ability to interact, live-time feedback, sign builder, and crowdsign.
Technologies:
The device has been reviewed by everyone from technology magazines to Time. Wired said, "It wasn't hard to see just how transformative a technology like [UNI] could be" and that "[UNI] struck me as sort of magical." Katy Steinmetz at TIME said, "This technology could change the way deaf people live." Sean Buckley at Engadget mentioned, "UNI could become an incredible communication tool."
**Tenascin C**
Tenascin C:
Tenascin C (TN-C) is a glycoprotein that in humans is encoded by the TNC gene. It is expressed in the extracellular matrix of various tissues during development, disease or injury, and in restricted neurogenic areas of the central nervous system. Tenascin-C is the founding member of the tenascin protein family. In the embryo it is made by migrating cells like the neural crest; it is also abundant in developing tendons, bone and cartilage.
Gene and expression:
The human tenascin C gene, TN-C, is located on chromosome 9, with the cytogenetic band at 9q33. The entire tenascin family coding region spans approximately 80 kilobases, translating into 2203 amino acids. Expression of TN-C changes from development to adulthood. TN-C is highly expressed during embryogenesis and is briefly expressed during organogenesis, while in developed organs expression is absent or present only in trace amounts. TN-C has been shown to be upregulated under pathological conditions caused by inflammation, infection and tumorigenesis, and at sites that are subject to unique biomechanical forces. The regulation of TN-C is induced or repressed by a number of different factors that are expressed in embryonic tissue, as well as in developed tissues undergoing remodeling, injury, or neoplasia. TGF-β1, tumor necrosis factor-α, interleukin-1, nerve growth factor, and keratinocyte growth factor are factors that have been shown to regulate TN-C. Other extracellular matrix components such as matrix metalloproteinases and integrins are also frequently co-expressed with TN-C. In the developing central nervous system, TN-C is involved in regulating the proliferation of both oligodendrocyte precursor cells and astrocytes. Expression of TN-C by radial glia precedes the onset of gliogenesis, during which time it is thought to drive the differentiation of astrocytes.
Gene and expression:
In the adult brain, TN-C expression is downregulated except for the areas that maintain neurogenesis into adulthood and the hypothalamus.
TN-C is also present in central nervous system injuries and gliomas.
Structure:
Tenascin C is an oligomeric glycoprotein composed of individual polypeptides with molecular weights ranging from 180 to ~300kDa. The Tenascin family of proteins shares a similar structural pattern. These similar modules include heptad repeats, EGF-like repeats, fibronectin type III domains, and a C-terminal globular domain shared with fibrinogens. These protein modules are lined up like beads on a string and give rise to long and extended molecules. At the N-terminus each Tenascin has an oligomerization domain which in the case of TN-C leads to the formation of hexamers. TN-C and -R are known to be subject to alternative splicing. In human TN-C there exists, in addition to the eight constant repeats, nine extra repeats subject to alternative splicing. This results in a multitude of TN-C subunits differing in the number and identity of fibronectin type III domain repeats.
Interactions:
Tenascin-C has been shown to interact with fibronectin. This interaction has the potential to modify cell adhesion; a solid-state interaction between fibronectin and TN-C results in cellular upregulation of matrix metalloproteinase expression. TN-C also interacts with one or more TN-C receptors on cells, which activate and repress the same signal transduction pathway. For example, adhesion of SW80 carcinoma cells to the third FN-III repeat of TN-C via the αvβ3 integrin receptor leads to cell spreading, phosphorylation of focal adhesion kinase, paxillin and ERK2 MAPK, and proliferation. In contrast, when these same cells use either α9β1 or αvβ6 integrins to adhere to the same third FN type III repeat, cell spreading is attenuated and activation of these signaling mediators and cell growth is suppressed or fails to occur.
Function:
Tenascin C is a very diverse protein that can produce different functions within the same cell type. These myriad functions are accomplished through alternative splicing of mRNA as well as the temporal activation of signal transduction pathways and/or target genes at different stages of growth or differentiation. TN-C is classified as an adhesion-modulating protein, because it has been found to inhibit cellular adhesion to fibronectin. Much of what is known about its function is inferred from various TN-C knockout mouse models. TN-C clearly plays a role in cell signaling, as evidenced by its ability to be induced during events such as trauma, inflammation, or cancer development. Also, TN-C is important in regulating cell proliferation and migration, especially during developmental differentiation and wound healing.
Clinical significance:
Tenascin C continues to be researched as a potential biomarker for a number of diseases such as myocarditis and different forms of cancer. The numerous involvements with cellular functioning and signaling make TN-C a popular protein to study in developing new therapies and detection methods. Recent work has shown that TN-C inhibits HIV infection in immune cells by binding to a chemokine coreceptor site on the HIV-1 envelope protein, blocking the virus' entry into the host cells.
Clinical significance:
Role in cancer: Tenascin C is implicated in a number of different cancers such as osteosarcomas, chondrosarcomas, bladder cancer, and glioblastomas. In glioblastoma cells, tenascin-C expression has considerable clinical and functional significance in terms of cancer prognosis and tumor progression. The endogenous pool of tenascin-C isoforms in gliomas supports both tumor cell proliferation and migration. Because tenascin-C is essential to the survival of these various forms of cancers, tenascin-C expression could be a potential biomarker for cancer detection. Also, tenascin-C antibodies have been used to diagnose and create therapies for many different types of cancers.
**Weather routing**
Weather routing:
Weather routing is a commercial service provided by specialist companies for cargo ships, to optimize their voyage performance. A recreational version used for sailing boats is referred to as sailing weather prediction or sailing weather routing; the latter focuses more on forecasting and routing for wind and currents for adventurers and competitive sailors participating in ocean sports such as yacht races.
For maritime commercial usage:
A number of large cargo ships use weather routing services for ocean passages. They are primarily geared towards protecting owners and charterers from speed claims, with a secondary use being to reduce fuel consumption and improve estimated times of arrival (ETAs). Weather routing companies include SOFAR Ocean (which provides real-time data via a network of spotter buoys laid across the ocean), Blue Water Optimum Speed Services (BOSS), Ocean routes, SPOS, Storm Geo AWT (formerly Applied Weather Technology), WNI and WRI.
For maritime commercial usage:
Promoters of weather routing companies cite high fuel savings from their use, while many mariners tend to be sceptical of their advantages because a number of maritime accidents (such as the sinking of the Derbyshire in 1980, and the parametric rolling of APL China in 1998) and cargo damages continue to occur even when vessels follow routing advice. A few routing programs employ the Dijkstra algorithm and do not consider the different responses of each ship to the same weather, as the latter is difficult to estimate.
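A minimal sketch of the Dijkstra-based approach mentioned above: waypoints form a graph whose edge weights are passage times that already incorporate the forecast weather for each leg. The waypoints and weights are hypothetical, and, as noted, such a formulation says nothing about how a particular ship responds to the weather.

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (total hours, waypoint list) for the cheapest route."""
    queue, settled = [(0.0, start, [start])], {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, hours in graph.get(node, []):
            heapq.heappush(queue, (cost + hours, nxt, path + [nxt]))
    return float("inf"), []

graph = {                               # hypothetical waypoints; edge weights are
    "A": [("B", 10.0), ("C", 14.0)],    # leg times (hours) including forecast
    "B": [("D", 12.0)],                 # wind and wave effects
    "C": [("D", 6.0)],
}
print(dijkstra(graph, "A", "D"))        # -> (20.0, ['A', 'C', 'D'])
```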
For sailing:
Weather forecasting for sailing involves several activities, such as weather training and coaching, dissemination of data for use in navigation and route-planning software, race modeling involving historical weather and sea state analysis for yacht and sail design, trip and adventure planning for distance races and record attempts, and monitoring for departure and trip weather windows. It covers several types of events, including day races, long-distance races, around-the-world races, and record attempts. It is routinely used in races such as the Volvo Ocean Race, America's Cup campaigns, and Olympic class regattas.
For sailing:
Long-distance sailboat races: Weather forecasting for long-distance races is based on dissemination of meteorological data, most often in GRIB format, for use in navigation and route planning software together with yacht characteristics (polars), providing guidance, as well as analysis of historical weather and sea state data.
For sailing:
Data: GRIB (GRIdded Binary) is a concise data format commonly used in meteorology to disseminate forecast weather data. For sailing purposes the GRIBs are transmitted and received at sea. These GRIBs contain only a small subset of surface data, usually winds (direction and wind speed), information about wave strength (proportional to significant wave height) and direction, and surface pressure. The data is further reduced by providing a subset of it around the position of a yacht. The data is transmitted over satellite phones and single side band radios.
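A hypothetical sketch of the data-reduction step described above: keep only the grid points within a small window around the yacht before sending the file over a low-bandwidth link. The record structure is an assumption for illustration, not an actual GRIB reader.

```python
def subset(records, lat, lon, half_width_deg=2.0):
    """Keep only grid points within a small box centred on the yacht's position."""
    return [
        r for r in records
        if abs(r["lat"] - lat) <= half_width_deg and abs(r["lon"] - lon) <= half_width_deg
    ]

records = [   # hypothetical grid points: wind speed/direction and significant wave height
    {"lat": 44.0, "lon": -30.0, "wind_kts": 18, "wind_dir": 250, "hs_m": 2.1},
    {"lat": 52.0, "lon": -10.0, "wind_kts": 30, "wind_dir": 280, "hs_m": 4.5},
]
print(subset(records, lat=44.5, lon=-29.0))   # only the nearby point is kept
```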
For sailing:
Software: Modern sailing weather forecasting involves transmission of weather forecasts which are used in on-board software that simulates optimal (and safest) routing in distance races. The data is often transmitted in the form of GRIB files or similar, customized for specific areas. These files are suitable for use in popular routing and tactical racing software.
For sailing:
Olympic sailing: Weather forecasting for Olympic class sailing is a form of nowcasting, predicting weather and currents in an approximately 0–6 hour timeframe. Although an understanding of synoptic weather conditions is important, mesoscale and local-scale events take precedence. The forecast includes predictions of the sea-breeze onset, turbulent wind shifts, coastal jets, changes in tidal currents, fog, as well as wind acceleration and directional changes associated with clouds.
**Automatism (medicine)**
Automatism (medicine):
Automatism is a set of brief unconscious behaviors, typically lasting several seconds to minutes, performed while the subject is unaware of the actions. This type of automatic behavior often occurs in certain types of epilepsy, such as complex partial seizures in those with temporal lobe epilepsy, or as a side effect of particular medications such as zolpidem.
Variations:
Varying degrees of automatism may include simple gestures, such as finger rubbing, lip smacking, chewing, or swallowing, or more complex actions, such as sleepwalking behaviors. Others may include speech, which may or may not be coherent or sensible. The subject may or may not remain conscious otherwise throughout the episode. Conscious subjects may be fully aware of their other actions at the time, but unaware of their automatism.
Variations:
In some more complex automatisms, the subject, fully awake until the episode begins, enters into sleepwalking-like behaviors. In these episodes, which can last for longer periods of time, the subject proceeds to engage in routine activities such as cooking, showering, driving a familiar route, or even conversation. Following the episode, the subject regains consciousness, often feeling disoriented, and has no memory of the incident.
**Jervine**
Jervine:
Jervine is a steroidal alkaloid with molecular formula C27H39NO3 which is derived from the plant genus Veratrum. Similar to cyclopamine, which also occurs in the genus Veratrum, it is a teratogen implicated in birth defects when consumed by animals during a certain period of their gestation.
Physiological effects:
Jervine is a potent teratogen causing birth defects in vertebrates. In severe cases it can cause cyclopia and holoprosencephaly.
Mechanism of action:
Jervine's biological activity is mediated via its interaction with the seven-pass transmembrane protein Smoothened. Jervine binds to and inhibits Smoothened, which is an integral part of the hedgehog signaling pathway. With Smoothened inhibited, the GLI1 transcription factor cannot be activated and hedgehog target genes cannot be transcribed.
**Core relational theme**
Core relational theme:
A core relational theme is the central or core meaning associated with a certain emotion. Core relational themes were introduced by Richard Lazarus, based on his appraisal approach to understanding emotion.
Appraisal theory:
Appraisal theory examines the situational factors that produce emotional reactions. According to the appraisal approach to emotion, in order to understand a certain emotion, it is necessary to understand the relational meaning that has induced it and how that meaning was formed. Emotions are reactions to the fate of active goals in everyday life and are driven by cognitive events. Events can pertain to a variety of cognitive concepts, such as important ideas, moral values, issues with the self and identity, social esteem, or other people and their well-being. The relationship between specific cognitions and individual emotions has stimulated a wide body of research on appraisal and emotion.
Richard S. Lazarus:
Richard S. Lazarus emphasizes the relationship between the person and the environment as important contributing factors in emotion and adaptation. According to Lazarus, the person-environment relationship is the arena of the emotions and the adaptational encounter is the basis for analysis. Lazarus also emphasizes the important role of motivation in a person's appraisal of a certain event as consisting of harms or benefits, real or imagined. Lazarus identifies four important implications that we can learn from observing emotional reactions in others in connection to the appraisal process.
Richard S. Lazarus:
First, the quality and intensity of a certain emotion can inform us about ongoing relationships between persons and their environments, which Lazarus calls "core relational themes." Second, emotions can tell us about what is or is not important to an individual in a certain encounter, because we are not emotionally moved by unimportant events. Third, we can discover a great deal about a person's beliefs about the self and the world by observing how the person appraises relationships with the environment and the emotions that result. Fourth, an emotion can show us how a person has appraised or evaluated an event in relation to its significance for personal well-being. Lazarus defines appraisal theory of emotion as having two basic themes: "First, emotion is a response to evaluative judgments or meaning; second, these judgments are about ongoing relationships with the environment, namely how one is doing in the agenda of living and whether the encounter with the environment is one of harm or benefit." According to Lazarus, the appraisal process involves a set of decision-making components, which create evaluative patterns that differ among each of the emotions. Lazarus proposed two stages to the appraisal process: the primary appraisal stage and the secondary appraisal stage. There are three primary components and three secondary components that combine in different ways to represent each emotion.
Primary appraisal:
The three components of primary appraisal are goal relevance, goal congruence, and type of ego-involvement. In the primary appraisal stage, an individual first evaluates an event in terms of personal goal relevance. If an event is deemed relevant to an individual's personal goals, an emotion is generated; if not, an emotion will not ensue. Then the individual appraises ongoing events according to whether the event is congruent or incongruent with the individual's goals. If the event is goal-congruent, it will be evaluated as positive; if it is goal-incongruent, negative emotions will be elicited. The specific emotion experienced by the individual depends on the secondary appraisal(s) linked to the primary appraisal.
Secondary appraisal:
The secondary appraisal stage deals with coping options, in which the individual considers a causal attribution for the event, ways to respond, and future consequences of different plans of action. The three components of secondary appraisal are accountability (blame or credit), coping potential (problem-focused or emotion-focused), and future expectations. Lazarus termed the result of these combined processes the core relational themes of the emotion.
Core relational themes of emotion:
A core relational theme provides a convenient summary of the relational harm or benefit that underlies each specific kind of emotion. Each emotion or emotion family is defined by a core relational theme. When the implications for individual well-being are appraised by a person, an action impulse that is consistent with the core relational theme and the emotion that flows from it is produced.
Appraisals for anger:
Primary appraisal components:
1. If there is goal relevance, then any emotion is possible, including anger. If not, no emotion ensues.
2. If there is goal incongruence, then only negative emotions are possible, including anger.
3. If the type of ego-involvement engaged is to preserve or enhance the self- or social-esteem aspect of one's ego-identity, then the emotion possibilities include anger, anxiety, and pride.
Secondary appraisal components:
4. If there is blame, which derives from the knowledge that someone is accountable for the harmful actions and that they could have been controlled, then anger occurs. If the blame is directed at another, the anger is directed externally; if at oneself, the anger is directed internally.
5. If coping potential favors attack as viable, then anger is facilitated.
6. If future expectancy is positive about the environmental response to attack, then anger is facilitated.
The appraisal components sufficient and necessary for anger are 1 through 4.
Appraisals for love:
Primary appraisal components:
1. If there is goal relevance, then any emotion is possible, including love.
2. If there is goal congruence, then only positive emotions are possible, including love.
3. If the type of ego-involvement is desire for mutual appreciation, which is affirming to our ego-identity, then the emotion possibilities narrow to love (or at least liking); if sexual interest or passion is added to this, then the love is romantic rather than companionate.
No secondary appraisal components are involved, except perhaps future expectation, which when positive favors love but when negative (that is, the other does not reciprocate) prevents or undermines love.
The appraisal components sufficient and necessary for love are 1, 2, and 3. The appraisals are the same for companionate and romantic love except for the role of sexual passion, though it can be absent in romantic love for one reason or another.
**Schnuerle porting**
Schnuerle porting:
Schnuerle porting is a system to improve the efficiency of a valveless two-stroke engine by providing better scavenging. The intake and exhaust ports cut in the cylinder wall are shaped to give a more efficient transfer of intake and exhaust gases.
Description:
Gas flow within the two-stroke engine is even more critical than in a four-stroke engine, as the intake and exhaust flows are entering and leaving the combustion chamber simultaneously. A well-defined flow pattern is required, avoiding any turbulent mixing. The efficiency of the two-stroke engine depends on effective scavenging: the complete replacement of the old, spent charge with a fresh charge.
Description:
Apart from large diesels with separate superchargers, two-stroke engines are generally piston-ported and use their crankcase beneath the piston for the compression needed to fill the cylinder with fuel/air mixture. The cylinder has a transfer port (the inlet from crankcase to cylinder) and an exhaust port cut into it. These are opened as the piston moves downwards past them, with the higher exhaust port opening earlier as the piston descends and closing later as the piston rises.
Description:
The simplest arrangement is a single transfer port and a single exhaust port, opposite each other. This "cross scavenging" performs poorly, as there is a tendency for the flow to pass from the inlet directly to the exhaust, wasting some of the fuel mixture and also poorly scavenging the upper part of the chamber. Before Schnuerle porting, a deflector on top of the piston was used to direct the gas flow from the transfer port upwards, in a U-shaped loop around the combustion chamber roof and then down and out through the exhaust port. Apart from the gas flow never quite following this ideal path and tending to mix instead, this also gave a poorly shaped combustion chamber with long, thin flame paths.
Description:
In 1926, the German engineer Adolf Schnürle developed the system of ports that bears his name. The ports were relocated to both be on the same side of the cylinder, with the transfer port being split into two angled ports, one on either side of the exhaust port. A deflector piston was no longer required. The gas flow was now a circular loop, flowing in and across the piston crown from the transfer ports, up and around the combustion chamber and then out through the exhaust port. With Schnuerle porting, the piston crown may be of any shape, even bowl shaped. This permits a far better combustion chamber shape and flame path, giving better combustion, particularly at high speeds.
Loop scavenging:
As Schnuerle porting encourages flow in a loop, it is termed "loop scavenging". Historically, the deflector piston form of cross scavenging was termed "loop scavenging", after the supposed shape of the flow. Schnuerle flow was termed "reverse loop scavenging". As the first of these was realised to be inaccurate, the later form adopted the simpler name. These original terms are now obsolete and no longer used.
Adolf Schnürle:
The system is named after its inventor, Adolf Schnürle. Either "Schnürle" or the more common Anglicisation "Schnuerle" is generally acceptable. It also appears as "Schnürrle", but "Schneurle" is a misspelling.
Adolf Schnürle was a prolific engineer and is named on many patent documents.
**C++/CLI**
C++/CLI:
C++/CLI is a variant of the C++ programming language, modified for Common Language Infrastructure. It has been part of Visual Studio 2005 and later, and provides interoperability with other .NET languages such as C#. Microsoft created C++/CLI to supersede Managed Extensions for C++. In December 2005, Ecma International published C++/CLI specifications as the ECMA-372 standard.
Syntax changes:
C++/CLI should be thought of as a language of its own (with a new set of keywords, for example), instead of the C++ superset-oriented Managed C++ (MC++) (whose non-standard keywords were styled like __gc or __value). Because of this, there are some major syntactic changes, especially related to the elimination of ambiguous identifiers and the addition of .NET-specific features.
Many conflicting syntaxes, such as the multiple versions of operator new() in MC++, have been split: in C++/CLI, .NET reference types are created with the new keyword gcnew (i.e. garbage collected new()). Also, C++/CLI has introduced the concept of generics from .NET (similar, for the most common purposes, to standard C++ templates, but quite different in their implementation).
Syntax changes:
Handles:
In MC++, there were two different types of pointers: __nogc pointers were normal C++ pointers, while __gc pointers worked on .NET reference types. In C++/CLI, however, the only type of pointer is the normal C++ pointer, while the .NET reference types are accessed through a "handle", with the new syntax ClassName^ (instead of ClassName*). This new construct is especially helpful when managed and standard C++ code is mixed; it clarifies which objects are under .NET automatic garbage collection and which objects the programmer must remember to explicitly destroy.
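A minimal sketch of the distinction (illustrative code, not reproduced from the original article; the variable names are arbitrary):

```cpp
// C++/CLI: a native pointer and a managed handle side by side.
#include <iostream>
using namespace System;

int main()
{
    int* n = new int(42);          // native object: the programmer must delete it
    String^ s = "managed string";  // .NET reference type: reclaimed by the garbage collector

    std::cout << *n << std::endl;  // native I/O
    Console::WriteLine(s);         // managed I/O

    delete n;                      // explicit destruction of the native object
    return 0;                      // 's' needs no delete; the GC collects it
}
```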
Syntax changes:
Tracking references:
A tracking reference in C++/CLI is a handle of a passed-by-reference variable. It is similar in concept to using "*&" (reference to a pointer) in standard C++, and (in function declarations) corresponds to the "ref" keyword applied to types in C#, or "ByRef" in Visual Basic .NET. C++/CLI uses a "^%" syntax to indicate a tracking reference to a handle.
Syntax changes:
The following code shows an example of the use of tracking references. Replacing the tracking reference with a regular handle variable would leave the resulting string array with 10 uninitialized string handles, as only copies of the string handles in the array would be set, due to them being passed by value rather than by reference.
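The original listing has not been preserved in this copy; a minimal sketch of the kind of code being described (assuming a 10-element String array with arbitrary contents) might look like this:

```cpp
// C++/CLI: filling an array of String handles through tracking references.
using namespace System;

int main()
{
    array<String^>^ arr = gcnew array<String^>(10);
    int i = 0;
    for each (String^% s in arr)   // ^% : tracking reference to a handle
    {
        s = "Number " + i++;       // assigns through the reference, so the array element itself is updated
    }
    Console::WriteLine(arr[5]);    // prints "Number 5"
    return 0;
}
```

With a plain String^ loop variable instead of String^%, each iteration would receive only a copy of the handle and the array elements would remain uninitialized, as the paragraph above explains.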
Note that this would be illegal in C#, which does not allow foreach loops to pass values by reference. Hence, a workaround would be required.
Syntax changes:
Finalizers and automatic variables:
Another change in C++/CLI is the introduction of the finalizer syntax !ClassName(), a special type of nondeterministic destructor that is run as a part of the garbage collection routine. The C++ destructor syntax ~ClassName() also exists for managed objects, and better reflects the "traditional" C++ semantics of deterministic destruction (that is, destructors that can be called by user code with delete).
Syntax changes:
In the raw .NET paradigm, the nondeterministic destruction model overrides the protected Finalize method of the root Object class, while the deterministic model is implemented through the IDisposable interface method Dispose (which the C++/CLI compiler turns the destructor into). Objects from C# or VB.NET code that override the Dispose method can be disposed of manually in C++/CLI with delete just as .NET classes in C++/CLI can.
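A hedged sketch of the two cleanup paths (the class and its messages are hypothetical, not from the original article):

```cpp
// C++/CLI: deterministic destructor (~) versus nondeterministic finalizer (!).
using namespace System;

ref class ResourceHolder
{
public:
    ResourceHolder()  { Console::WriteLine("acquired"); }

    // Destructor: compiled into IDisposable::Dispose; runs when user code
    // calls 'delete' or a stack-semantics object leaves scope.
    ~ResourceHolder() { Console::WriteLine("disposed"); this->!ResourceHolder(); }

    // Finalizer: run by the garbage collector if Dispose was never called.
    !ResourceHolder() { Console::WriteLine("finalized"); }
};

int main()
{
    ResourceHolder^ h = gcnew ResourceHolder();
    delete h;   // deterministic path: runs ~ResourceHolder(), i.e. Dispose
    return 0;
}
```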
Operator overloading:
Operator overloading works analogously to standard C++. Every * becomes a ^, every & becomes an %, but the rest of the syntax is unchanged, except for an important addition: for .NET classes, operator overloading is possible not only for classes themselves, but also for references to those classes. This feature is necessary to give a ref class the semantics for operator overloading expected from .NET ref classes. (In reverse, this also means that for .NET framework ref classes, reference operator overloading often is implicitly implemented in C++/CLI.) For example, comparing two distinct String references (String^) via the operator == will give true whenever the two strings are equal. The operator overloading is static, however. Thus, casting to Object^ will remove the overloading semantics.
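The static nature of the overloading can be illustrated with a short sketch (illustrative values, not from the original article):

```cpp
// C++/CLI: value comparison through String^ overloads vs. reference comparison through Object^.
using namespace System;

int main()
{
    String^ a = gcnew String(L'x', 3);   // "xxx"
    String^ b = gcnew String(L'x', 3);   // a second, distinct "xxx" object

    Console::WriteLine(a == b);                      // True  : String^ operator== compares contents
    Console::WriteLine((Object^)a == (Object^)b);    // False : after the cast, == compares references
    return 0;
}
```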
Interoperability:
C++/CLI allows C++ programs to consume C# code compiled into C# DLLs. Here the #using keyword shows the compiler where the DLL is located for its compilation metadata. This simple example requires no data marshalling.
The C# source code content of MyCS.dll.
This example shows how strings are marshalled from C++ strings to strings callable from C#, then back to C++ strings. String marshalling copies the string contents to forms usable in the different environments.
The C# code is not in any way C++-aware.
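The original listings for MyCS.dll and its C++/CLI consumer are not reproduced in this copy; a minimal sketch of the C++/CLI side might look like the following (the assembly name MyCS.dll, the namespace MyCS, the class MyClass and its static Echo method are assumed for illustration only):

```cpp
// C++/CLI: consuming an assumed C# assembly and marshalling std::string <-> System::String^.
#using "MyCS.dll"                  // points the compiler at the assembly's metadata

#include <string>
#include <msclr/marshal_cppstd.h>  // msclr helpers for std::string conversions

using namespace System;

int main()
{
    std::string nativeText = "hello from C++";

    // Copy the native string into a managed String^ for the C# side.
    String^ managedText = msclr::interop::marshal_as<String^>(nativeText);

    // Call into the C# assembly (MyCS::MyClass::Echo is a hypothetical static method).
    String^ reply = MyCS::MyClass::Echo(managedText);

    // Copy the managed result back into a native std::string.
    std::string nativeReply = msclr::interop::marshal_as<std::string>(reply);
    return 0;
}
```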
C++/C# interoperability allows C++ simplified access to the entire world of .NET features.
C++/CX:
C++/CX targeting WinRT, although it produces entirely unmanaged code, borrows the ref and ^ syntax for the reference-counted components of WinRT, which are similar to COM "objects".
**Ternary fission**
Ternary fission:
Ternary fission is a comparatively rare (0.2 to 0.4% of events) type of nuclear fission in which three charged products are produced rather than two. As in other nuclear fission processes, other uncharged particles such as multiple neutrons and gamma rays are produced in ternary fission. Ternary fission may happen during neutron-induced fission or in spontaneous fission (a type of radioactive decay). About 25% more ternary fission happens in spontaneous fission compared to the same fission system formed after thermal neutron capture, illustrating that these processes remain physically slightly different, even after the absorption of the neutron, possibly because of the extra energy present in the nuclear reaction system of thermal neutron-induced fission.
Ternary fission:
Quaternary fission, at 1 per 10 million fissions, is also known (see below).
Products:
The most common nuclear fission process is "binary fission." It produces two charged, asymmetrical fission products, with the most probable products at about 95±15 and 135±15 u in atomic mass. However, in this conventional fission of large nuclei, the binary process happens merely because it is the most energetically probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, the alternative ternary fission process produces three positively charged fragments (plus neutrons, which are not charged and not counted in this reckoning). The smallest of the charged products may range from so small a charge and mass as a single proton (Z=1), up to as large a fragment as the nucleus of argon (Z=18). Although particles as large as argon nuclei may be produced as the smaller (third) charged product in the usual ternary fission, the most common small fragments from ternary fission are helium-4 nuclei, which make up about 90% of the small fragment products. This high incidence is related to the stability (high binding energy) of the alpha particle, which makes more energy available to the reaction. The second-most common particles produced in ternary fission are tritons (the nuclei of tritium), which make up 7% of the total small fragments, and the third-most common are helium-6 nuclei (which decay in about 0.8 seconds to lithium-6). Protons and larger nuclei are in the small fraction (< 2%) which makes up the remainder of the small charged products. The two larger charged particles from ternary fission, particularly when alphas are produced, are quite similar in size distribution to those produced in binary fission.
Product energies:
The energy of the third, much smaller product usually ranges between 10 and 20 MeV. In keeping with their origin, alpha particles produced by ternary fission typically have mean energies of about 16 MeV (energies this great are never seen in alpha decay). Since these typically have significantly more energy than the ~5 MeV alpha particles from alpha decay, they are accordingly called "long range alphas" (referring to their longer range in air or other media).
Product energies:
The other two larger fragments carry away, in their kinetic energies, the remainder of the fission kinetic energy (typically totalling ~ 170 MeV in heavy element fission) that does not appear as the 10 to 20 MeV kinetic energy carried away by the third smaller product. Thus, the larger fragments in ternary fission are each less energetic, by a typical 5 to 10 MeV, than they are seen to be in binary fission.
Importance:
Although the ternary fission process is less common than the binary process, it still produces significant helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors. This phenomenon was initially detected in 1957 at the Savannah River National Laboratory.
True ternary fission:
A very rare type of ternary fission process is sometimes called "true ternary fission." It produces three nearly equal-sized charged fragments (Z ~ 30) but only happens in about 1 in 100 million fission events. In this type of fission, the product nuclei split the fission energy in three nearly equal parts and have kinetic energies of ~ 60 MeV. True ternary fission has so far only been observed in nuclei bombarded by heavy, high energy ions.
Quaternary fission:
Another rare fission process, occurring in about 1 in 10 million fissions, is quaternary fission. It is analogous to ternary fission, save that four charged products are seen. Typically two of these are light particles, with the most common mode of quaternary fission apparently being two large particles and two alpha particles (rather than one alpha, the most common mode of ternary fission).
**Karl Johan Åström**
Karl Johan Åström:
Karl Johan Åström (born August 5, 1934) is a Swedish control theorist, who has made contributions to the fields of control theory and control engineering, computer control and adaptive control. In 1965, he described a general framework of Markov decision processes with incomplete information, which ultimately led to the notion of a Partially observable Markov decision process.
In 1995, Åström was elected as a member into the National Academy of Engineering for contributions to identification, stochastic, and adaptive control and their incorporation in control engineering practice.
Biography:
Åström was born in Östersund, Sweden, and received his M.Sc. in Engineering Physics (1957) and PhD in Automatic Control and Mathematics (1960) from the Royal Institute of Technology (KTH) in Stockholm, where he also taught from 1955 to 1960 while working on inertial guidance for the Swedish National Defence Research Institute.
Biography:
In 1961 Åström joined the IBM Nordic Laboratory to work on computerized process control, with tours at IBM Research in Yorktown Heights, New York (1962) and San Jose, California (1963). After his return he led efforts in the computer control of paper manufacturing machinery. In 1965, Åström was named chair of the newly founded Department of Automatic Control at Lund University, Sweden.
Biography:
From 1965 to 1999 he was chair of the Department of Automatic Control at Lund University, where he is now professor emeritus. Since 2002 he has been distinguished visiting professor at the University of California, Santa Barbara.
Biography:
Åström is a Fellow of the IEEE, member of the Royal Swedish Academy of Sciences, vice president of the Royal Swedish Academy of Engineering Sciences (IVA), and a foreign associate of the US National Academy of Engineering. He was awarded the ASME Rufus Oldenburger medal (1985) and the International Federation of Automatic Control Quazza Medal (1987). In 1987 he was also awarded the degree Docteur Honoris Causa from l'Institut National Polytechnique de Grenoble. He received in 1989 the IEEE Donald G. Fink Prize Paper Award, in 1990 the IEEE Control Systems Science and Engineering Award, and in 1993 the IEEE Medal of Honor for his "fundamental contributions to theory and applications of adaptive control technology".
Publications:
Books:
1970. Introduction to Stochastic Control. Academic Press, 1970; Dover, 2006.
1989. Adaptive Control. With B. Wittenmark. Addison-Wesley, 1989.
1996. Computer-Controlled Systems: Theory and Design. With B. Wittenmark. Prentice Hall, 1996; Dover, 2011. (IFAC Textbook award for first edition, 1993)
2005. Advanced PID Control. With T. Hägglund. ISA, 2005.
2008. Feedback Systems: An Introduction for Scientists and Engineers. With R. Murray. Princeton University Press, 2008. (IFAC Textbook award, 2011)
Papers:
K. J. Åström, B. Wittenmark. "On self-tuning regulators," Automatica, vol. 9, pp. 185–199, 1973.
**No-hair theorem**
No-hair theorem:
The no-hair theorem states that all stationary black hole solutions of the Einstein–Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three independent externally observable classical parameters: mass, electric charge, and angular momentum. Other characteristics (such as geometry and magnetic moment) are uniquely determined by these three parameters, and all other information (for which "hair" is a metaphor) about the matter that formed a black hole or is falling into it "disappears" behind the black-hole event horizon and is therefore permanently inaccessible to external observers after the black hole "settles down" (by emitting gravitational and electromagnetic waves). Physicist John Archibald Wheeler expressed this idea with the phrase "black holes have no hair", which was the origin of the name.
No-hair theorem:
In a later interview, Wheeler said that Jacob Bekenstein coined this phrase.
No-hair theorem:
Wheeler recalled: "Richard Feynman objected to the phrase that seemed to me to best symbolize the finding of one of the graduate students: graduate student Jacob Bekenstein had shown that a black hole reveals nothing outside it of what went in, in the way of spinning electric particles. It might show electric charge, yes; mass, yes; but no other features – or as he put it, 'A black hole has no hair'. Richard Feynman thought that was an obscene phrase and he didn't want to use it. But that is a phrase now often used to state this feature of black holes, that they don't indicate any other properties other than a charge and angular momentum and mass."
No-hair theorem:
The first version of the no-hair theorem for the simplified case of the uniqueness of the Schwarzschild metric was shown by Werner Israel in 1967. The result was quickly generalized to the cases of charged or spinning black holes. There is still no rigorous mathematical proof of a general no-hair theorem, and mathematicians refer to it as the no-hair conjecture. Even in the case of gravity alone (i.e., zero electric fields), the conjecture has only been partially resolved by results of Stephen Hawking, Brandon Carter, and David C. Robinson, under the additional hypothesis of non-degenerate event horizons and the technical, restrictive and difficult-to-justify assumption of real analyticity of the space-time continuum.
Example:
Suppose two black holes have the same masses, electrical charges, and angular momenta, but the first black hole was made by collapsing ordinary matter whereas the second was made out of antimatter; the conjecture states that they will nevertheless be completely indistinguishable to an observer outside the event horizon. None of the special particle physics pseudo-charges (i.e., the global charges baryonic number, leptonic number, etc., all of which would be different for the originating masses of matter that created the black holes) are conserved in the black hole, or if they are conserved somehow then their values would be unobservable from the outside.
Changing the reference frame:
Every isolated unstable black hole decays rapidly to a stable black hole; and (excepting quantum fluctuations) stable black holes can be completely described (in a Cartesian coordinate system) at any moment in time by these eleven numbers: mass–energy M, linear momentum P (three components), angular momentum J (three components), position X (three components), and electric charge Q. These numbers represent the conserved attributes of an object which can be determined from a distance by examining its gravitational and electromagnetic fields. All other variations in the black hole will either escape to infinity or be swallowed up by the black hole.
Changing the reference frame:
By changing the reference frame one can set the linear momentum and position to zero and orient the spin angular momentum along the positive z axis. This eliminates eight of the eleven numbers, leaving three which are independent of the reference frame: mass, angular momentum magnitude, and electric charge. Thus any black hole that has been isolated for a significant period of time can be described by the Kerr–Newman metric in an appropriately chosen reference frame.
Extensions:
The no-hair theorem was originally formulated for black holes within the context of a four-dimensional spacetime, obeying the Einstein field equation of general relativity with zero cosmological constant, in the presence of electromagnetic fields, or optionally other fields such as scalar fields and massive vector fields (Proca fields, etc.). It has since been extended to include the case where the cosmological constant is positive (which recent observations are tending to support). Magnetic charge, if detected as predicted by some theories, would form the fourth parameter possessed by a classical black hole.
Counterexamples:
Counterexamples in which the theorem fails are known in spacetime dimensions higher than four; in the presence of non-abelian Yang–Mills fields, non-abelian Proca fields, some non-minimally coupled scalar fields, or skyrmions; or in some theories of gravity other than Einstein's general relativity. However, these exceptions are often unstable solutions and/or do not lead to conserved quantum numbers so that "The 'spirit' of the no-hair conjecture, however, seems to be maintained". It has been proposed that "hairy" black holes may be considered to be bound states of hairless black holes and solitons.
Counterexamples:
In 2004, the exact analytical solution of a (3+1)-dimensional spherically symmetric black hole with minimally coupled self-interacting scalar field was derived. This showed that, apart from mass, electrical charge and angular momentum, black holes can carry a finite scalar charge which might be a result of interaction with cosmological scalar fields such as the inflaton. The solution is stable and does not possess any unphysical properties; however, the existence of a scalar field with the desired properties is only speculative.
Observational results:
The LIGO results provide some experimental evidence consistent with the uniqueness of the no-hair theorem. This observation is consistent with Stephen Hawking's theoretical work on black holes in the 1970s.
Soft hair:
A study by Sasha Haco, Stephen Hawking, Malcolm Perry and Andrew Strominger postulates that black holes might contain "soft hair", giving the black hole more degrees of freedom than previously thought. This hair permeates at a very low-energy state, which is why it didn't come up in previous calculations that postulated the no-hair theorem. This was the subject of Hawking's final paper which was published posthumously.
**Algebraic Riccati equation**
Algebraic Riccati equation:
An algebraic Riccati equation is a type of nonlinear equation that arises in the context of infinite-horizon optimal control problems in continuous time or discrete time.
A typical algebraic Riccati equation is similar to one of the following: the continuous time algebraic Riccati equation (CARE),
$$A^\top P + P A - P B R^{-1} B^\top P + Q = 0,$$
or the discrete time algebraic Riccati equation (DARE),
$$P = A^\top P A - (A^\top P B)(R + B^\top P B)^{-1}(B^\top P A) + Q.$$
P is the unknown n by n symmetric matrix and A, B, Q, R are known real coefficient matrices.
Though generally this equation can have many solutions, it is usually specified that we want to obtain the unique stabilizing solution, if such a solution exists.
Origin of the name:
The name Riccati is given to these equations because of their relation to the Riccati differential equation. Indeed, the CARE is verified by the time invariant solutions of the associated matrix valued Riccati differential equation. As for the DARE, it is verified by the time invariant solutions of the matrix valued Riccati difference equation (which is the analogue of the Riccati differential equation in the context of discrete time LQR).
Context of the discrete-time algebraic Riccati equation:
In infinite-horizon optimal control problems, one cares about the value of some variable of interest arbitrarily far into the future, and one must optimally choose a value of a controlled variable right now, knowing that one will also behave optimally at all times in the future. The optimal current values of the problem's control variables at any time can be found using the solution of the Riccati equation and the current observations on evolving state variables. With multiple state variables and multiple control variables, the Riccati equation will be a matrix equation.
Context of the discrete-time algebraic Riccati equation:
The algebraic Riccati equation determines the solution of the infinite-horizon time-invariant Linear-Quadratic Regulator problem (LQR) as well as that of the infinite horizon time-invariant Linear-Quadratic-Gaussian control problem (LQG). These are two of the most fundamental problems in control theory.
Context of the discrete-time algebraic Riccati equation:
A typical specification of the discrete-time linear quadratic control problem is to minimize
$$\sum_{t=1}^{T} \left( y_t^\top Q\, y_t + u_t^\top R\, u_t \right)$$
subject to the state equation
$$y_t = A y_{t-1} + B u_{t-1},$$
where y is an n × 1 vector of state variables, u is a k × 1 vector of control variables, A is the n × n state transition matrix, B is the n × k matrix of control multipliers, Q (n × n) is a symmetric positive semi-definite state cost matrix, and R (k × k) is a symmetric positive definite control cost matrix.
Context of the discrete-time algebraic Riccati equation:
Induction backwards in time can be used to obtain the optimal control solution at each time,
$$u_t^* = -\left( B^\top P_t B + R \right)^{-1} \left( B^\top P_t A \right) y_{t-1},$$
with the symmetric positive definite cost-to-go matrix P evolving backwards in time from $P_T = Q$ according to
$$P_{t-1} = Q + A^\top P_t A - A^\top P_t B \left( B^\top P_t B + R \right)^{-1} B^\top P_t A,$$
which is known as the discrete-time dynamic Riccati equation of this problem. The steady-state characterization of P, relevant for the infinite-horizon problem in which T goes to infinity, can be found by iterating the dynamic equation repeatedly until it converges; then P is characterized by removing the time subscripts from the dynamic equation.
Solution:
Usually solvers try to find the unique stabilizing solution, if such a solution exists. A solution is stabilizing if using it for controlling the associated LQR system makes the closed loop system stable.
For the CARE, the control is $K = R^{-1} B^\top P$ and the closed-loop state transfer matrix is $A - BK = A - B R^{-1} B^\top P$, which is stable if and only if all of its eigenvalues have strictly negative real part.
For the DARE, the control is $K = (R + B^\top P B)^{-1} B^\top P A$ and the closed-loop state transfer matrix is $A - BK = A - B (R + B^\top P B)^{-1} B^\top P A$, which is stable if and only if all of its eigenvalues are strictly inside the unit circle of the complex plane.
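For intuition, a minimal worked scalar example (the numerical values are chosen here for illustration and are not from the original text): take $A = a = 0$, $B = b = 1$, $Q = q = 1$, $R = r = 1$. The CARE then reduces to
$$2 a p - \frac{b^2 p^2}{r} + q = -p^2 + 1 = 0,$$
with the two solutions $p = \pm 1$. Only $p = 1$ is stabilizing: the closed-loop dynamics $a - b r^{-1} b\, p = -1$ have a strictly negative real part, whereas $p = -1$ would give $+1$ and an unstable closed loop.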
Solution:
A solution to the algebraic Riccati equation can be obtained by matrix factorizations or by iterating on the Riccati equation. One type of iteration can be obtained in the discrete time case by using the dynamic Riccati equation that arises in the finite-horizon problem: in the latter type of problem each iteration of the value of the matrix is relevant for optimal choice at each period that is a finite distance in time from a final time period, and if it is iterated infinitely far back in time it converges to the specific matrix that is relevant for optimal choice an infinite length of time prior to a final period—that is, for when there is an infinite horizon.
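As a concrete sketch of this iteration (a scalar toy problem with made-up coefficients, written in standard C++, not code from the article), the recursion above can simply be run until it stops changing:

```cpp
#include <cmath>
#include <cstdio>

// Scalar DARE by backward iteration: p <- q + a^2 p - (a p b)^2 / (r + b^2 p),
// starting from the terminal condition P_T = Q and iterating to convergence.
int main() {
    const double a = 1.1, b = 1.0, q = 1.0, r = 1.0;  // assumed system and cost scalars
    double p = q;                                      // terminal condition P_T = Q
    for (int i = 0; i < 1000; ++i) {
        double next = q + a * a * p
                        - (a * p * b) * (a * p * b) / (r + b * b * p);
        if (std::fabs(next - p) < 1e-12) { p = next; break; }
        p = next;
    }
    double k = (b * p * a) / (r + b * b * p);          // steady-state feedback gain
    std::printf("P = %.6f, K = %.6f, closed-loop a - b*k = %.6f\n", p, k, a - b * k);
    return 0;
}
```

The closed-loop value printed at the end lies strictly inside the unit circle, which is the discrete-time stability check described above.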
Solution:
It is also possible to find the solution by finding the eigendecomposition of a larger system. For the CARE, we define the Hamiltonian matrix
$$Z = \begin{pmatrix} A & -B R^{-1} B^\top \\ -Q & -A^\top \end{pmatrix}.$$
Since Z is Hamiltonian, if it does not have any eigenvalues on the imaginary axis, then exactly half of its eigenvalues have a negative real part. If we denote the $2n \times n$ matrix whose columns form a basis of the corresponding subspace, in block-matrix notation, as
$$\begin{pmatrix} U_{1,1} \\ U_{2,1} \end{pmatrix},$$
then $P = U_{2,1} U_{1,1}^{-1}$ is a solution of the Riccati equation; furthermore, the eigenvalues of $A - B R^{-1} B^\top P$ are the eigenvalues of Z with negative real part.
Solution:
For the DARE, when A is invertible, we define the symplectic matrix
$$Z = \begin{pmatrix} A + B R^{-1} B^\top (A^{-1})^\top Q & -B R^{-1} B^\top (A^{-1})^\top \\ -(A^{-1})^\top Q & (A^{-1})^\top \end{pmatrix}.$$
Since Z is symplectic, if it does not have any eigenvalues on the unit circle, then exactly half of its eigenvalues are inside the unit circle. If we denote the $2n \times n$ matrix whose columns form a basis of the corresponding subspace, in block-matrix notation, as
$$\begin{pmatrix} U_{1,1} \\ U_{2,1} \end{pmatrix},$$
where $U_{1,1}$ and $U_{2,1}$ result from the decomposition
$$Z = \begin{pmatrix} U_{1,1} & U_{1,2} \\ U_{2,1} & U_{2,2} \end{pmatrix} \begin{pmatrix} \Lambda_{1,1} & \Lambda_{1,2} \\ 0 & \Lambda_{2,2} \end{pmatrix} \begin{pmatrix} U_{1,1}^\top & U_{2,1}^\top \\ U_{1,2}^\top & U_{2,2}^\top \end{pmatrix},$$
then $P = U_{2,1} U_{1,1}^{-1}$ is a solution of the Riccati equation; furthermore, the eigenvalues of $A - B (R + B^\top P B)^{-1} B^\top P A$ are the eigenvalues of Z which are inside the unit circle.
**Disconnection syndrome**
Disconnection syndrome:
Disconnection syndrome is a general term for a collection of neurological symptoms caused – via lesions to associational or commissural nerve fibres – by damage to the white matter axons of communication pathways in the cerebrum (not to be confused with the cerebellum), independent of any lesions to the cortex. The behavioral effects of such disconnections are relatively predictable in adults. Disconnection syndromes usually reflect circumstances where regions A and B still have their functional specializations except in domains that depend on the interconnections between the two regions. Callosal syndrome, or split-brain, is an example of a disconnection syndrome from damage to the corpus callosum between the two hemispheres of the brain. Disconnection syndrome can also lead to aphasia, left-sided apraxia, and tactile aphasia, among other symptoms. Other types of disconnection syndrome include conduction aphasia (a lesion of the association tract connecting Broca's area and Wernicke's area), agnosia, apraxia, pure alexia, etc.
Anatomy of cerebral connections:
Theodore Meynert, a neuroanatomist of the late 1800s, developed a detailed anatomy of white matter pathways. He classified the white matter fibers that connect the neocortex into three important categories – projection fibers, commissural fibers and association fibers. Projection fibers are the ascending and descending pathways to and from the neocortex. Commissural fibers are responsible for connecting the two hemispheres while the association fibers connect cortical regions within a hemisphere. These fibers make up the interhemispheric connections in the cortex.
Hemispheric disconnection:
Many studies have shown that disconnection syndromes such as aphasia, agnosia, apraxia, pure alexia and many others are not caused by direct damage to functional neocortical regions. They can also be present on only one side of the body, which is why these are categorized as hemispheric disconnections. Hemispheric disconnection occurs when the interhemispheric fibers, as mentioned earlier, are cut or reduced. An example is commissural disconnection in adults, which usually results from surgical intervention, tumor, or interruption of the blood supply to the corpus callosum or the immediately adjacent structures. Callosal disconnection syndrome is characterized by left ideomotor apraxia and left-hand agraphia and/or tactile anomia, and is relatively rare. Other examples include commissurotomy, the surgical cutting of cerebral commissures to treat epilepsy, and callosal agenesis, in which individuals are born without a corpus callosum. Those with callosal agenesis can still perform interhemispheric comparisons of visual and tactile information, but with deficits in processing complex information when performing the respective tasks.
Sensorimotor disconnection:
Hemispheric disconnection has impacted behaviors relating to the sensory and motor systems. The different systems affected are listed below: Olfaction – The olfactory system is not crossed across hemispheres like the other senses, which means that left input goes to the left hemisphere and right input goes to the right hemisphere. Fibers in the anterior commissure control the olfactory regions in each hemisphere. A patient who lacks an anterior commissure cannot name odors entering the right nostril or use the right hand to pick up the object corresponding to the odor because the left hemisphere, responsible for language and controlling the right hand, is disconnected from the sensory information.
Sensorimotor disconnection:
Vision – Information from one visual field travels to the contralateral hemisphere. Therefore, with a commissurotomy patient, visual information presented in the left visual field travelling to the right hemisphere would be disconnected from verbal output since the left hemisphere is responsible for speech.
Somatosensory – If the two hemispheres are disconnected, the somatosensory functions of the left and right parts of the body become independent. For example, when something is placed on the left hand of a blindfolded patient with the two hemispheres disconnected, the left hand can pick the correct object within a set of objects but the right hand cannot.
Sensorimotor disconnection:
Audition – Though most of the auditory input from one ear projects to the opposite hemisphere, the hemisphere on the same side also receives some input. Therefore, the disconnection effects seem to be reduced in audition compared to the other systems. However, studies have shown that when the hemispheres are disconnected, the individual does not hear anything from the left and only hears from the right.
Sensorimotor disconnection:
Movement – Apraxia and agraphia may occur: responding to verbal instructions by movement or writing with the left hand is inhibited, because the right hemisphere, which controls the left hand, cannot receive these instructions from the disconnected left hemisphere.
History:
The concept of disconnection syndrome emerged in the late nineteenth century when scientists became aware that certain neurological disorders result from communication problems among brain areas. In 1874, Carl Wernicke introduced this concept in his dissertation when he suggested that conduction aphasia could result from the disconnection of the sensory speech zone from the motor speech area by a single lesion in the left hemisphere to the arcuate fasciculus. As the father of the disconnection theory, Wernicke believed that instead of being localized in specific regions of the brain, higher functions resulted from associative connections between the motor and sensory memory areas.
History:
Lissauer, a pupil of Wernicke, described a case of visual agnosia as a disconnection between the visual and language areas. Dejerine in 1892 described specific symptoms resulting from a lesion to the corpus callosum that caused alexia without agraphia. The patient had a lesion in the left occipital lobe, blocking sight in the right visual field (hemianopia), and in the splenium of the corpus callosum. Dejerine interpreted this case as a disconnection of the speech area in the left hemisphere from the right visual cortex.
History:
In 1965, Norman Geschwind, an American neurologist, wrote ‘Disconnexion syndromes in animals and man’ where he described a disconnectionist framework that revolutionized neurosciences and clinical neurology. Studies of the monkey brain led to his theory that disconnection syndromes were higher function deficits. Building on Wernicke and previously mentioned psychologists’ idea that disconnection syndromes involved white matter lesion to association tracts connecting two regions of the brain, Geschwind was more detailed in explaining some disconnection syndromes as lesions of the association cortex itself, specifically in the parietal lobe. He described the callosal syndrome, an example of a disconnection syndrome, which is a lesion in the corpus callosum that leads to tactile anomia in just the patient’s left hand. Though Geschwind made significant advances in describing disconnection syndromes, he was not completely accurate. He didn’t think the association cortex had any specialized role of its own besides acting as a relay station between the primary sensory and motor areas. However, in the 1960s and 1970s, Mesulam and Damasio incorporated specific functional roles for the association cortex. With Mesulam and Damasio’s contributions, Geschwind’s model has evolved over the past 50 years to include connections between brain regions as well as specializations of association cortices. More recently, neurologists have been using imaging techniques such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) to visualize association pathways in the human brain to advance the future of this disconnection theme.
**Viewable impression**
Viewable impression:
In the online advertising industry, a viewable impression is a measure of whether a given advert was actually seen by a human being, as opposed to being out of view or served as the result of automated activity. The viewable impression guidelines are administered by the Media Rating Council and require that a minimum of 50% of the pixels in the advertisement were in an in-focus tab on the viewable space of the browser page for at least one continuous second.
Viewable impression:
The first system to deliver reports based on a viewable impression metric for standard IAB (Interactive Advertising Bureau) Display ad units, called RealVu, was developed by Rich Media Worldwide and accredited by the Media Rating Council on March 9, 2010. Other companies to offer viewable impressions include DMA-Institute OnScroll, C3 Metrics, Comscore, and AdYapper, while MSNBC utilizes ServeView, a proprietary system in use since 2010.
Viewable impression:
The definition of a viewable impression may depend on the type of the ad units and the reporting system. For example, a viewable impression for ads of pre-defined size delivered to pre-defined space on the content page is registered by RealVu when the ad content is loaded, rendered, and at least 60% of the ad surface area is within the visible area of a viewer's browser window on an in focus web page for at least one second. Click-through is enabled at the moment of the "viewable impression".
Viewable impression:
Viewable impressions were developed as an improvement on the online impression metrics measured by the first ad servers, developed in the mid-1990s, which analyze HTTP requests in a server log and cannot provide information on events fired by a viewer’s browser; thus, they cannot measure whether ad content was actually visible to a viewer.
Overview:
With the development of the first ad servers in 1995–1996 the assumption was that a requested ad was always available to the viewer of a requested web page. This allowed for the utilization of the server log file for collection of metadata to deliver a metric called the Online Impression that in traditional media meant an impression on a viewer.
Overview:
This type of advertising metric was meant to resemble television and print advertising methods of estimating the cost of an advertisement, with the promise of even more accuracy due to the interactive nature of the Internet eliminating the need for industry-accepted approximations such as Nielsen ratings for television and circulation figures for print publications.
Overview:
The value of an ad traditionally was based upon an estimate of how many different people saw or heard the ad. The following are the currently accepted means of calculating CPM for different media: 1. CPM for print media (when audience data is available); 2. CPM for broadcast media. With the advent of the Internet, it was believed that ad views could be tracked with unprecedented accuracy through the data collected in server log files; "number of different prospects reached" was removed from the equation, and a new CPM equation was created for the Internet: 3. CPM for the Internet. However, the assumption that an ad requested from an ad server is always visible when the viewer is on the requested page was wrong, because of a few technical reasons and the fact that the web page is usually longer than the height of a computer screen. Eventually it became apparent that a large number of ad impressions measured for CPM pricing actually never rendered in the visible area of a viewer’s browser screen.
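The specific formulas enumerated above are not reproduced in this copy. As a point of reference, the generic cost-per-mille relation in common industry use (a standard definition, not taken from the original figures) is
$$\mathrm{CPM} = \frac{\text{total cost of the ad placement}}{\text{audience delivered}} \times 1000,$$
where the audience term is an estimate (circulation or ratings data) for print and broadcast media, and the count of served impressions for the Internet version, which is exactly the substitution the passage describes.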
CPMV (cost per thousand viewable ads):
Until 2010 it was very common for large publishers to charge for most of their advertising inventory on a CPM or CPT basis. A related term, effective cost per mille (eCPM), is used to measure the effectiveness of advertising inventory sold (by the publisher) via a CPC, CPA, or CPT basis.
CPMV (cost per thousand viewable ads):
Partially to avoid the limitations of server-side impression methodology, many models emerged that were based on direct response:
CPC – cost per click-through
CPL – cost per lead (lead usually meaning a free registration)
CPS – cost per sale
dCPM – dynamic CPM
CPA – cost per action
The Viewable Impression approach enables online advertising effectiveness to be analyzed based on stopping power, branding ability and level of engagement – the three key elements that drive purchase consideration and, ultimately, sales. Having no reliable way of measuring actual viewership, web publishers are vulnerable to payment methods that are based on performance-based advertising such as cost per click and cost per transaction. Since the publisher has no control or input on the demand and ad creative quality of the advertised product, web publishers lose control of their yield, giving away significant inventory to ads that are not clicked.
CPMV (cost per thousand viewable ads):
With the arrival of the Viewable Impression model, a cost per thousand viewable ads has emerged, quoted in terms of CPMV. This model may eventually become the standard CPM, as it is measured at the same point (the view) as television or print.
Architecture example:
Viewable Impression relies on web bugs (or 'tags') placed on the web pages or in the third-party ad servers that distribute ads on the website's content pages. These tags are placed on a web page and, when rendered, employ a "Correlator" (a linear correlation control). The ad space is then "marked up", an "ad request" (server log impression) is recorded, and the Correlator begins communicating with the web page, browser and ad unit (ad space) embedded in the webpage content. The Correlator can collect additional non-private information from the viewer’s browser, including the viewer’s operating system, browser type and version, and a list of other ads that were previously rendered on the page to prevent duplication of ads on the content page. Once any portion of the ad unit (definable) on a viewer's in-focus web page hits the visible area of the browser window, a request is sent to an ad content server to deliver an advertisement. Once the ad content is loaded and rendered, an "Ad Rendered" is reported. The Correlator continues to monitor the ad space for each individual ad on the web page and its relation to the browser window dimensions, scrolling position and web page focus, considering whether the viewer has scrolled the ad space in or out of the visible area of the browser window, minimized, tabbed away, or opened another browser or application window, bringing the monitored web page out of focus or a portion of the browser window with the ad space outside of the monitor screen. When 60% (or another pre-defined area) of the ad content on a web page is within the visible area of the viewer's browser window for one second, a message is sent via the Correlator and a "Viewable Impression" is reported. The Correlator code continues to monitor the web page focus and scrolling position, the location of the ad unit(s) and the visible area of the browser window, and communicates to the reporting server, logging the "Time in View" for the ads being delivered on the webpage.
Implementation:
The complete viewer's environment is gathered by a client-side technology for every viewable impression reported and transmitted back to a server-side database.
Data for each view includes the viewer's display resolution, the viewer's browser window dimensions, the dimensions of the web page the ad appeared on, the location of the ad on the page, and the scroll position at the time the viewable impression was recorded.
This data results in a visual representation of the viewer's environment of each viewable impression reported.
Then the position of the ad is calculated, as is the area of the ad shown on the screen.
Implementation:
Also, the view time of the ad is collected by the client (viewer) side engine, considering whether the web page the ad resides on is "in focus". ("In focus" is defined as when a web page is the primary window open on a user's screen, unobstructed by any other application window. Web page focus can be affected by: minimizing the browser, opening or switching to another browser window or application, opening or switching to another browser window tab, or placing the cursor on the browser address bar or other browser button.)
Limitations of impression methodology:
Reasons why an impression may not appear to a viewer (overcome by viewable impression measurement):
1. The viewer clicks to another web page before the ad loads and renders;
2. The ad loads, but in an area of the web page that is not within the viewer's browser window dimensions and scrolling position;
3. The request was made by spiders, crawlers, web directories, download managers, link checkers, proxy servers, web filtering tools, harvesters or spambots. (This bad-bots issue may be addressed in part already by a standard ad server following IAB guidelines, but more study needs to be done to assess whether all non-human technology is identified by the current approaches and whether viewable impression technology can improve on those measures. Current assessments suggest improvement with viewable impression methodology);
4. The viewer has a particular type of ad blocker installed that could disrupt ad serving but still initiate the count of an impression. (Some ad blockers block the ad call, some do not. More study should be done in this area);
5. The viewer does not have the proper plug-in installed to render interactive media;
6. The viewer opens a page in a mobile device that is not configured to show the ad content;
7. The viewer minimizes the browser;
8. The viewer opens another browser window or another application;
9. The viewer opens another browser tab;
10. The viewer switches focus to another browser or application;
11. The viewer moves the browser window so the ad is outside the display screen area;
12. The viewer has multiple home pages set, so when the browser is opened, two pages open in two tabs and an ad resides on the tab that is not in focus;
13. In the case of pre-roll video and video advertising, the viewer minimizes the browser, tabs away from, or opens another application over the video while the advertisement is playing, or moves the browser window so the video is outside the display screen area.
Reasons associated with fraud why an impression may not appear to a viewer (overcome by viewable impression measurement):
15. The request was made by an (invisible to the viewer) web page re-direct;
16. The web publisher places multiple ad displays in layers over each other. The viewer then sees one ad, but impressions are reported for all layered ads;
17. The web publisher places an image or shape on a layer overlapping an ad;
18. An ad or beacon is delivered in an invisible width="0" height="0" iframe;
19. Mutilated (HTTP poisoning) packets – impression fraud.
Limitations related to data analysis and distribution flow with impression methodology (overcome by viewable impression measurement):
20. Ad rotation visibility lottery. Not knowing which ads in rotation were in view for each ad selection means certain ads may never be visible, making all the statistical data meaningless. (For example, three ads are in rotation. In rotation 1, the first is in view, the second is not, and the third is in view. In rotation 2, the first is in view, the second is not, and the third is not. In rotation 3, the second ad is again not in view. All ads are reported in impressions, reach, frequency and all other measurements, but ad 2 was never visible.)
Limitations of impression methodology:
21. Reach and Frequency measured for ads that are not visible. An ad that is not visible did not reach anyone, making reach and frequency measurements meaningless.
22. Impressions that are not visible are included in click-through rate, making the click rate misleading.
23. Display of complete branding messages and contact information is prohibitive: if a click-through is necessary to measure advertising, adding complete branding messages and contact information in a display ad is prohibitive, because then a click through to a website is not necessary.
24. All parties involved see different impressions reports that are impossible to reconcile.
25. Reporting latency. Because server log files must be batched, filtered, and transferred to a database for reporting, significant delays exist before reporting data is available.
26. No reported log or visual representation of each unique viewer's environment, including the viewer's display resolution, browser window area and scrolling position;
27. No reported log or visual representation of each web page URL an ad is delivered to, in addition to the web page dimensions and placement location of the ad on the web page.
28. No data reported for the view time of each individual viewable impression.
29. CPM value dilution because of an unlimited supply of inventory.
30. Redundant ad delivery. (The delivery of the same ads on the same web page.)
**Science of Science Tool (Sci2)**
Science of Science Tool (Sci2):
The Science of Science (Sci2) Tool is a modular toolset specifically designed for the study of science. It supports the temporal, geospatial, topical, and network analysis and visualization of datasets at the micro (individual), meso (local), and macro (global) levels. Users of the tool can: Access science datasets online or load their own.
Perform different types of analysis with the most effective algorithms available.
Use different visualizations to interactively explore and understand specific datasets.
Share datasets and algorithms across scientific boundaries.
Overview:
The Sci2 Tool is built on the Cyberinfrastructure Shell (CIShell), an open-source software framework for the easy integration and utilization of datasets, algorithms, tools, and computing resources developed by the Cyberinfrastructure for Network Science Center at Indiana University. CIShell is based on the OSGi R4 Specification and Equinox implementation. Sci2 usage is detailed in the Sci2 Manual and taught in the Information Visualization MOOC.
Details:
Sci2 hosts many tools to aid in every step of the data preparation, analysis, and visualization process. Listed below are a few of the many Sci2 features.
Supported data sources – data formats currently supported on the Sci2 platform: Bibtex (.bib), TreeML (.xml), CSV (*.csv), Edgelist (.edge), Endnote Export Format (.enw), GraphML (.xml or .graphml), ISI (*.isi), NSF csv format (.nsf), NSF format (.nsf), Pajek (*.net), Scopus format (*.scopus), XGMML (.xml), NWB (.nwb), Pajek Matrix (.mat).
Functionality:
Loading Data: load a supported file format for preparation, analysis, or visualization.
Details:
Data Preparation: extract networks from raw data or update currently existing networks by merging nodes and removing duplicates.
Processing: clean data for analysis and visualization.
Analysis: employ a variety of advanced analysis algorithms for temporal, topical, geospatial, and network data.
Modeling: graph generation with aging, scaling, random, and other specifications.
Visualization: visualize temporal, topical, geospatial, and network data.
Development:
The CNS Center and Sci2 developers encourage users to modify and develop plugins and functionality for the tool. The entire platform is open-source and the source code can be downloaded from the SVN repository.
**Much–Holzmann reaction**
Much–Holzmann reaction:
The Much–Holzmann reaction was an early attempt at a serological test for the diagnosis of dementia praecox, an early-twentieth century psychiatric diagnosis superseded by schizophrenia. The originators of this test, Much and Holzmann of Eppendorf, posited that sera from patients with dementia praecox protected red blood cells from cobra venom hemolysis.
**Acedoben**
Acedoben:
Acedoben (4-acetamidobenzoic acid or N-acetyl-PABA) is a chemical compound with the molecular formula of C9H9NO3. It is the acetyl derivative of para-aminobenzoic acid (PABA).
Acedoben, as a salt with dimepranol, is a component of some pharmaceutical preparations such as inosine pranobex.
**Dichloramine-T**
Dichloramine-T:
Dichloramine-T or N,N-Dichloro-p-toluenesulfonamide is a chemical used as a disinfectant starting at the beginning of the 20th century. The chemical contains toluene substituted by a sulfonamide grouping, which in turn has two chlorine atoms attached to the nitrogen.
Production:
Dichloramine-T was first made by Frederick Daniel Chattaway in 1905.
Dichloramine-T can be made from para-toluenesulfonamide and bleaching powder, or chlorine.
Properties:
Dichloramine-T degrades with exposure to light or air. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vermifilter toilet**
Vermifilter toilet:
Vermifilter toilet, also known as a primary vermifilter, vermidigester toilet, tiger toilet or tiger worm toilet, is an on-site sanitation system in which human excreta are delivered from a toilet (usually by flushing) onto a medium containing a worm-based ecosystem. Faecal solids are trapped on the surface of the vermifilter where digestion takes place. Liquids typically flow through drainage media, before the effluent is infiltrated into the soil.
Description:
A vermifilter toilet contains composting worms such as Eisenia fetida that digest human faeces, thus reducing the accumulation of solids in the system and reducing the need for frequent emptying, in comparison with pit latrines. Further, worm-based digestion is virtually complete and produces vermicompost, so emptying does not involve handling of sludge or require a specialist service. This is a key benefit to users, as is the associated lack of smells. In field trials in rural India, Chemical oxygen demand (COD) and faecal indicators were reduced by 60% and 99% respectively in the effluent. A worm colony can live inside the vermifilter indefinitely as long as the correct environmental conditions are maintained. Worms need air, food (human faeces) and added (flush) water. An aerobic environment must be provided (e.g. ventilation), and the liquid effluent must be able to drain away. It is important to site the vermifilter correctly so that any risk of flooding is avoided.
Description:
Maintenance consists of occasionally removing the accumulated vermicompost: it is estimated that vermicompost removal will be required every 6–8 years, about one-half to one-third of the fill rate for an equivalent size of pit latrine with the same number of users. Emptying latrines can be expensive and often comes with smell and contamination issues; in long-term refugee camps, vermifilter toilets reduce the need to replace filled pit latrines and are more cost-effective. A vermifilter toilet provides primary treatment of human excreta. Provided they are used correctly and maintenance is carried out safely, they offer an affordable route towards safely managed sanitation, the new ambition for global sanitation for all.
Examples:
- Bear Valley Ventures has used the brand "Tiger Toilet" for marketing their product.
- Biofilcom and the GSAP Microflush toilet secured funding from the Bill & Melinda Gates Foundation to develop vermifilter toilet technology in Africa.
- Oxfam also instructs construction of brandless vermifilter toilets.
- TBF Environmental Solutions Pvt Ltd markets the "Tiger Toilet" in India; Biofil markets a vermifilter toilet in Bangladesh and Ghana.
- Biolytix in New Zealand; Naturalflow in New Zealand; A&A Worm Farm Waste Systems in Australia.
- Wendy Howard provides open-source plans for decentralized on-site vermifiltration septic treatment and distribution, and has been involved with promoting this technology in Portugal.
- Vermifilter.com provides low-cost design options for building vermifilter toilets from readily available materials.
History:
Anna Edey constructed a vermicomposting flush toilet in 1995, called the Solviva Biocarbon filter system; this was later adapted by Wendy Howard. Dean Cameron in Australia developed the "dowmus" vermifilter toilet in the mid-1990s, which morphed into the Biolytix system. Colin Bell from New Zealand began marketing his "wormorator", a twin-chamber vermifilter toilet, in the late 1990s. Later, attention began to focus on applications in the developing world in 2009–2012 through the Sanitation Ventures project at the London School of Hygiene and Tropical Medicine (LSHTM), funded by a grant from the Bill & Melinda Gates Foundation. This project had the goal of finding solutions to the problem of pit latrine filling: vermifilter toilets appeared to be an attractive option. Colin Bell provided the design, and technical development was led by Claire Furlong in collaboration with Professor Michael Templeton of Imperial College London; the work was carried out at the Centre for Alternative Technology (CAT) in Wales. By the end of the project, the team had built a usable prototype at CAT, determined key operating parameters and shown that there was consumer interest.
History:
In parallel with the LSHTM work and also with Bill and Melinda Gates Foundation funding, Biofilcom (under Kweko Annu) developed a vermifilter toilet which has been commercialised in Ghana and Bangladesh. Development of the GSAP (Ghana Sustainable Aid Project) Microflush vermifilter toilet was also funded by the Bill and Melinda Gates Foundation.
Oxfam subsequently funded the construction of field-based trials in Ethiopia (2013) and Liberia (2013), while ACTED funded the development and construction of a communal (school) vermifilter toilet in Pakistan.
History:
In continuation of the earlier Sanitation Ventures work, in 2013 Bear Valley Ventures was awarded a Development Innovation Ventures grant from USAID to support field testing in three countries and three different settings. This work was carried out in partnership with Oxfam (humanitarian relief camp, Myanmar), Water for People (peri-urban, Uganda) and PriMove (rural, India). After a year-long trial the conclusion was that it worked well in all three settings; the results from India have been published. After field testing, Bear Valley Ventures and PriMove (under Ajeet Oak) continued to collaborate from 2014 onwards on developing and marketing the Tiger Toilet brand vermifilter toilet to low-income rural and peri-urban households. From 2015 to 2017 they worked with the Institute for Transformative Technologies (under Shashi Bulaswar) to rigorously test the product and explore paths to scale. In 2018 Bear Valley Ventures and PriMove set up TBF Environmental Solutions Pvt Ltd to commercialise the Tiger Toilet and related technologies.
History:
Oxfam (under Andy Bastable) have collaborated closely with Dr Claire Furlong to further develop applications for emergency and humanitarian camps.
In 2020 the International Worm-based Sanitation Association was formed under the leadership of Prof. Michael Templeton of Imperial College London to share, develop and promote best practice in vermifiltration for sanitation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Master of Science in Information Technology**
Master of Science in Information Technology:
A Master of Science in Information Technology (abbreviated M.Sc.IT, MScIT or MSIT) is a master's degree in the field of information technology awarded by universities in many countries or a person holding such a degree. The MSIT degree is designed for those managing information technology, especially the information systems development process. The MSIT degree is functionally equivalent to a Master of Information Systems Management, which is one of several specialized master's degree programs recognized by the Association to Advance Collegiate Schools of Business (AACSB).
Master of Science in Information Technology:
One can become a software engineer or data scientist after completing an M.Sc. IT.
Curriculum:
A joint committee of Association for Information Systems (AIS) and Association for Computing Machinery (ACM) members develops a model curriculum for the Master of Science in Information Systems (MSIS). The most recent version of the MSIS Model Curriculum was published in 2016.
Course and Variants:
The course of study is concentrated around the Information Systems discipline. The core courses are (typically) Systems analysis, Systems design, Data Communications, Database design, Project management and Security.
Course and Variants:
The degree typically includes coursework in both computer science and business skills, but the core curriculum might depend on the school and result in other degrees and specializations, including:
- Master of Science (Information Technology) M.Sc.(I.T)
- Master of Computer Applications (MCA)
- Master in Information Science (MIS)
- Master of Science in Information and Communication Technologies (MS-ICT)
- Master of Science in Information Systems Management (MISM)
- Master of Science in Information Technology (MSIT or MS in IT)
- Master of Computer Science (MCS)
- Master of Science in Information Systems (MSIS)
- Master of Science in Management of Information Technology (M.S. in MIT)
- Master of Information Technology (M.I.T.)
- Master of IT (M. IT or MIT) in Denmark
- Candidatus/candidata informationis technologiæ (Cand. it.) in Denmark
- Master of Information Science and Technology (M.I.S.T.) from The University of Tokyo and Osaka University, Japan | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Indexing Service**
Indexing Service:
Indexing Service (originally called Index Server) was a Windows service that maintained an index of most of the files on a computer to improve searching performance on PCs and corporate computer networks. It updated indexes without user intervention. In Windows Vista it was replaced by the newer Windows Search Indexer. The IFilter plugins that extend indexing to more file formats and protocols are compatible between the legacy Indexing Service and the newer Windows Search Indexer.
History:
Indexing Service was a desktop search service included with Windows NT 4.0 Option Pack as well as Windows 2000 and later. The first incarnation of the indexing service was shipped in August 1996 as a content search system for Microsoft's web server software, Internet Information Services. Its origins, however, date further back to Microsoft's Cairo operating system project, with the component serving as the Content Indexer for the Object File System. Cairo was eventually shelved, but the content indexing capabilities would go on to be included as a standard component of later Windows desktop and server operating systems, starting with Windows 2000, which includes Indexing Service 3.0. In Windows Vista, the content indexer was replaced with the Windows Search indexer, which was enabled by default. Indexing Service is still included with Windows Server 2008 but is not installed or running by default. Indexing Service has been deprecated in Windows 7 and Windows Server 2008 R2. It has been removed from Windows 8.
Search interfaces:
Comprehensive searching is available after the initial building of the index, which can take hours or days depending on the size of the specified directories, the speed of the hard drive, user activity, indexer settings and other factors. Searching using Indexing Service also works on UNC paths and mapped network drives, provided the sharing server indexes the appropriate directory and is aware of it being shared.
Search interfaces:
Once the indexing service has been turned on and has built its index it can be searched in three ways. The search option available from the Start menu on the Windows Taskbar will use the indexing service if it is enabled and will even accept complex queries. Queries can also be performed using either the Indexing Service Query Form in the Computer Management snap-in of Microsoft Management Console, or, alternatively, using third-party applications such as 'Aim at File' or 'Grokker Desktop'. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
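For programmatic access, the catalog could also be queried through ADO using the service's OLE DB provider (MSIDXS) and its SQL-like dialect. The snippet below is a hedged sketch using the pywin32 bindings; the default catalog name "System", the selected property columns and the search phrase are illustrative assumptions, and it can only run on Windows versions that still ship Indexing Service with a built index.

```python
# Hedged sketch: querying the legacy Indexing Service catalog over ADO with
# the MSIDXS OLE DB provider (pywin32 required). "System" is the default
# catalog name; SCOPE()/FREETEXT belong to the service's SQL-like dialect.
import win32com.client

rs = win32com.client.Dispatch("ADODB.Recordset")
rs.Open(
    "SELECT FileName, Path, Size FROM SCOPE() WHERE FREETEXT('annual report')",
    "Provider=MSIDXS;Data Source=System",
)
while not rs.EOF:
    print(rs.Fields("FileName").Value, rs.Fields("Path").Value)
    rs.MoveNext()
rs.Close()
```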
**Lapidary medicine**
Lapidary medicine:
Lapidary medicine is a pseudoscientific concept based on the belief that gemstones have healing properties. The source of the idea of lapidary medicine stems from information found in lapidaries, books giving "information about the properties and virtues of precious and semi-precious stones." These lapidaries not only provide understanding of the sale and production of items of lapidary medicine, but also provide information about common cultural practices and beliefs about gemstones.
Lapidary medicine:
The most common application of the concept was to embed precious stones within open-backed jewelry. In his book The boke of secretes of Albertus Magnus of the vertues of herbes, stones, and certayne beasts, bishop Albertus Magnus also suggests the stone be held directly to the skin, or more specifically "be wrapped in a lynnen cloth, or in a calues skyn, and borne vnder ye left arme hole[...]" While widespread belief in lapidary theory had all but disappeared by the twenty-first century, remnants of the idea can be found in the pseudoscientific concept of crystal healing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Semi-deciduous**
Semi-deciduous:
Semi-deciduous or semi-evergreen is a botanical term which refers to plants that lose their foliage for a very short period, when old leaves fall off and new foliage growth is starting. This phenomenon occurs in tropical and sub-tropical woody species, for example in Dipteryx odorata. Semi-deciduous or semi-evergreen may also describe some trees, bushes or plants that normally only lose part of their foliage in autumn/winter or during the dry season, but might lose all their leaves in a manner similar to deciduous trees in an especially cold autumn/winter or severe dry season (drought). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nazarov cyclization reaction**
Nazarov cyclization reaction:
The Nazarov cyclization reaction (often referred to as simply the Nazarov cyclization) is a chemical reaction used in organic chemistry for the synthesis of cyclopentenones. The reaction is typically divided into classical and modern variants, depending on the reagents and substrates employed. It was originally discovered by Ivan Nikolaevich Nazarov (1906–1957) in 1941 while studying the rearrangements of allyl vinyl ketones.
Nazarov cyclization reaction:
As originally described, the Nazarov cyclization involves the activation of a divinyl ketone using a stoichiometric Lewis acid or protic acid promoter. The key step of the reaction mechanism involves a cationic 4π-electrocyclic ring closure which forms the cyclopentenone product (See Mechanism below). As the reaction has been developed, variants involving substrates other than divinyl ketones and promoters other than Lewis acids have been subsumed under the name Nazarov cyclization provided that they follow a similar mechanistic pathway.
Nazarov cyclization reaction:
The success of the Nazarov cyclization as a tool in organic synthesis stems from the utility and ubiquity of cyclopentenones as both motifs in natural products (including jasmone, the aflatoxins, and a subclass of prostaglandins) and as useful synthetic intermediates for total synthesis. The reaction has been used in several total syntheses and several reviews have been published.
Mechanism:
The mechanism of the classical Nazarov cyclization reaction was first demonstrated experimentally by Charles Shoppee to be an intramolecular electrocyclization and is outlined below. Activation of the ketone by the acid catalyst generates a pentadienyl cation, which undergoes a thermally allowed 4π conrotatory electrocyclization as dictated by the Woodward–Hoffmann rules. This generates an oxyallyl cation which undergoes an elimination reaction to lose a β-hydrogen. Subsequent tautomerization of the enolate produces the cyclopentenone product.
Mechanism:
As noted above, variants that deviate from this template are known; what designates a Nazarov cyclization in particular is the generation of the pentadienyl cation followed by electrocyclic ring closure to an oxyallyl cation. In order to achieve this transformation, the molecule must be in the s-trans/s-trans conformation, placing the vinyl groups in an appropriate orientation. The propensity of the system to enter this conformation dramatically influences reaction rate, with α-substituted substrates having an increased population of the requisite conformer due to allylic strain. Coordination of an electron donating α-substituent by the catalyst can likewise increase the reaction rate by enforcing this conformation.
Mechanism:
Similarly, β-substitution directed inward restricts the s-trans conformation so severely that E-Z isomerization has been shown to occur in advance of cyclization on a wide range of substrates, yielding the trans cyclopentenone regardless of initial configuration. In this way, the Nazarov cyclization is a rare example of a stereoselective pericyclic reaction, whereas most electrocyclizations are stereospecific. The example below uses triethylsilane to trap the oxyallyl cation so that no elimination occurs. (See Interrupted cyclizations below) Along this same vein, allenyl vinyl ketones of the type studied extensively by Marcus Tius of the University of Hawaii show dramatic rate acceleration due to the removal of β-hydrogens, obviating a large amount of steric strain in the s-cis conformer.
Classical Nazarov cyclizations:
Though cyclizations following the general template above had been observed prior to Nazarov's involvement, it was his study of the rearrangements of allyl vinyl ketones that marked the first major examination of this process. Nazarov correctly reasoned that the allylic olefin isomerized in situ to form a divinyl ketone before ring closure to the cyclopentenone product. The reaction shown below involves an alkyne oxymercuration reaction to generate the requisite ketone.
Classical Nazarov cyclizations:
Research involving the reaction was relatively quiet in subsequent years, until the mid-1980s, when several syntheses employing the Nazarov cyclization were published. Shown below are key steps in the syntheses of trichodiene and nor-sterepolide, the latter of which is thought to proceed via an unusual alkyne-allene isomerization that generates the divinyl ketone.
Shortcomings: The classical version of the Nazarov cyclization suffers from several drawbacks which modern variants attempt to circumvent. The first two are not evident from the mechanism alone, but are indicative of the barriers to cyclization; the last three stem from selectivity issues relating to elimination and protonation of the intermediate.
Strong Lewis or protic acids are typically required for the reaction (e.g. TiCl4, BF3, MeSO3H). These promoters are not compatible with sensitive functional groups, limiting the substrate scope.
Despite the mechanistic possibility for catalysis, multiple equivalents of the promoter are often required in order to effect the reaction. This limits the atom economy of the reaction.
The elimination step is not regioselective; if multiple β-hydrogens are available for elimination, various products are often observed as mixtures. This is highly undesirable from an efficiency standpoint as arduous separation is typically required.
Elimination destroys a potential stereocenter, decreasing the potential usefulness of the reaction.
Protonation of the enolate is sometimes not stereoselective, meaning that products can be formed as mixtures of epimers.
Modern variants:
The shortcomings noted above limit the usefulness of the Nazarov cyclization reaction in its canonical form. However, modifications to the reaction focused on remedying its issues continue to be an active area of academic research. In particular, the research has focused on a few key areas: rendering the reaction catalytic in the promoter, effecting the reaction with more mild promoters to improve functional group tolerance, directing the regioselectivity of the elimination step, and improving the overall stereoselectivity. These have been successful to varying degrees.
Modern variants:
Additionally, modifications focused on altering the progress of the reaction, either by generating the pentadienyl cation in an unorthodox fashion or by having the oxyallyl cation "intercepted" in various ways. Furthermore, enantioselective variants of various kinds have been developed. The sheer volume of literature on the subject prevents a comprehensive examination of this field; key examples are given below.
Modern variants:
Silicon-directed cyclization: The earliest efforts to improve the selectivity of the Nazarov cyclization took advantage of the β-silicon effect in order to direct the regioselectivity of the elimination step. This chemistry was developed most extensively by Professor Scott Denmark of the University of Illinois, Urbana-Champaign in the mid-1980s and utilizes stoichiometric amounts of iron trichloride to promote the reaction. With bicyclic products, the cis isomer was selected for to varying degrees.
Modern variants:
The silicon-directed Nazarov cyclization reaction was subsequently employed in the synthesis of the natural product Silphinene, shown below. The cyclization takes place before elimination of the benzyl alcohol moiety, so that the resulting stereochemistry of the newly formed ring arises from approach of the silyl alkene anti to the ether.
Modern variants:
Polarization: Drawing on the substituent effects compiled over various trials of the reaction, Professor Alison Frontier of the University of Rochester developed a paradigm for "polarized" Nazarov cyclizations in which electron donating and electron withdrawing groups are used to improve the overall selectivity of the reaction. Creation of an effective vinyl nucleophile and vinyl electrophile in the substrate allows catalytic activation with copper triflate and regioselective elimination. In addition, the electron withdrawing group increases the acidity of the α-proton, allowing selective formation of the trans-α-epimer via equilibration.
Modern variants:
It is often possible to achieve catalytic activation using a donating or withdrawing group alone, although the efficiency of the reaction (yield, reaction time, etc.) is typically lower.
Modern variants:
Alternative cation generation: By extension, any pentadienyl cation regardless of its origin is capable of undergoing a Nazarov cyclization. There have been a large number of examples published where the requisite cation is arrived at by a variety of rearrangements. One such example involves the silver-catalyzed cationic ring opening of allylic dichlorocyclopropanes. The silver salt facilitates loss of chloride via precipitation of insoluble silver chloride.
Modern variants:
In the total synthesis of rocaglamide, epoxidation of a vinyl alkoxyallenyl stannane likewise generates a pentadienyl cation via ring opening of the resultant epoxide.
Modern variants:
Interrupted cyclization: Once the cyclization has occurred, an oxyallyl cation is formed. As discussed extensively above, the typical course for this intermediate is elimination followed by enolate tautomerization. However, these two steps can be interrupted by various nucleophiles and electrophiles, respectively. Oxyallyl cation trapping has been developed extensively by Frederick G. West of the University of Alberta, and his review covers the field. The oxyallyl cation can be trapped with heteroatom and carbon nucleophiles and can also undergo cationic cycloadditions with various tethered partners. Shown below is a cascade reaction in which successive cation trapping generates a pentacyclic core in one step with complete diastereoselectivity.
Modern variants:
Enolate trapping with various electrophiles is decidedly less common. In one study, the Nazarov cyclization is paired with a Michael reaction using an iridium catalyst to initiate nucleophilic conjugate addition of the enolate to β-nitrostyrene. In this tandem reaction the iridium catalyst is required for both conversions: it acts as the Lewis acid in the Nazarov cyclization and in the next step the nitro group of nitrostyrene first coordinates to iridium in a ligand exchange with the carbonyl ester oxygen atom before the actual Michael addition takes place to the opposite face of the R-group.
Modern variants:
Enantioselective variants: The development of an enantioselective Nazarov cyclization is a desirable addition to the repertoire of Nazarov cyclization reactions. To that end, several variations have been developed utilizing chiral auxiliaries and chiral catalysts. Diastereoselective cyclizations are also known, in which extant stereocenters direct the cyclization. Almost all of the attempts are based on the idea of torquoselectivity; selecting one direction for the vinyl groups to "rotate" in turn sets the stereochemistry as shown below.
Modern variants:
Silicon-directed Nazarov cyclizations can exhibit induced diastereoselectivity in this way. In the example below, the silyl-group acts to direct the cyclization by preventing the distant alkene from rotating "towards" it via unfavorable steric interaction. In this way the silicon acts as a traceless auxiliary. (The starting material is not enantiopure but the retention of enantiomeric excess suggests that the auxiliary directs the cyclization.) Tius's allenyl substrates can exhibit axial to tetrahedral chirality transfer if enantiopure allenes are used. The example below generates a chiral diosphenpol in 64% yield and 95% enantiomeric excess.
Modern variants:
Tius has additionally developed a camphor-based auxiliary for achiral allenes that was employed in the first asymmetric synthesis of roseophilin. The key step employs an unusual mixture of hexafluoro-2-propanol and trifluoroethanol as solvent.
The first chiral Lewis acid promoted asymmetric Nazarov cyclization was reported by Varinder Aggarwal and utilized copper (II) bisoxazoline ligand complexes with up to 98% ee. The enantiomeric excess was unaffected by use of 50 mol% of the copper complex but the yield was significantly decreased.
Related Reactions:
Extensions of the Nazarov cyclization are generally also subsumed under the same name. For example, an α-β, γ-δ unsaturated ketone can undergo a similar cationic conrotatory cyclization that is typically referred to as an iso-Nazarov cyclization reaction. Other such extensions have been given similar names, including homo-Nazarov cyclizations and vinylogous Nazarov cyclizations.
Retro-Nazarov reaction: Because they overstabilize the pentadienyl cation, β-electron donating substituents often severely impede Nazarov cyclization. Building from this, several electrocyclic ring openings of β-alkoxy cyclopentanes have been reported. These are typically referred to as retro-Nazarov cyclization reactions.
Related Reactions:
Imino-Nazarov reaction: Nitrogen analogues of the Nazarov cyclization reaction (known as imino-Nazarov cyclization reactions) have few instances; there is one example of a generalized imino-Nazarov cyclization reported (shown below), and several iso-imino-Nazarov reactions in the literature. Even these tend to suffer from poor stereoselectivity, poor yields, or narrow scope. The difficulty stems from the relative over-stabilization of the pentadienyl cation by electron donation, impeding cyclization. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reactor Experiment for Neutrino Oscillation**
Reactor Experiment for Neutrino Oscillation:
The Reactor Experiment for Neutrino Oscillation (RENO) is a short baseline reactor neutrino oscillation experiment in South Korea. The experiment was designed to either measure or set a limit on the neutrino mixing matrix parameter θ13, a parameter responsible for oscillations of electron neutrinos into other neutrino flavours. RENO has two identical detectors, placed at distances of 294 m and 1383 m, that observe electron antineutrinos produced by six reactors at the Hanbit Nuclear Power Plant (the old name: the Yeonggwang Nuclear Power Plant) in Korea.
Reactor Experiment for Neutrino Oscillation:
Each detector consists of 16.5 t of gadolinium-doped liquid scintillator (LAB), surrounded by an additional 450 tons of buffer, veto, and shielding liquids. On 3 April 2012, with some corrections on 8 April, the RENO collaboration announced a 4.9σ observation of θ13 ≠ 0, with

sin²2θ13 = 0.113 ± 0.013 (stat.) ± 0.019 (syst.)

This measurement confirmed a similar result announced by the Daya Bay Experiment three weeks before and is consistent with earlier, but less significant, results by T2K, MINOS and Double Chooz.
Reactor Experiment for Neutrino Oscillation:
RENO released updated results in December 2013, confirming θ13 ≠ 0 with a significance of 6.3σ:

sin²2θ13 = 0.100 ± 0.010 (stat.) ± 0.015 (syst.)

In 2014, RENO announced the observation of an unexpectedly large number of neutrinos with an energy of 5 ± 1 MeV. This has since been confirmed by the Daya Bay and Double Chooz experiments, and the cause remains an outstanding puzzle.
Reactor Experiment for Neutrino Oscillation:
Expansion plans, referred to as RENO-50, will add a third medium-baseline detector at a distance of 47 km. This distance is better for observing neutrino oscillations, but requires a much larger detector due to the smaller neutrino flux. The location, near Dongshin University, has a 450 m high mountain (Mt. Guemseong), which will provide 900 m.w.e. shielding for the detector. If funded, this will contain 18,000 t of scintillator, surrounded by 15,000 photomultiplier tubes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sony Handheld Engine**
Sony Handheld Engine:
The Sony Handheld Engine (or Sony HHE) was an ARM-based application processor, or SoC, announced by Sony in July 2003. This mobile processor was specifically developed for the Sony CLIE series of PDAs and was cutting-edge for its time, with a heavy focus on power efficiency and numerous state-of-the-art features integrated into a single application processor IC.
The Sony Handheld Engine processor (model CXD2230GA) was first used on the Sony CLIÉ UX Series. This processor was also used on some subsequent CLIÉ models, specifically the TH55 and on the VZ90.
Sony Handheld Engine:
This processor was advertised as having additional mobile capabilities, such as dynamic speed scaling for power efficiency, hardware acceleration for MPEG video playback, a DSP for audio processing, a 2D graphics accelerator, as well as integrated camera, MemoryStick, and LCD interfaces. This highly integrated combination of capabilities was not found in other mobile CPUs until several years later. The processor offered DVFM (Dynamic Voltage and Frequency Management), combining dynamic clock speed and dynamic voltage scaling power-saving features, and this was advertised as the world's first commercial implementation of such a feature. The processor also featured a 123 MHz ARM926 core with 64 Mbit of integrated eDRAM, and was produced on a 180 nm lithography process by Sony Computer Entertainment in their Nagasaki chip fab.
Devices featuring the Sony HHE:
Sony CLIÉ UX Series, Sony CLIÉ PEG-VZ90, Sony CLIÉ PEG-TH55 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sagrada (board game)**
Sagrada (board game):
Sagrada is a dice-drafting board game published by Floodgate Games.
Gameplay:
The object of the game is for each player to construct a stained-glass window using dice on a private board having 20 spaces. The available double-sided window boards have a complexity rating ranging from 3 to 6, which also represents the number of favour tokens with which the player begins the game. A game lasts ten rounds.

Each turn, players choose from a pool of coloured dice available that turn in a snake draft. These are then placed on a player's private board representing a stained-glass window based on the restrictions specified on each slot of the board; for example, a slot may specify a number such as 2 or a colour such as red. Placement of the dice must also satisfy global placement rules. Players may also pay a fee to obtain rule-altering tool cards.

The first die must be placed on one of the edges of the window, and subsequently placed dice must be placed in a space adjacent to already-placed dice, either orthogonally or diagonally. Additionally, no die may be placed orthogonally adjacent to one having the same colour or the same number.

There are three global scoring cards used by all players, as well as a private scoring card for each player. A player scores points based on the three global scoring cards and their private scoring card. Points are deducted for each open space in the player's window, and awarded for each favour token possessed.

A game can take between 30 and 60 minutes, depending on the number of players.
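The placement rules above amount to a small validity check. The following Python sketch illustrates them; the 4×5 grid, the (colour, value) die representation and the restriction encoding are assumptions made for this example rather than the publisher's data model.

```python
# Illustrative sketch of the placement rules described above (assumed data
# model: board[r][c] is None or a (colour, value) pair; restrictions[r][c] is
# None, a colour string, or an integer printed on the slot).
ROWS, COLS = 4, 5  # a window board has 20 spaces

def valid_placement(board, restrictions, die, r, c):
    colour, value = die
    if board[r][c] is not None:                      # space must be empty
        return False
    rule = restrictions[r][c]                        # slot colour/number rule
    if rule is not None and rule != colour and rule != value:
        return False
    placed = [(i, j) for i in range(ROWS) for j in range(COLS)
              if board[i][j] is not None]
    if not placed:                                   # first die: edge only
        return r in (0, ROWS - 1) or c in (0, COLS - 1)
    # later dice must touch an existing die orthogonally or diagonally
    if not any(abs(r - i) <= 1 and abs(c - j) <= 1 for i, j in placed):
        return False
    # but may not sit orthogonally next to a die of the same colour or number
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        i, j = r + dr, c + dc
        if 0 <= i < ROWS and 0 <= j < COLS and board[i][j] is not None:
            if board[i][j][0] == colour or board[i][j][1] == value:
                return False
    return True

board = [[None] * COLS for _ in range(ROWS)]
restrictions = [[None] * COLS for _ in range(ROWS)]
assert valid_placement(board, restrictions, ("red", 3), 0, 2)      # edge: OK
assert not valid_placement(board, restrictions, ("red", 3), 1, 2)  # not an edge
```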
Expansion:
An expansion set called Passion was released in 2019. A digital version of the game published as an app by Dire Wolf Digital was released in 2020.
Reception:
The game was runner-up for the categories "family game" and "artwork and presentation" of the 2017 Golden Geek Awards. It is described as a fast-paced game that is easy to learn and quick to play, and suitable for games with people who typically do not play board games. A review by Owen Duffy for The Guardian stated that the game "benefits from some real variety", but that there is "almost no interaction between players". Some members of BoardGameGeek, particularly those with colour blindness, have reported problems distinguishing between the dice, as colour is the only feature differentiating them. The app by Dire Wolf Digital provides a colour-blind mode. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Magisk (software)**
Magisk (software):
Magisk is free and open-source software for the rooting of Android devices, developed by John Wu. Magisk supports devices running Android 6.0+.
Overview:
Magisk is free and open-source software that enables users to gain root access to their Android devices. With Magisk, users can install various modifications and customizations, making it a popular choice for Android enthusiasts. Additionally, Magisk comes with a built-in app, also called Magisk, which allows users to manage root permissions and install various modules. Magisk's systemless approach and modular design offer a safe and easy way to root a device and add new features and functionality.
History:
Magisk started out as a small project created by John Wu; it has now grown to more than 252 contributors. In version 21, support for Android 11 was added. In version 22, support for the Samsung Galaxy S21 was added. In version 23, support for Android 5 and earlier was removed. The original developer, John Wu, started working for the Android security team in 2021. In 2021, the MagiskHide feature of Magisk was discontinued by the original developer John Wu. Arnoud Wokke from Tweakers argued that there is a high chance this feature will be developed by a third-party developer as a Magisk module. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Division algebra**
Division algebra:
In the field of mathematics called abstract algebra, a division algebra is, roughly speaking, an algebra over a field in which division, except by zero, is always possible.
Definitions:
Formally, we start with a non-zero algebra D over a field. We call D a division algebra if for any element a in D and any non-zero element b in D there exists precisely one element x in D with a = bx and precisely one element y in D such that a = yb.
For associative algebras, the definition can be simplified as follows: a non-zero associative algebra over a field is a division algebra if and only if it has a multiplicative identity element 1 and every non-zero element a has a multiplicative inverse (i.e. an element x with ax = xa = 1).
Associative division algebras:
The best-known examples of associative division algebras are the finite-dimensional real ones (that is, algebras over the field R of real numbers, which are finite-dimensional as a vector space over the reals). The Frobenius theorem states that up to isomorphism there are three such algebras: the reals themselves (dimension 1), the field of complex numbers (dimension 2), and the quaternions (dimension 4).
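To make the simplified definition concrete, the quaternions illustrate it: every nonzero quaternion has a multiplicative inverse, so a = bx (and a = yb) always has a unique solution. The sketch below is only an illustration of that property in Python; the Quaternion class and the test values are made up for this example, not a library API.

```python
# Minimal quaternion arithmetic (illustrative only) showing the division-algebra
# property: every nonzero quaternion is invertible, so a = b*x is uniquely
# solvable with x = b^(-1) * a.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    w: float
    x: float
    y: float
    z: float

    def __mul__(self, q):  # Hamilton product (associative, non-commutative)
        return Quaternion(
            self.w * q.w - self.x * q.x - self.y * q.y - self.z * q.z,
            self.w * q.x + self.x * q.w + self.y * q.z - self.z * q.y,
            self.w * q.y - self.x * q.z + self.y * q.w + self.z * q.x,
            self.w * q.z + self.x * q.y - self.y * q.x + self.z * q.w,
        )

    def inverse(self):  # conjugate divided by the squared norm (nonzero q only)
        n2 = self.w ** 2 + self.x ** 2 + self.y ** 2 + self.z ** 2
        return Quaternion(self.w / n2, -self.x / n2, -self.y / n2, -self.z / n2)

a = Quaternion(1, 2, 3, 4)
b = Quaternion(0, 1, -1, 2)      # any nonzero quaternion works
x = b.inverse() * a              # the unique solution of a = b * x
bx = b * x
assert all(abs(p - q) < 1e-12 for p, q in
           zip((bx.w, bx.x, bx.y, bx.z), (a.w, a.x, a.y, a.z)))
```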
Associative division algebras:
Wedderburn's little theorem states that if D is a finite division algebra, then D is a finite field. Over an algebraically closed field K (for example the complex numbers C), there are no finite-dimensional associative division algebras, except K itself. Associative division algebras have no nonzero zero divisors. A finite-dimensional unital associative algebra (over any field) is a division algebra if and only if it has no nonzero zero divisors.
Associative division algebras:
Whenever A is an associative unital algebra over the field F and S is a simple module over A, then the endomorphism ring of S is a division algebra over F; every associative division algebra over F arises in this fashion.
Associative division algebras:
The center of an associative division algebra D over the field K is a field containing K. The dimension of such an algebra over its center, if finite, is a perfect square: it is equal to the square of the dimension of a maximal subfield of D over the center. Given a field F, the Brauer equivalence classes of simple (contains only trivial two-sided ideals) associative division algebras whose center is F and which are finite-dimensional over F can be turned into a group, the Brauer group of the field F.
Associative division algebras:
One way to construct finite-dimensional associative division algebras over arbitrary fields is given by the quaternion algebras (see also quaternions).
For infinite-dimensional associative division algebras, the most important cases are those where the space has some reasonable topology. See for example normed division algebras and Banach algebras.
Not necessarily associative division algebras:
If the division algebra is not assumed to be associative, usually some weaker condition (such as alternativity or power associativity) is imposed instead. See algebra over a field for a list of such conditions.
Over the reals there are (up to isomorphism) only two unitary commutative finite-dimensional division algebras: the reals themselves, and the complex numbers. These are of course both associative. For a non-associative example, consider the complex numbers with multiplication defined by taking the complex conjugate of the usual multiplication: $a * b = \overline{ab}$.
This is a commutative, non-associative division algebra of dimension 2 over the reals, and has no unit element. There are infinitely many other non-isomorphic commutative, non-associative, finite-dimensional real division algebras, but they all have dimension 2.
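To see concretely that this conjugate multiplication still permits division, note that for nonzero b the unique solution of b ∗ x = a is x = \overline{a}/b. The small numeric sketch below (the helper name star is purely illustrative) checks this and the failure of associativity:

```python
# Numeric sketch of the conjugate multiplication a*b := conj(ab) on the
# complex numbers (the helper name `star` is purely illustrative).
def star(a: complex, b: complex) -> complex:
    return (a * b).conjugate()

a, b, c = 3 + 4j, 1 - 2j, 2 + 1j

# Division: for nonzero b, x = conj(a)/b satisfies star(b, x) == a,
# since b * (conj(a)/b) = conj(a) and conj(conj(a)) = a.
x = a.conjugate() / b
assert abs(star(b, x) - a) < 1e-12

# Commutative but not associative:
assert abs(star(a, b) - star(b, a)) < 1e-12
assert abs(star(star(a, b), c) - star(a, star(b, c))) > 1e-9
```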
In fact, every finite-dimensional real commutative division algebra is either 1- or 2-dimensional. This is known as Hopf's theorem, and was proved in 1940. The proof uses methods from topology. Although a later proof was found using algebraic geometry, no direct algebraic proof is known. The fundamental theorem of algebra is a corollary of Hopf's theorem.
Dropping the requirement of commutativity, Hopf generalized his result: Any finite-dimensional real division algebra must have dimension a power of 2.
Not necessarily associative division algebras:
Later work showed that in fact, any finite-dimensional real division algebra must be of dimension 1, 2, 4, or 8. This was independently proved by Michel Kervaire and John Milnor in 1958, again using techniques of algebraic topology, in particular K-theory. Adolf Hurwitz had shown in 1898 that a bilinear identity expressing a product of two sums of n squares as a sum of n squares exists only for n = 1, 2, 4 and 8. (See Hurwitz's theorem.) The challenge of constructing a division algebra of three dimensions was tackled by several early mathematicians. Kenneth O. May surveyed these attempts in 1966.
Any finite-dimensional real division algebra must be:
- isomorphic to R or C if unitary and commutative (equivalently: associative and commutative),
- isomorphic to the quaternions if noncommutative but associative,
- isomorphic to the octonions if non-associative but alternative.
The following is known about the dimension of a finite-dimensional division algebra A over a field K:
- dim A = 1 if K is algebraically closed,
- dim A = 1, 2, 4 or 8 if K is real closed, and
- if K is neither algebraically nor real closed, then there are infinitely many dimensions in which there exist division algebras over K. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sensemaking (information science)**
Sensemaking (information science):
While sensemaking has been studied by other disciplines under other names for centuries, in information science and computer science the term "sensemaking" has primarily marked two distinct but related topics. Sensemaking was introduced as a methodology by Brenda Dervin in the 1980s and to human–computer interaction by PARC researchers Daniel Russell, Mark Stefik, Peter Pirolli and Stuart Card in 1993.
Sensemaking (information science):
In information science, the term is often written as "sense-making". In both cases, the concept has been used to bring together insights drawn from philosophy, sociology, and cognitive science (especially social psychology). Sensemaking research is therefore often presented as an interdisciplinary research programme.
As a process:
Sensemaking can be described as a process of developing sophisticated representation and organizing information to serve a task, for example, decision-making and problem-solving (Russell et al., 1993). Gary A. Klein and colleagues (Klein et al. 2006b) conceptualize sensemaking as a set of processes that is initiated when an individual or organization recognizes the inadequacy of their current understanding of events.
As a process:
Sensemaking is an active two-way process of fitting data into a frame (mental model) and fitting a frame around the data. Neither data nor frame comes first; data evoke frames and frames select and connect data. When there is no adequate fit, the data may be reconsidered or an existing frame may be revised. This description resembles the recognition-metacognition model (Cohen et al., 1996), which describes the metacognitive processes that are used by individuals to build, verify, and modify working models (or "stories") in situational awareness to account for an unrecognised situation. Such notions also echo the processes of assimilation and accommodation in Jean Piaget's theory of cognitive development (e.g., Piaget, 1972, 1977).
As methodology:
Brenda Dervin (Dervin, 1983, 1992, 1996) has investigated individual sensemaking, developing theories about the "cognitive gap" that individuals experience when attempting to make sense of observed data. Because much of this applied psychological research is grounded within the context of systems engineering and human factors, it aims to answer the need for concepts and performance to be measurable and for theories to be testable. Accordingly, sensemaking and situational awareness are viewed as working concepts that enable researchers to investigate and improve the interaction between people and information technology. This perspective emphasizes that humans play a significant role in adapting and responding to unexpected or unknown situations, as well as recognized situations. Dervin's work has largely focused on developing philosophical guidance for method, including methods of substantive theorizing and conducting research (Naumer, C. et al., 2008).
In human–computer interaction:
After a seminal paper on sensemaking in the human–computer interaction (HCI) field was published in 1993 (Russell et al., 1993), there was a great deal of activity around the understanding of how to design interactive systems for sensemaking, and workshops on sensemaking were held at prominent HCI conferences (e.g., Russell et al., 2009). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Olericulture**
Olericulture:
Olericulture is the science of vegetable growing, dealing with the culture of non-woody (herbaceous) plants for food.
Olericulture:
Olericulture is the production of plants for use of the edible parts. Vegetable crops can be classified into nine major categories:
- Potherbs and greens – spinach and collards
- Salad crops – lettuce, celery
- Cole crops – cabbage and cauliflower
- Root crops (tubers) – potatoes, beets, carrots, radishes
- Bulb crops – onions, leeks
- Legumes – beans, peas
- Cucurbits – melons, squash, cucumber
- Solanaceous crops – tomatoes, peppers, potatoes
- Sweet corn
Olericulture deals with the production, storage, processing and marketing of vegetables. It encompasses crop establishment, including cultivar selection, seedbed preparation and establishment of vegetable crops by seed and transplants. It also includes maintenance and care of vegetable crops, as well as commercial and non-traditional vegetable crop production including organic gardening and organic farming; sustainable agriculture and horticulture; hydroponics; and biotechnology. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wrap dress**
Wrap dress:
A wrap dress is a generic term for a dress with a front closure formed by wrapping one side across the other, fastened at the side or tied at the back. This forms a V-shaped neckline. A faux wrap dress resembles this design, except that it comes already fastened together with no opening in front and is instead slipped on over the head. A wrap top is a top cut and constructed in the same way as a wrap dress, but without a skirt. The design of the wrap-style closure in European garments was the result of the heavy influence of Orientalism, which was popular in the 19th century.
History:
The wrap-over neckline which closes to the right side originated in China and can be traced back to the Shang dynasty (1600 to 1046 BC) before spreading to other countries (such as Korea and Japan), while wrap-over necklines which close to the left were basic styles of garments widely used from West Asia across Central Asia and East Asia, as well as in Europe.
History:
East Asia – China: The traditional clothing of the Han Chinese, Hanfu, consists of traditionally loose, wrap-style garments; these include wrap-style robes, such as the ancient shenyi (which sews a top and a skirt together to form a dress), the zhiduo, the daopao, and the jiaoling pao (a one-piece dress), as well as wrap-style upper garments, such as the chang'ao and ru, and short-sleeved or sleeveless wrap-style upper garments such as the banbi and dahu. The Chinese wrap-over neckline typically closes on the right side like the alphabetic letter《y》and is referred to as jiaoling youren (Chinese: 交领右衽; lit. 'intersecting collar right lapel'), but can occasionally close on the left side under some circumstances in a style known as jiaoling zuoren (Chinese: 交领左衽; lit. 'intersecting collar left lapel').
History:
Japan: The jiaoling youren was adopted by the Japanese in 718 AD through the Yoro Code, which stipulated that all robes had to be closed from the left to the right in the typical Chinese way.
Wrap-style garments tied with a sash have very ancient origins in China and were later introduced to Japan, influencing the design of the kimono. The kimono originated from the Chinese jiaoling pao, which gained popularity in the 8th-century Japanese court.
History:
Orientalism, Europe, and America: European clothing with wrap-style closures was heavily influenced by the popularity of Orientalism in the 19th century. In the 20th century, Chinoiserie in fashion gained popularity and influenced many fashion designers of the time, including designers based in the United States. According to the Ladies' Home Journal of June 1913, volume 30, issue 6: "Interest in the political and civic activities of the new China, which is more or less world-wide at this time, led the designers of this page [p.26] and the succeeding one [p.27] to look to that country for inspiration for clothes that would be unique and new and yet fit in with present-day modes and the needs and environments of American women [...]" Chinoiserie continued to be popular in the 1920s and was a major influence on the dress features and fashion design of this period; simultaneously, Japonisme also had a profound impact, influencing new forms of clothing design, for example the use of the wrap top and obi-like sash derived from the Japanese kimono. During the Great Depression, house dresses called "Hooverettes", which employed a wrap design, were popular. Wrap dresses were designed by Elsa Schiaparelli in the 1930s and by Claire McCardell in the 1940s, whose original denim 'popover' design became the basis for a variety of wrap-around dresses. Fashion designer Charles James also designed a wrap dress. In the early 1970s, Orientalism re-emerged as the West officially expressed eagerness towards the Far East. Oriental fashion thus re-surfaced in American fashion wear, and American designers showed these Oriental influences in their designs. The wrap-around lounging wear, inspired by native Chinese dress, gained popularity among women during this period.

Diane von Fürstenberg's wrap dress: Although it is often claimed that Diane von Fürstenberg 'invented' what is known as the wrap dress in 1972/73, Richard Martin, a former curator of the Costume Institute at the Metropolitan Museum of Art, noted that the form of Fürstenberg's design had already been "deeply embedded into the American designer sportswear tradition," with her choice of elastic, synthetic fabrics distinguishing her work from earlier wrap dresses. Her design is actually a two-piece dress where a wrap top is sewn to a skirt, similar to the making of the Chinese shenyi.
History:
The Fürstenberg interpretation of the wrap dress, which was consistently knee-length, in a clinging jersey, with long sleeves, was so popular and so distinctive that the style has generally become associated with her. She has stated that her divorce inspired the design, and also suggested it was created in the spirit of enabling women to enjoy sexual freedom. The wrap dress that she designed in 1974 was a design re-interpretation of the kimono. Wrap dresses achieved their peak of popularity in the mid to late 1970s, and the design has been credited with becoming a symbol of women's liberation in the 1970s. They experienced renewed popularity beginning in the late 1990s, particularly after von Fürstenberg reintroduced her wrap dress in 1997; she, among others, has continued to design wrap dresses since then. The wrap dress's popularity, quick disrobing, and perceived feminist significance have remained current into the mid-2010s. In 2004 a book dedicated entirely to Fürstenberg's wrap dresses was published. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Penny Moore (virologist)**
Penny Moore (virologist):
Penelope Moore is a virologist and DST/NRF South African Research Chair of Virus-Host Dynamics at the University of the Witwatersrand in Johannesburg, South Africa and Senior Scientist at the National Institute for Communicable Diseases.
Education and work:
Moore received her Master of Science degree in Microbiology from the University of the Witwatersrand. In 2003, she completed her PhD in Virology at the University of London. She was one of the first scientists to bring the Omicron variant of COVID-19 to public attention. She remarked of the pace of the preliminary research that "We're flying at warp speed." Her current work focuses on HIV neutralizing antibodies and their interactions with HIV. These antibodies would form the basis for an HIV vaccine.
Recognition:
In 2009, Moore received a Sydney Brenner Fellowship from the Academy of Science of South Africa (ASSAf) and was awarded a Friedel Sellschop Award by the University of the Witwatersrand. In 2015, while at the Centre for HIV and STI at the NICD and the Wits School of Pathology, Moore was awarded the Chair in Virus-Host Dynamics for Public Health at Wits. In 2018 Moore was awarded a Silver Medal by the South African Medical Research Council for "important scientific contributions made within 10 years of having been awarded [her] PhD." Moore is a founding member of the South African Young Academy of Science, a full Member of the American Society for Virology and a Member of the ASSAf. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pseudoexfoliation syndrome**
Pseudoexfoliation syndrome:
Pseudoexfoliation syndrome, often abbreviated as PEX and sometimes as PES or PXS, is an aging-related systemic disease manifesting itself primarily in the eyes which is characterized by the accumulation of microscopic granular amyloid-like protein fibers. Its cause is unknown, although there is speculation that there may be a genetic basis. It is more prevalent in women than men, and in persons past the age of seventy. Its prevalence in different human populations varies; for example, it is prevalent in Scandinavia. The buildup of protein clumps can block normal drainage of the eye fluid called the aqueous humor and can cause, in turn, a buildup of pressure leading to glaucoma and loss of vision (pseudoexfoliation glaucoma, exfoliation glaucoma). As worldwide populations become older because of shifts in demography, PEX may become a matter of greater concern.
Signs and symptoms:
Patients may have no specific symptoms. In some cases, patients may complain of lessened visual acuity or changes in their perceived visual field, and such changes may be secondary to or different from symptoms normally associated with cataracts or glaucoma. PEX is characterized by tiny microscopic white or grey granular flakes which are clumps of proteins within the eye which look somewhat like dandruff when seen through a microscope and which are released by cells. The abnormal flakes, sometimes compared to amyloid-like material, are visible during an examination of the lens of an eye by an ophthalmologist or optometrist, which is the usual diagnosis. The white fluffy material is seen in many tissues both ocular and extraocular, such as in the anterior chamber structures, trabecular meshwork, central disc, zonular fibres, anterior hyaloid membrane, pupillary and anterior iris, trabecula, and occasionally the cornea. The flakes are widespread. One report suggested that the granular flakes were from abnormalities of the basement membrane in epithelial cells, and that they were distributed widely throughout the body and not just within structures of the eye. There is some research suggesting that the material may be produced in the iris pigment epithelium, ciliary epithelium, or the peripheral anterior lens epithelium. A similar report suggests that the proteins come from the lens, iris, and other parts of the eye. A report in 2010 found indications of an abnormal ocular surface in PEX patients, discovered by an eye staining method known as rose bengal.
Signs and symptoms:
PEX can become problematic when the flakes become enmeshed in a "spongy area" known as the trabecular meshwork and block its normal functioning, and may interact with degenerative changes in the Schlemm's canal and the juxtacanalicular area. The blockage leads to greater-than-normal elevated intraocular pressure which, in turn, can damage the optic nerve. The eye produces a clear fluid called the aqueous humor which subsequently drains such that there is a constant level of safe pressure within the eye, but glaucoma can result if this normal outflow of fluid is blocked. Glaucoma is an umbrella term indicating ailments which damage the neural cable from the eye to the brain called the optic nerve, and which can lead to a loss of vision. In most cases of glaucoma, typically called primary open-angle glaucoma, the outflow does not happen normally but doctors can not see what is causing the blockage; with PEX, however, the flakes are believed to be a cause of the blockage. PEX flakes by themselves do not directly cause glaucoma, but can cause glaucoma indirectly by blocking the outflow of aqueous humor, which leads to higher intraocular pressure, and this can cause glaucoma. PEX has been known to cause a weakening of structures within the eye which help hold the eye's lens in place, called lens zonules.
Causes:
The cause of pseudoexfoliation glaucoma is generally unknown. PEX is generally believed to be a systemic disorder, possibly of the basement membrane of the eye. Researchers have noticed deposits of PEX material in various parts of the body, including in the skin, heart, lungs, liver, kidneys, and elsewhere. Nevertheless, what is puzzling is that PEX tends to happen in only one eye first, which scientists call unilaterality, and in some cases, gradually affects the other eye, which is termed bilaterality. According to this reasoning, if PEX were a systemic disorder, then both eyes should be affected at the same time, but they are not. There are contrasting reports about the extent and speed with which PEX moves from one eye to both eyes. According to one report, PEX develops in the second eye in 40% of cases. A contrasting report was that PEX can be found in both eyes in almost all situations if an electron microscope is used to examine the second eye, or if a biopsy of the conjunctiva was done, but that the extent of PEX in the second eye was much less than in the first one. A different report suggested that two thirds of PEX patients had flakes in only one eye. In one long-term study, patients with PEX in only one eye were studied, and it was found that over time, 13% progressed to having both eyes affected by PEX. Scientists believe that elevated levels of plasma homocysteine are a risk factor for cardiovascular disease, and two studies have found higher levels of plasma homocysteine in PEX patients, or elevated homocysteine concentrations in tear fluids produced by the eye. There is speculation that PEX may be caused by oxidative damage and the presence of free radicals, although the exact nature of how this might happen is still under study. Studies of PEX patients have found a decrease in concentrations of ascorbic acid, an increase in concentrations of malondialdehyde, and an increase in concentrations of 8-iso-prostaglandin F2a. There is speculation that genetics may play a role in PEX. A predisposition to develop PEX later in life may be an inherited characteristic, according to one account. One report suggested the genetic component was "strong". One study performed in Iceland and Sweden has associated PEX with polymorphisms in the gene LOXL1. A report suggested that the LOXL1 gene is a member of a family of enzymes that play a role in the cross-linking of collagen and elastin inside cells. LOXL1 was responsible for "all the heritability" of PEX, according to one source. Two distinct mutations in which a single nucleotide was changed, each called a single nucleotide polymorphism or SNP, were discovered in Scandinavian populations and confirmed in other populations, and may be involved with the onset of PEX.
Causes:
The gene is called LOXL1 ... Because pseudoexfoliation syndrome is associated with abnormalities of the extracellular matrix and the basement membrane, this gene could reasonably play a role in the pathophysiology of the condition.
Researchers are investigating whether factors such as exposure to ultraviolet light, living in northern latitudes, or altitude influence the onset of PEX. One report suggested that climate was not a factor related to PEX. Another report suggested a possible link to sunlight as well as a possible autoimmune response, or possibly a virus.
Diagnosis:
PEX is usually diagnosed by an eye doctor who examines the eye using a microscope. The method, termed slit lamp examination, is reported to have an "85% sensitivity rate and a 100% specificity rate." Since the symptom of increased pressure within the eye is generally painless until the condition becomes rather advanced, it is possible for people affected by glaucoma to be in danger yet not be aware of it. As a result, it is recommended that persons have regular eye examinations to have their intraocular pressure measured, so that treatments can be prescribed before there is any serious damage to the optic nerve and subsequent loss of vision.
Treatment:
While PEX itself is untreatable as of 2011, it is possible for doctors to minimize the damage to vision and to the optic nerves by the same medical techniques used to prevent glaucoma.
Treatment:
Eyedrops. This is usually the first treatment method. Eyedrops can help reduce intraocular pressure. The medications within the eyedrops can include beta blockers (such as levobunolol or timolol), which slow the production of the aqueous humor, and other medications that increase its outflow, such as prostaglandin analogues (e.g. latanoprost); these medicines can be used in various combinations. In most cases of glaucoma, eyedrops alone will suffice to control the problem.
Treatment:
Laser surgery. A further treatment is a type of laser therapy known as trabeculoplasty, in which a high-energy laser beam is pointed at the trabecular meshwork to cause it to "remodel and open" and improve the outflow of the aqueous humor. It can be done as an outpatient procedure and takes less than twenty minutes. One report suggests this procedure is usually effective.

Eye surgery. Surgery is the treatment method of last resort if the other methods have not worked, and it is usually effective at preventing glaucoma. A surgeon cuts an opening in the white portion of the eye known as the sclera and removes a tiny area of the trabecular meshwork, which enables the aqueous humor to discharge. This lowers the internal pressure within the eye and lessens the chance of future damage to the optic nerve. Eye surgery on PEX patients can be subject to medical complications if the fibers which hold the lens have become weakened because of a buildup of the flakes; if the lens-holding fibers have weakened, the lens may become loose and complications from surgery may result. In such cases, it is recommended that surgeons act quickly to repair the phacodonesis before the lenses have dropped. Cases with pseudophacodonesis and dislocated IOL have been increasing in number, according to one report. In cataract surgery, complications resulting from PEX include capsular rupture and vitreous loss.
Treatment:
Drug therapy. There is speculation that if genetics plays a role in PEX, and if the specific genes involved can be identified, drugs might be developed to counteract these mutations or their effects, but such drugs had not been developed as of 2011. Patients should continue to have regular eye examinations so that physicians can monitor pressure levels and check whether medicines are working.
Epidemiology:
Scientists are studying different populations and relationships to try to learn more about the disease. They have found associations with different groups but it is not yet clear what the underlying factors are and how they affect different peoples around the world.
Epidemiology:
Glaucoma patients. While PEX and glaucoma are believed to be related, there are cases of persons with PEX without glaucoma, and persons with glaucoma without PEX. Generally, a person with PEX is considered to be at risk of developing glaucoma, and vice versa. One study suggested that PEX was present in 12% of glaucoma patients. Another found that PEX was present in 6% of an "open-angle glaucoma" group. Pseudoexfoliation syndrome is considered to be the most common identifiable cause of glaucoma. If PEX is diagnosed without glaucoma, there is a high risk of the patient subsequently developing glaucoma.
Epidemiology:
Country and region. Prevalence of PEX varies by geography. In Europe, differing levels of PEX were found: 5% in England, 6% in Norway, 4% in Germany, 1% in Greece, and 6% in France. One contrary report suggested that levels of PEX were higher among Greek people. One study of a county in Minnesota found that the prevalence of PEX was 25.9 cases per 100,000 people. It is reportedly high in northern European countries such as Norway, Sweden and Finland, as well as among the Sami people of northern Europe, and high among Arabic populations, but relatively rare among African Americans and Eskimos. In southern Africa, prevalence was found to be 19% of patients in a glaucoma clinic attending to persons of the Bantu tribes.
Epidemiology:
Race. Prevalence varies considerably according to race.
Gender. PEX affects women more than men. One report was that women were three times more likely than men to develop PEX.
Epidemiology:
Age. Older persons are more likely to develop PEX, and persons younger than 50 are highly unlikely to have it. A study in Norway found that the prevalence of PEX among persons aged 50–59 was 0.4%, while it was 7.9% for persons aged 80–89 years. If a person is going to develop PEX, the average age at which this will happen is between 69 and 75 years, according to the Norwegian study. A second, corroborating report suggested that it happens primarily to people 70 and older. While older people are more likely to develop PEX, it is not seen as a "normal" part of aging.
Epidemiology:
Other diseases. Sometimes PEX is associated with the development of medical problems other than merely glaucoma. There are conflicting reports about whether PEX is associated with problems of the heart or brain; one study suggested no correlations while other studies found statistical links with Alzheimer's disease, senile dementia, cerebral atrophy, chronic cerebral ischemia, stroke, transient ischemic attacks, heart disease, and hearing loss.
History:
Pseudoexfoliation syndrome (PEX) was first described by a Finnish ophthalmologist, John G. Lindberg, in 1917. He built his own slit lamp to study the condition and reported "grey flakes on the lens capsule", as well as glaucoma in 50% of the eyes and an "increasing prevalence of the condition with age." Several decades later, an ocular pathologist named Georgiana Dvorak-Theobald suggested the term pseudoexfoliation to distinguish it from a similar ailment, true exfoliation syndrome, which sometimes affected glassblowers and was described by Anton Elschnig in 1922. The latter ailment is caused by heat or "infrared-related changes in the anterior lens capsule" and is characterized by "lamellar delamination of the lens capsule." Sometimes the two terms "pseudoexfoliation" and "true exfoliation" are used interchangeably, but the more precise usage is to treat each case separately.
Research:
Scientists and doctors are actively exploring how PEX happens, its causes, and how it might be prevented or mitigated. Research activity to explore what causes glaucoma has been characterized as "intense". There has been research into the genetic basis of PEX. One researcher speculated about a possible "two-hit hypothesis" in which a single mutation in the LOXL1 gene puts people at risk for PEX, but that a second still-to-be-found mutation has some effect on the proteins, possibly affecting bonds between chemicals, such that the proteins are more likely to clump together and disrupt the outflow of aqueous humor.
Alternative names:
Exfoliation glaucoma (XFG); Pseudoexfoliation glaucoma; Pseudoexfoliation of the lens; Exfoliation syndrome (XFS)
**RGBE image format**
RGBE image format:
RGBE or Radiance HDR is an image format invented by Gregory Ward Larson for the Radiance rendering system. It stores pixels as one byte each for RGB (red, green, and blue) values with a one byte shared exponent. Thus it stores four bytes per pixel.
Description:
RGBE allows pixels to have the dynamic range and precision of floating-point values in a relatively compact data structure (32 bits per pixel). When images are generated from light simulations, the range of per-pixel color intensity values is often much greater than will fit nicely into the standard 0..255 (8-bit) range of standard 24-bit image formats. As a result, either bright pixels are clipped to 255, or dim pixels lose numeric precision.
Description:
By using a shared exponent, the RGBE format gains some of the advantages of floating-point values whilst using less than the 32 or 16 bits per color component that would be needed for single precision or half-precision data in the IEEE floating-point format, and with a higher dynamic range than half-precision. An exponent value of 128 maps integer colors [0..255] into [0..1) floating-point space.
Description:
A second variant of the format uses the XYZ color model with a shared exponent. The MIME type and file extension are identical, so applications reading this file format need to interpret the embedded information on the color model.
Greg Ward provides code to handle RGBE files in his Radiance renderer.
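To make the shared-exponent scheme concrete, the following is a minimal sketch (in Python, not Ward's reference code) of per-pixel encoding and decoding consistent with the description above; the function names are illustrative.

```python
import math

def float_to_rgbe(r, g, b):
    """Encode a linear RGB triple as 4 bytes: three mantissas plus a shared exponent."""
    v = max(r, g, b)
    if v < 1e-32:                       # black pixel: all four bytes zero
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(v)  # v = mantissa * 2**exponent, mantissa in [0.5, 1)
    scale = mantissa * 256.0 / v        # i.e. 256 / 2**exponent
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_to_float(rm, gm, bm, e):
    """Decode 4 RGBE bytes back to linear floating-point RGB."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))  # shared scale 2**(exponent - 8) for all three channels
    return (rm * f, gm * f, bm * f)

# Round trip: the brightest channel keeps roughly 8 bits of relative precision;
# channels much dimmer than the maximum lose more, as the article notes.
print(rgbe_to_float(*float_to_rgbe(1200.0, 30.0, 0.25)))
```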
Similar formats:
OpenGL mandates support for an analogous RGB9_E5 color (not render) format, where three channels have 9 bits of mantissa each and share 5 bits of exponent. JPEG XT Part 2 (Dolby JPEG-HDR) and Part 7 Profile A are based on the RGBE format.
Similar formats:
RGBM is a format with the exponent replaced by a shared multiplier, while RGBD stores a divider instead. These formats lack the dynamic range of RGBE and LogLuv, but are more amenable to a naive approach of linear interpolation on each component. Like RGBE, they can be packaged in any format that accepts a four-channel color model, including ordinary formats like PNG (appropriating the RGBA structure) for 3D textures. A wider variety of color formats take the more conventional route of storing separate floating-point numbers. These include the Xbox '7e3' format (three 10-bit floating-point color channels, each with 7 bits of mantissa and 3 bits of exponent) and the OpenGL R11F_G11F_B10F format.
**Dropped line**
Dropped line:
In poetry, a dropped line is a line which is broken into two lines, but where the second part is indented to the horizontal position it would have had as an unbroken line. For example, in the poem "The Other Side of the River" by Charles Wright, the first and second lines form a dropped line, as do the fourth and fifth lines:

It's linkage I'm talking about,
        and harmonies and structures,
And all the various things that lock our wrists to the past.
Dropped line:
Something infinite behind everything appears, and then disappears.
Use in modern poetry:
Dropped lines have a variety of functions and uses. In Robert Denham's words, a dropped line is "a spatial as well as temporal feature, affecting both the eye and ear." It may be used to determine the visual appearance of the line as a whole. Wright, for example, uses dropped lines to reference landscape paintings, especially by Paul Cézanne and Giorgio Morandi, explaining why his use of dropped line "can be seen as imitating the sense of horizontal rhythm prevalent in paintings by Cézanne." Modern poets who are known for using dropped lines include Wright, Carl Phillips, and Edward Hirsch.
Use in dramatic texts:
Lines which are broken between two voices, as in the first two lines in the following scene in Hamlet, may also be called dropped lines. In this case, the line is broken to reflect a change in character while preserving a steady iambic pentameter across the entire line. In classical tragedy this technique of dividing a single verse line between two or more characters is called antilabe and functions "as a means of heightening dramatic tension." It was "frequently utilized by Renaissance dramatists" such as Shakespeare:

HAMLET: Did you not speak to it?
MARCELLUS: My lord, I did;
But answer made it none: yet once methought
It lifted up its head and did address
Itself to motion, like as it would speak;
**Holarchy**
Holarchy:
A holon (Greek: ὅλον, from ὅλος, holos, 'whole' and -ον, -on, 'part') is something that is simultaneously a whole in and of itself, as well as a part of a larger whole. In other words, holons can be understood as the constituent part–wholes of a hierarchy. Holons are sometimes discussed in the context of self-organizing holarchic open (SOHO) systems. The word holon (Greek: ὅλον) is a combination of the Greek holos (ὅλος) meaning 'whole', with the suffix -on which denotes a particle or part (as in proton and neutron). According to Arthur Koestler, holons are self-reliant units that possess a degree of independence and can handle contingencies without asking higher authorities for instructions (i.e., they have a degree of autonomy). These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality for the larger whole.
Holarchy:
The holon represents a way to overcome the dichotomy between parts and wholes, as well as a way to account for both the self-assertive and the integrative tendencies of organisms. The term was coined by Arthur Koestler in The Ghost in the Machine (1967). In this way, a holon is a subsystem within a larger system: it is simultaneously an evolving structure while also a part of a greater system composed of other holons. Prior to introducing the term holon itself, Koestler articulated the concept in The Act of Creation (1964), in which he refers to the relationship between the searches for subjective and objective knowledge: Einstein's space is no closer to reality than Van Gogh's sky. The glory of science is not in a truth more absolute than the truth of Bach or Tolstoy, but in the act of creation itself. The scientist's discoveries impose his own order on chaos, as the composer or painter imposes his; an order that always refers to limited aspects of reality, and is based on the observer's frame of reference, which differs from period to period as a Rembrandt nude differs from a nude by Manet. Koestler would finally propose the term holon in The Ghost in the Machine (1967), using it to describe natural organisms as composed of semi-autonomous sub-wholes (or, parts) that are linked in a form of hierarchy, a holarchy, to form a whole. The title of the book itself points to the notion that the entire 'machine' of life and of the universe itself is ever-evolving toward more and more complex states, as if a ghost were operating the machine. The first observation was influenced by a story told to him by Herbert A. Simon—the 'parable of the two watchmakers'—in which Simon concludes that complex systems evolve from simple systems much more rapidly when there are stable intermediate forms present in the evolutionary process compared to when they are not present.
Holarchy:
The second observation was made by Koestler himself in his analysis of hierarchies and stable intermediate forms in non-living matter (atomic and molecular structure), living organisms, and social organizations.
**Viability theory**
Viability theory:
Viability theory is an area of mathematics that studies the evolution of dynamical systems under constraints on the system state. It was developed to formalize problems arising in the study of various natural and social phenomena, and has close ties to the theories of optimal control and set-valued analysis.
Motivation:
Many systems, organizations, and networks arising in biology and the social sciences do not evolve in a deterministic way, nor even in a stochastic way. Rather they evolve with a Darwinian flavor, driven by random fluctuations yet constrained to remain "viable" by their environment. Viability theory started in 1976 by translating mathematically the title of the book Chance and Necessity by Jacques Monod: the differential inclusion x′(t) ∈ F(x(t)) for chance, and the constraint x(t) ∈ K for necessity. The differential inclusion is a type of "evolutionary engine" (called an evolutionary system) associating with any initial state x a subset of evolutions starting at x. The system is said to be deterministic if this set contains one and only one evolution, and contingent otherwise. Necessity is the requirement that at each instant the evolution remains viable in the environment K described by viability constraints, a word encompassing polysemous concepts such as stability, confinement, homeostasis, and adaptation, expressing the idea that some variables must obey constraints (physical, social, biological, economic, etc.) that can never be violated. So viability theory starts as the confrontation of evolutionary systems governing evolutions with the viability constraints that such evolutions must obey. The problems it addresses share common features. Some concern systems designed by human brains, in the sense that agents, actors, and decision-makers act on the evolutionary system, as in engineering (control theory and differential games). Others concern systems observed by human brains, which are more difficult to understand since there is no consensus on the actors piloting the variables, who may be myopic, lazy yet exploratory, conservative yet opportunistic; this is the case in economics and, to a lesser degree, finance, where the viability constraints include scarcity constraints among many others, as well as in connectionist networks and cooperative games, in population and social dynamics, in neuroscience, and in some biological issues. Viability theory thus designs and develops mathematical and algorithmic methods for investigating the "adaptation to viability constraints" of evolutions governed by complex systems under uncertainty that are found in many domains involving living beings, from biological evolution to economics, from environmental sciences to financial markets, from control theory and robotics to cognitive sciences. This required forging a differential calculus of set-valued maps (set-valued analysis), differential inclusions, and differential calculus in metric spaces (mutational analysis).
Viability kernel:
The basic problem of viability theory is to find the "viability kernel" of an environment: the subset of initial states in the environment from which there exists at least one evolution that is "viable" in the environment, in the sense that at each time the state of the evolution remains confined to the environment. The second question is then to provide the regulation map selecting such viable evolutions starting from the viability kernel. The viability kernel may be equal to the environment itself, in which case the environment is called viable under the evolutionary system, or it may be the empty set, in which case the environment is called a repellor, because all evolutions eventually violate the constraints.
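In the notation used above (an evolutionary system governed by the differential inclusion x′(t) ∈ F(x(t)) and an environment K), one common way to write this definition is the following formula; this is a standard statement of the concept rather than a quotation from a specific source.

```latex
\mathrm{Viab}_F(K) \;=\; \bigl\{\, x_0 \in K \;:\; \exists\, x(\cdot),\; x(0)=x_0,\;
  x'(t) \in F(x(t)) \text{ for almost every } t \ge 0,\;
  x(t) \in K \text{ for all } t \ge 0 \,\bigr\}
```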
Viability kernel:
The viability kernel assumes that some kind of "decision maker" controls or regulates evolutions of the system. If not, the next problem looks at the "tychastic kernel" (from tyche, meaning chance in Greek) or "invariance kernel": the subset of initial states in the environment such that all evolutions are "viable" in the environment. This is an alternative to stochastic differential equations, encapsulating the concept of "insurance" against uncertainty and providing a way of eradicating it instead of evaluating it.
**Intel RealSense**
Intel RealSense:
Intel RealSense Technology, formerly known as Intel Perceptual Computing, was a product range of depth and tracking technologies designed to give machines and devices depth perception capabilities. The technologies, owned by Intel, were used in autonomous drones, robots, AR/VR, and smart home devices, amongst many other broad market products.
The RealSense products were made of Vision Processors, Depth and Tracking Modules, and Depth Cameras, supported by an open source, cross-platform SDK in an attempt to simplify supporting cameras for third party software developers, system integrators, ODMs and OEMs.
History:
Intel began producing hardware and software that utilized depth tracking, gestures, facial recognition, eye tracking, and other technologies under the branding Perceptual Computing in 2013. According to Intel, much of their research into the technologies was focused around "sensory inputs that make [computers] more human like". They initially hoped to begin including 3D cameras that could support their Perceptual Computing, as opposed to traditional 2D cameras, by late 2014. In 2013, Intel ran a competition among seven teams to create software highlighting the capabilities of its Perceptual Computing technology entitled "Intel Ultimate Coder Challenge: Going Perceptual". In 2014, Intel rebranded their Perceptual Computing line of technology as Intel RealSense. Intel RealSense Group supported multiple depth and tracking technologies including Coded Light Depth, Stereo Depth and Positional Tracking. To address the lack of applications built on the RealSense platform and to promote the platform among software developers, in 2014 Intel organized the "Intel RealSense App Challenge". The winners were awarded large sums of money.
History:
Current product range. In August 2021, Intel announced it was "winding down" its RealSense computer vision division to focus on its core businesses. Specifically, the end of life (EOL) of the LiDAR L515, Facial Authentication (F455) and Tracking (T265) product lines was announced. The majority of the stereo product line was still available, and new products were released in the meantime.
Product series:
Intel RealSense D400 Product Family In January 2018, the new Intel RealSense D400 Product Family was launched, comprising the Intel RealSense Vision Processor D4, the Intel RealSense Depth Module D400 Series, and two ready-to-use depth cameras: the Intel RealSense Depth Cameras D435 and D415.
Product series:
Intel RealSense Vision Processor D4 Series The Intel RealSense Vision Processor D4 series are vision processors based on 28 nanometer (nm) process technology to compute real-time stereo depth data. They utilise a depth algorithm that enables more accurate and longer range depth perception than previously available. There are two products in this family: RealSense Vision processor D4 and RealSense Vision Processor D4m.
Product series:
Other products The Intel RealSense Depth Module D400 Series is designed for easy integration to bring 3D into devices and machines. Intel also released the D415 and D435 in 2018. Both cameras feature the RealSense Vision processor D4 and camera sensors. They are supported by the cross-platform and open source Intel RealSense SDK 2.0. The Intel D415 is designed for more precise measurements.
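As an illustration of how these cameras are typically used, the following is a minimal sketch using the SDK's Python bindings (pyrealsense2), assuming a D400-series camera is connected; the stream resolution and frame rate are illustrative values rather than product requirements.

```python
import pyrealsense2 as rs

# Configure a depth stream from an attached D400-series camera (values are illustrative).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()   # block until a coherent frame set arrives
    depth = frames.get_depth_frame()
    # Distance (in meters) measured at the center pixel of the depth image.
    print("Distance at image center:", depth.get_distance(320, 240))
finally:
    pipeline.stop()
```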
Product series:
Intel RealSense Depth Camera D435 The Intel RealSense Depth Camera D435 is ideal for capturing stereo depth in a variety of applications that help perceive the world in 3D.
Previous Generations:
Previous generations of Intel RealSense depth cameras (F200, R200 and SR300) were implemented in multiple laptop and tablet computers by Asus, HP, Dell, Lenovo, and Acer. Additionally, Razer and Creative offered consumer-ready standalone webcams with the Intel RealSense camera built into the design: the Razer Stargazer and the Creative BlasterX Senz3D.
Intel RealSense 3D Camera (Front F200) This is a stand-alone camera that can be attached to a desktop or laptop computer. It is intended to be used for natural gesture-based interaction, face recognition, immersive video conferencing and collaboration, gaming and learning, and 3D scanning. There was also a version of this camera designed to be embedded into laptop computers.
Previous Generations:
Intel RealSense Snapshot Snapshot is a camera intended to be built into tablet computers and possibly smartphones. Its intended uses include taking photographs and performing after the fact refocusing, distance measurements, and applying motion photo filters. The refocus feature differs from a plenoptic camera in that RealSense Snapshot takes pictures with large depth of field so that initially the whole picture is in focus and then in software it selectively blurs parts of the image depending on their distance. The Dell Venue 8 7000 Series Android tablet is equipped with this camera.
Previous Generations:
Intel RealSense 3D Camera (Rear R200) Rear-mounted camera for Microsoft Surface or a similar tablet, like the HP Spectre X2. This camera is intended for augmented reality applications, content creation, and object scanning. Its depth accuracy is on the order of millimeters and its range is up to 6.0 meters. The R200 is a stereo camera and is able to obtain accurate depth outdoors as well as indoors.
Reception:
In an early preview article in 2015, PC World's Mark Hachman concluded that RealSense is an enabling technology that will be largely defined by the software that will take advantage of its features. He noted that as of the time the article was written, the technology was new and there was no such software.
Product Technical Specifications:
Specifications are published for the following: the Intel RealSense Depth Camera D415, D435 and D455; the Intel RealSense Vision Processor D4 Series (not available separately, as these are bare PCB vision processor boards used only as the basis for the RealSense Depth Camera series); and the Intel Stereo Depth Module SKUs (not available separately, as these are bare PCB depth sensor modules used only as the basis for the RealSense Depth Camera series).
**Slow vertex response**
Slow vertex response:
The slow vertex response (also called SVR or V potential) is an electrochemical signal associated with electrophysiological recordings of the auditory system, specifically auditory evoked potentials (AEPs). The SVR of a normal human being recorded with surface electrodes can be found at the end of a recorded AEP waveform, between latencies of 50–500 ms. Detection of the SVR is used to estimate thresholds for hearing pathways.
**CD8**
CD8:
CD8 (cluster of differentiation 8) is a transmembrane glycoprotein that serves as a co-receptor for the T-cell receptor (TCR). Along with the TCR, the CD8 co-receptor plays a role in T cell signaling and aiding with cytotoxic T cell-antigen interactions.
Like the TCR, CD8 binds to a major histocompatibility complex (MHC) molecule, but is specific for the MHC class I protein. There are two isoforms of the protein, alpha and beta, each encoded by a different gene. In humans, both genes are located on chromosome 2 in position 2p12.
Tissue distribution:
The CD8 co-receptor is predominantly expressed on the surface of cytotoxic T cells, but can also be found on natural killer cells, cortical thymocytes, and dendritic cells. The CD8 molecule is a marker for cytotoxic T cell population. It is expressed in T cell lymphoblastic lymphoma and hypo-pigmented mycosis fungoides.
Structure:
To function, CD8 forms a dimer, consisting of a pair of CD8 chains. The most common form of CD8 is composed of a CD8-α and CD8-β chain, both members of the immunoglobulin superfamily with an immunoglobulin variable (IgV)-like extracellular domain connected to the membrane by a thin stalk, and an intracellular tail. Less-common homodimers of the CD8-α chain are also expressed on some cells. The molecular weight of each CD8 chain is about 34 kDa. The structure of the CD8 molecule was determined by Leahy, D.J., Axel, R., and Hendrickson, W.A. by X-ray diffraction at 2.6 Å resolution. The structure has an immunoglobulin-like beta-sandwich fold and 114 amino acid residues. About 2% of the protein is wound into α-helices and 46% into β-sheets, with the remaining 52% of the molecule in the loop portions.
Function:
The extracellular IgV-like domain of CD8-α interacts with the α3 portion of the class I MHC molecule. This affinity keeps the T cell receptor of the cytotoxic T cell and the target cell bound closely together during antigen-specific activation. Cytotoxic T cells with CD8 surface protein are called CD8+ T cells. The main recognition site is a flexible loop at the α3 domain of an MHC molecule, which was discovered by mutational analyses; this flexible α3 domain is located between residues 223 and 229. In addition to aiding cytotoxic T cell antigen interactions, the CD8 co-receptor also plays a role in T cell signaling. The cytoplasmic tails of the CD8 co-receptor interact with Lck (lymphocyte-specific protein tyrosine kinase). Once the T cell receptor binds its specific antigen, Lck phosphorylates the cytoplasmic CD3 and ζ-chains of the TCR complex, which initiates a cascade of phosphorylation eventually leading to activation of transcription factors like NFAT, NF-κB, and AP-1 that affect the expression of certain genes.
**Oxicam**
Oxicam:
Oxicam is a class of non-steroidal anti-inflammatory drugs (NSAIDs), meaning that they have anti-inflammatory, analgesic, and antipyretic therapeutic effects. Oxicams bind closely to plasma proteins. Most oxicams are unselective inhibitors of the cyclooxygenase (COX) enzymes. The exception is meloxicam, with a slight (10:1) preference for COX-2 which, however, is only clinically relevant at low doses. The most popular drug of the oxicam class is piroxicam. Other examples include: ampiroxicam, droxicam, pivoxicam, tenoxicam, lornoxicam, and meloxicam.
Oxicam:
Isoxicam has been suspended as a result of fatal skin reactions.
Chemistry:
The physico-chemical characteristics of these molecules vary greatly depending upon the environment. In contrast to most other NSAIDs, oxicams are not carboxylic acids. They are tautomeric, and can exist as a number of tautomers (keto-enol tautomerism), as exemplified by piroxicam.
Side effects:
The oxicams are associated with drug-related erythema multiforme (EM), Stevens–Johnson syndrome, and toxic epidermal necrolysis (TEN). This association is one of the reasons oxicams are not regularly prescribed.
**Relugolix/estradiol/norethisterone acetate**
Relugolix/estradiol/norethisterone acetate:
Relugolix/estradiol/norethisterone acetate, sold under the brand names Myfembree and Ryeqo, is a fixed-dose combination hormonal medication which is used for the treatment of heavy menstrual bleeding associated with uterine leiomyomas (fibroids) and for moderate to severe pain associated with endometriosis. It contains relugolix, an orally active gonadotropin-releasing hormone antagonist (GnRH antagonist), estradiol, an estrogen, and norethisterone acetate, a progestin. The medication is taken by mouth. The most common side effects of the medication include hot flushes, excessive sweating or night sweats, uterine bleeding, hair loss or thinning, and decreased interest in sex. The medication was approved for medical use in the United States in May 2021, and in the European Union in July 2021.
Medical uses:
The medication is used in the treatment of heavy menstrual bleeding associated with uterine fibroids and for moderate to severe pain associated with endometriosis, both in premenopausal women.
Available forms The medication is formulated as an oral tablet containing a fixed-dose combination of 40 mg relugolix, 1 mg estradiol, and 0.5 mg norethisterone acetate.
Pharmacology:
Pharmacodynamics Relugolix acts as a GnRH antagonist, or an antagonist of the gonadotropin-releasing hormone receptor. Estradiol is an estrogen, or an agonist of the estrogen receptors, whereas norethisterone acetate is a progestin (synthetic progestogen), or an agonist of the progesterone receptors. Relugolix suppresses ovarian sex hormone production, whereas estradiol and norethisterone acetate provide hormonal add-back to reduce hypogonadal and menopausal-like symptoms.
History:
The medication was approved for medical use in the United States in May 2021. On 20 May 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Ryeqo, intended for the treatment of symptoms of uterine fibroids. The applicant for this medicinal product is Gedeon Richter Plc. The combination was approved for medical use in the European Union in July 2021. In August 2022, the medication was approved for the treatment of moderate to severe pain associated with endometriosis in the United States.
Research:
The combination is also under development as a birth control pill for prevention of pregnancy in premenopausal women.
**CCL20**
CCL20:
Chemokine (C-C motif) ligand 20 (CCL20) or liver activation regulated chemokine (LARC) or Macrophage Inflammatory Protein-3 (MIP3A) is a small cytokine belonging to the CC chemokine family. It is strongly chemotactic for lymphocytes and weakly attracts neutrophils. CCL20 is implicated in the formation and function of mucosal lymphoid tissues via chemoattraction of lymphocytes and dendritic cells towards the epithelial cells surrounding these tissues. CCL20 elicits its effects on its target cells by binding and activating the chemokine receptor CCR6. Gene expression of CCL20 can be induced by microbial factors such as lipopolysaccharide (LPS), and inflammatory cytokines such as tumor necrosis factor and interferon-γ, and down-regulated by IL-10. CCL20 is expressed in several tissues with highest expression observed in peripheral blood lymphocytes, lymph nodes, liver, appendix, and fetal lung and lower levels in thymus, testis, prostate and gut. The gene for CCL20 (scya20) is located on chromosome 2 in humans. Recent research in an animal model of multiple sclerosis known as experimental autoimmune encephalitis (EAE) demonstrated that regional neural activation can create "gates" for pathogenic CD4+ T cells to enter the CNS by increasing CCL20 expression, especially at L5. Sensory nerve stimulation, elicited by using muscles in the leg or electrical stimulation as in Arima et al., 2012, activates sympathetic neurons whose axons run through the dorsal root ganglia containing cell bodies of the stimulated afferent sensory nerve. Sympathetic neuronal activity activates the IL-6 amplifier, resulting in increased regional CCL20 expression and subsequent pathogenic CD4+ T cell accumulation at the same spinal cord level. CCL20 expression was observed to be dependent on IL-6 amplifier activation, which is dependent on NF-κB and STAT3 activation. This research provides evidence for a critical role for CCL20 in autoimmune pathogenesis of the central nervous system.
**Berendsen thermostat**
Berendsen thermostat:
The Berendsen thermostat is an algorithm to re-scale the velocities of particles in molecular dynamics simulations to control the simulation temperature.
Basic description:
In this scheme, the system is weakly coupled to a heat bath with some temperature. The thermostat suppresses fluctuations of the kinetic energy of the system and therefore cannot produce trajectories consistent with the canonical ensemble. The temperature of the system is corrected such that the deviation exponentially decays with some time constant τ:

dT/dt = (T₀ − T)/τ

Though the thermostat does not generate a correct canonical ensemble (especially for small systems), for large systems on the order of hundreds or thousands of atoms/molecules, the approximation yields roughly correct results for most calculated properties. The scheme is widely used due to the efficiency with which it relaxes a system to some target (bath) temperature. In many instances, systems are initially equilibrated using the Berendsen scheme, while properties are calculated using the widely known Nosé–Hoover thermostat, which correctly generates trajectories consistent with a canonical ensemble. However, the Berendsen thermostat can result in the flying ice cube effect, an artifact which can be eliminated by using the more rigorous Bussi–Donadio–Parrinello thermostat; for this reason, it has been recommended that usage of the Berendsen thermostat be discontinued in almost all cases except for replication of prior studies.
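As a concrete illustration, the following is a minimal sketch (Python/NumPy, not tied to any particular molecular dynamics package) of the velocity-rescaling step implied by the relaxation equation above; the scale factor λ = sqrt(1 + (Δt/τ)(T₀/T − 1)) is the standard weak-coupling form.

```python
import numpy as np

def berendsen_rescale(velocities, masses, target_T, dt, tau, k_B=1.0):
    """Rescale particle velocities so the kinetic temperature relaxes toward target_T.

    velocities: (N, 3) array; masses: (N,) array; units assume k_B = 1 unless overridden.
    """
    # Instantaneous kinetic temperature from the equipartition theorem,
    # assuming 3 translational degrees of freedom per particle.
    n_dof = 3 * len(masses)
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    T_inst = 2.0 * kinetic / (n_dof * k_B)

    # Berendsen weak-coupling scale factor, applied once per timestep dt.
    lam = np.sqrt(1.0 + (dt / tau) * (target_T / T_inst - 1.0))
    return velocities * lam
```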
**Human body temperature**
Human body temperature:
Normal human body-temperature (normothermia, euthermia) is the typical temperature range found in humans. The normal human body temperature range is typically stated as 36.5–37.5 °C (97.7–99.5 °F).Human body temperature varies. It depends on sex, age, time of day, exertion level, health status (such as illness and menstruation), what part of the body the measurement is taken at, state of consciousness (waking, sleeping, sedated), and emotions. Body temperature is kept in the normal range by a homeostatic function known as thermoregulation, in which adjustment of temperature is triggered by the central nervous system.
Methods of measurement:
Taking a human's temperature is an initial part of a full clinical examination. There are various types of medical thermometers, as well as sites used for measurement, including in the rectum (rectal temperature), in the mouth (oral temperature), under the arm (axillary temperature), in the ear (tympanic temperature), on the skin of the forehead over the temporal artery, and using heat flux sensors.

Variations Temperature control (thermoregulation) is a homeostatic mechanism that keeps the organism at optimum operating temperature, as the temperature affects the rate of chemical reactions. In humans, the average internal temperature is widely accepted to be 37 °C (98.6 °F), a "normal" temperature established in the 1800s. But newer studies show that average internal temperature for men and women is 36.4 °C (97.5 °F). No person always has exactly the same temperature at every moment of the day. Temperatures cycle regularly up and down through the day, as controlled by the person's circadian rhythm. The lowest temperature occurs about two hours before the person normally wakes up. Additionally, temperatures change according to activities and external factors. In addition to varying throughout the day, normal body temperature may also differ as much as 0.5 °C (0.9 °F) from one day to the next, so that the highest or lowest temperatures on one day will not always exactly match the highest or lowest temperatures on the next day.
Methods of measurement:
Normal human body temperature varies slightly from person to person and by the time of day. Consequently, each type of measurement has a range of normal temperatures. The range for normal human body temperatures, taken orally, is 36.8 ± 0.5 °C (98.2 ± 0.9 °F). This means that any oral temperature between 36.3 and 37.3 °C (97.3 and 99.1 °F) is likely to be normal.The normal human body temperature is often stated as 36.5–37.5 °C (97.7–99.5 °F). In adults a review of the literature has found a wider range of 33.2–38.2 °C (91.8–100.8 °F) for normal temperatures, depending on the gender and location measured.Reported values vary depending on how it is measured: oral (under the tongue): 36.8±0.4 °C (98.2±0.72 °F), internal (rectal, vaginal): 37.0 °C (98.6 °F). A rectal or vaginal measurement taken directly inside the body cavity is typically slightly higher than oral measurement, and oral measurement is somewhat higher than skin measurement. Other places, such as under the arm or in the ear, produce different typical temperatures. While some people think of these averages as representing normal or ideal measurements, a wide range of temperatures has been found in healthy people. The body temperature of a healthy person varies during the day by about 0.5 °C (0.9 °F) with lower temperatures in the morning and higher temperatures in the late afternoon and evening, as the body's needs and activities change. Other circumstances also affect the body's temperature. The core body temperature of an individual tends to have the lowest value in the second half of the sleep cycle; the lowest point, called the nadir, is one of the primary markers for circadian rhythms. The body temperature also changes when a person is hungry, sleepy, sick, or cold.
Methods of measurement:
Natural rhythms Body temperature normally fluctuates over the day following circadian rhythms, with the lowest levels around 4 a.m. and the highest in the late afternoon, between 4:00 and 6:00 p.m. (assuming the person sleeps at night and stays awake during the day). Therefore, an oral temperature of 37.3 °C (99.1 °F) would, strictly speaking, be a normal, healthy temperature in the afternoon but not in the early morning. An individual's body temperature typically changes by about 0.5 °C (0.9 °F) between its highest and lowest points each day. Body temperature is sensitive to many hormones, so women have a temperature rhythm that varies with the menstrual cycle, called a circamensal rhythm. A woman's basal body temperature rises sharply after ovulation, as estrogen production decreases and progesterone increases. Fertility awareness programs use this change to identify when a woman has ovulated to achieve or avoid pregnancy. During the luteal phase of the menstrual cycle, both the lowest and the average temperatures are slightly higher than during other parts of the cycle. However, the amount that the temperature rises during each day is slightly lower than typical, so the highest temperature of the day is not very much higher than usual. Hormonal contraceptives both suppress the circamensal rhythm and raise the typical body temperature by about 0.6 °C (1.1 °F). Temperature also may vary with the change of seasons during each year. This pattern is called a circannual rhythm. Studies of seasonal variations have produced inconsistent results. People living in different climates may have different seasonal patterns. It has been found that physically active individuals have larger changes in body temperature throughout the day. Physically active people have been reported to have lower body temperatures than their less active peers in the early morning and similar or higher body temperatures later in the day. With increased age, both average body temperature and the amount of daily variability in the body temperature tend to decrease. Elderly patients may have a decreased ability to generate body heat during a fever, so even a somewhat elevated temperature can indicate a serious underlying cause in geriatrics. One study suggested that the average body temperature has also decreased since the 1850s. The study's authors believe the most likely explanation for the change is a reduction in inflammation at the population level due to decreased chronic infections and improved hygiene.
Methods of measurement:
Measurement methods Different methods used for measuring temperature produce different results. The temperature reading depends on which part of the body is being measured. The typical daytime temperatures among healthy adults are as follows: temperature in the anus (rectal), vagina, or ear (tympanic) is about 37.5 °C (99.5 °F); temperature in the mouth (oral) is about 36.8 °C (98.2 °F); and temperature under the arm (axillary) is about 36.5 °C (97.7 °F). Generally, oral, rectal, gut, and core body temperatures, although slightly different, are well-correlated. Oral temperatures are influenced by drinking, chewing, smoking, and breathing with the mouth open. Mouth breathing and cold drinks or food reduce oral temperatures; hot drinks, hot food, chewing, and smoking raise oral temperatures. Each measurement method also has different normal ranges depending on sex.
Methods of measurement:
Infrared thermometer As of 2016 reviews of infrared thermometers have found them to be of variable accuracy. This includes tympanic infrared thermometers in children.
Methods of measurement:
Variations due to outside factors Sleep disturbances also affect temperatures. Normally, body temperature drops significantly at a person's normal bedtime and throughout the night. Short-term sleep deprivation produces a higher temperature at night than normal, but long-term sleep deprivation appears to reduce temperatures. Insomnia and poor sleep quality are associated with smaller and later drops in body temperature. Similarly, waking up unusually early, sleeping in, jet lag and changes to shift work schedules may affect body temperature.
Concept:
Fever A temperature setpoint is the level at which the body attempts to maintain its temperature. When the setpoint is raised, the result is a fever. Most fevers are caused by infectious disease and can be lowered, if desired, with antipyretic medications.
Concept:
An early morning temperature higher than 37.2 °C (99.0 °F) or a late afternoon temperature higher than 37.7 °C (99.9 °F) is normally considered a fever, assuming that the temperature is elevated due to a change in the hypothalamus's setpoint. Lower thresholds are sometimes appropriate for elderly people. The normal daily temperature variation is typically 0.5 °C (0.90 °F), but can be greater among people recovering from a fever.An organism at optimum temperature is considered afebrile, meaning "without fever". If temperature is raised, but the setpoint is not raised, then the result is hyperthermia.
Concept:
Hyperthermia Hyperthermia occurs when the body produces or absorbs more heat than it can dissipate. It is usually caused by prolonged exposure to high temperatures. The heat-regulating mechanisms of the body eventually become overwhelmed and unable to deal effectively with the heat, causing the body temperature to climb uncontrollably. Hyperthermia at or above about 40 °C (104 °F) is a life-threatening medical emergency that requires immediate treatment. Common symptoms include headache, confusion, and fatigue. If sweating has resulted in dehydration, then the affected person may have dry, red skin.
Concept:
In a medical setting, mild hyperthermia is commonly called heat exhaustion or heat prostration; severe hyperthermia is called heat stroke. Heatstroke may come on suddenly, but it usually follows the untreated milder stages. Treatment involves cooling and rehydrating the body; fever-reducing drugs are useless for this condition. This may be done by moving out of direct sunlight to a cooler and shaded environment, drinking water, removing clothing that might keep heat close to the body, or sitting in front of a fan. Bathing in tepid or cool water, or even just washing the face and other exposed areas of the skin, can be helpful.
Concept:
With fever, the body's core temperature rises to a higher temperature through the action of the part of the brain that controls the body temperature; with hyperthermia, the body temperature is raised without the influence of the heat control centers.
Hypothermia In hypothermia, body temperature drops below that required for normal metabolism and bodily functions. In humans, this is usually due to excessive exposure to cold air or water, but it can be deliberately induced as a medical treatment. Symptoms usually appear when the body's core temperature drops by 1–2 °C (1.8–3.6 °F) below normal temperature.
Concept:
Basal body temperature Basal body temperature is the lowest temperature attained by the body during rest (usually during sleep). It is generally measured immediately after awakening and before any physical activity has been undertaken, although the temperature measured at that time is somewhat higher than the true basal body temperature. In women, temperature differs at various points in the menstrual cycle, and this can be used in the long term to track ovulation both to aid conception or avoid pregnancy. This process is called fertility awareness.
Concept:
Core temperature Core temperature, also called core body temperature, is the operating temperature of an organism, specifically in deep structures of the body such as the liver, in comparison to temperatures of peripheral tissues. Core temperature is normally maintained within a narrow range so that essential enzymatic reactions can occur. Significant core temperature elevation (hyperthermia) or depression (hypothermia) over more than a brief period of time is incompatible with human life.
Concept:
Temperature examination in the heart, using a catheter, is the traditional gold standard measurement used to estimate core temperature (oral temperature is affected by hot or cold drinks, ambient temperature fluctuations as well as mouth-breathing). Since catheters are highly invasive, the generally accepted alternative for measuring core body temperature is through rectal measurements. Rectal temperature is expected to be approximately 1 Fahrenheit (or 0.55 Celsius) degree higher than an oral temperature taken on the same person at the same time. Ear thermometers measure temperature from the tympanic membrane using infrared sensors and also aim to measure core body temperature, since the blood supply of this membrane is directly shared with the brain. However, this method of measuring body temperature is not as accurate as rectal measurement and has a low sensitivity for fever, failing to determine three or four out of every ten fever measurements in children. Ear temperature measurement may be acceptable for observing trends in body temperature but is less useful in consistently identifying and diagnosing fever.
Concept:
Until recently, direct measurement of core body temperature required either an ingestible device or surgical insertion of a probe. Therefore, a variety of indirect methods have commonly been used as the preferred alternative to these more accurate albeit more invasive methods. The rectal or vaginal temperature is generally considered to give the most accurate assessment of core body temperature, particularly in hypothermia. In the early 2000s, ingestible thermistors in capsule form were produced, allowing the temperature inside the digestive tract to be transmitted to an external receiver; one study found that these were comparable in accuracy to rectal temperature measurement. More recently, a new method using heat flux sensors have been developed. Several research papers show that its accuracy is similar to the invasive methods.
Temperature variation:
Hot 44 °C (111.2 °F) or more – Almost certainly death will occur; however, people have been known to survive up to 46.5 °C (115.7 °F).
43 °C (109.4 °F) – Normally death, or there may be serious brain damage, continuous convulsions, and shock. Cardio-respiratory collapse will likely occur.
42 °C (107.6 °F) – Subject may turn pale or remain flushed and red. They may become comatose, be in severe delirium, vomiting, and convulsions can occur.
41 °C (105.8 °F) – (Medical emergency) – Fainting, vomiting, severe headache, dizziness, confusion, hallucinations, delirium, and drowsiness can occur. There may also be palpitations and breathlessness.
40 °C (104 °F) – Fainting, dehydration, weakness, vomiting, headache, breathlessness, and dizziness may occur as well as profuse sweating.
39 °C (102.2 °F) – Severe sweating, flushed, and red. Fast heart rate and breathlessness. There may be exhaustion accompanying this. Children and people with epilepsy may suffer convulsions at this temperature.
38 °C (100.4 °F) – (Classed as hyperthermia if not caused by a fever) – Feeling hot, sweating, feeling thirsty, feeling very uncomfortable, slightly hungry. If this is caused by fever, there may also be chills.
Normal 36.5–37.5 °C (97.7–99.5 °F) is a typically reported range for normal body temperature.
Cold 36 °C (96.8 °F) – Feeling cold, mild to moderate shivering. Body temperature may drop this low during sleep. This can be a normal body temperature for sleeping.
35 °C (95 °F) – (Hypothermia is less than 35 °C (95 °F)) – Intense shivering, numbness and bluish/grayness of the skin. There is the possibility of heart irritability.
34 °C (93.2 °F) – Severe shivering, loss of movement of fingers, blueness, and confusion. Some behavioral changes may take place.
33 °C (91.4 °F) – Moderate to severe confusion, sleepiness, depressed reflexes, progressive loss of shivering, slow heartbeat, shallow breathing. Shivering may stop. The subject may be unresponsive to certain stimuli.
32 °C (89.6 °F) – (Medical emergency) – Hallucinations, delirium, complete confusion, extreme sleepiness that is progressively becoming comatose. Shivering is absent (subject may even think they are hot). Reflex may be absent or very slight.
31 °C (87.8 °F) – Comatose, very rarely conscious. No or slight reflexes. Very shallow breathing and slow heart rate. Possibility of serious heart rhythm problems.
28 °C (82.4 °F) – Severe heart rhythm disturbances are likely and breathing may stop at any time. The person may appear to be dead.
Temperature variation:
24–26 °C (75.2–78.8 °F) or less – Death usually occurs due to irregular heart beat or respiratory arrest; however, some patients have been known to survive with body temperatures as low as 13.7 °C (56.7 °F). There are non-verbal corporal cues that can hint that an individual is experiencing a low body temperature, which can be used for those with dysphasia or for infants. Examples of non-verbal cues of coldness include stillness and lethargy of movement, sneezing, unusual paleness of the skin among light-skinned people, and, among males, shrinkage and contraction of the scrotum.
Effect of environment:
Environmental conditions, primarily temperature and humidity, affect the ability of the mammalian body to thermoregulate. The psychrometric temperature, of which the wet-bulb temperature is the main component, largely limits thermoregulation. It was thought that a wet-bulb temperature of about 35°C was the highest sustained value consistent with human life.
Effect of environment:
A 2022 study on the effect of heat on young people found that the critical wet-bulb temperature at which heat stress can no longer be compensated, Twb,crit, in young, healthy adults performing tasks at modest metabolic rates mimicking basic activities of daily life was much lower than the 35 °C usually assumed: about 30.55 °C in 36–40 °C humid environments, progressively decreasing in hotter, dry ambient environments. At low temperatures the body thermoregulates by generating heat, but this becomes unsustainable at extremely low temperatures.
Historical understanding:
In the 19th century, most books quoted "blood heat" as 98 °F, until a study published the mean (but not the variance) of a large sample as 36.88 °C (98.38 °F). Subsequently, that mean was widely quoted as "37 °C or 98.4 °F" until editors realized 37 °C is equal to 98.6 °F, not 98.4 °F. The 37 °C value was set by German physician Carl Reinhold August Wunderlich in his 1868 book, which put temperature charts into widespread clinical use. Dictionaries and other sources that quoted these averages did add the word "about" to show that there is some variance, but generally did not state how wide the variance is.
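The arithmetic behind that editorial confusion is easy to check; this small sketch (Python, illustrative only) converts the two historical Celsius values to Fahrenheit.

```python
def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion."""
    return c * 9.0 / 5.0 + 32.0

print(round(celsius_to_fahrenheit(36.88), 2))  # 98.38 -- the 19th-century sample mean
print(round(celsius_to_fahrenheit(37.00), 1))  # 98.6  -- not the 98.4 figure quoted earlier
```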
**Acid hydrolase**
Acid hydrolase:
An acid hydrolase is an enzyme that works best at acidic pHs. It is commonly located in lysosomes, which are acidic on the inside. Acid hydrolases may be nucleases, proteases, glycosidases, lipases, phosphatases, sulfatases and phospholipases, and they make up the approximately 50 degradative enzymes of the lysosome that break apart biological matter. Examples of acid hydrolases include nucleases, such as nuclease P1 from Penicillium citrinum (used in the food industry for taste enhancement and present in Gouda cheese), and lipases, such as lysosomal acid lipase.
**Babylonian Almanac**
Babylonian Almanac:
The Babylonian Almanac is an almanac, a source of predictions of astronomical phenomena, made for the specific years contained within it. The work is known entirely from manuscripts, of which fifty-two have been discovered. Among these, there are significant variations in certain lines of the ancient texts.
**Framing effect (psychology)**
Framing effect (psychology):
The framing effect is a cognitive bias in which people decide between options based on whether the options are presented with positive or negative connotations. Individuals tend to make risk-averse choices when options are framed positively (in terms of gains) and risk-seeking choices when the same options are framed negatively (in terms of losses). In studies of the bias, options are presented in terms of the probability of either losses or gains. While differently expressed, the options described are in effect identical. Gain and loss are defined in the scenario as descriptions of outcomes, for example, lives lost or saved, patients treated or not treated, monetary gains or losses. Prospect theory posits that a loss is more significant than the equivalent gain, that a sure gain (certainty effect and pseudocertainty effect) is favored over a probabilistic gain, and that a probabilistic loss is preferred to a definite loss. One of the dangers of framing effects is that people are often provided with options within the context of only one of the two frames. The concept helps to develop an understanding of frame analysis within social movements, and also in the formation of political opinion, where spin plays a large role in political opinion polls that are framed to encourage a response beneficial to the organization that has commissioned the poll. It has been suggested that the use of the technique is discrediting political polls themselves. The effect is reduced, or even eliminated, if ample credible information is provided to people.
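To make the loss-aversion claim concrete, the sketch below evaluates a prospect-theory value function using the commonly cited Tversky–Kahneman (1992) median parameter estimates; the function form and the parameter values (α ≈ 0.88, λ ≈ 2.25) are standard textbook material rather than figures from this article.

```python
# Minimal sketch of a prospect-theory value function, using the commonly
# cited Tversky & Kahneman (1992) median parameter estimates. These numbers
# are illustrative textbook values, not figures from the article above.

ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss-aversion coefficient (losses loom ~2.25x larger)

def value(x: float) -> float:
    """Subjective value of a gain (x > 0) or loss (x < 0) relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

if __name__ == "__main__":
    # A $100 loss feels considerably worse than a $100 gain feels good.
    print(value(100))    # ~57.5
    print(value(-100))   # ~-129.5
```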
Research:
Amos Tversky and Daniel Kahneman explored how different phrasing affected participants' responses to a choice in a hypothetical life and death situation in 1981. Participants were asked to choose between two treatments for 600 people affected by a deadly disease. Treatment A was predicted to result in 400 deaths, whereas treatment B had a 33% chance that no one would die but a 66% chance that everyone would die. This choice was then presented to participants either with positive framing, i.e. how many people would live, or with negative framing, i.e. how many people would die.
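The two treatments are statistically equivalent, which is what makes the framing manipulation informative. A minimal arithmetic check, assuming the original study's exact one-third and two-thirds probabilities (the 33% and 66% figures above are rounded versions of these):

```python
# Expected outcomes for the two treatments in the 600-person disease scenario.
# Uses the original study's exact probabilities (1/3 and 2/3); the 33%/66%
# figures in the text are rounded versions of the same numbers.

population = 600

# Treatment A: 400 die for certain, 200 are saved for certain.
expected_saved_a = population - 400

# Treatment B: 1/3 chance nobody dies, 2/3 chance everybody dies.
expected_saved_b = (1 / 3) * population + (2 / 3) * 0

print(expected_saved_a, expected_saved_b)  # 200 200.0 -- identical expectations
```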
Research:
Treatment A was chosen by 72% of participants when it was presented with positive framing ("saves 200 lives"), dropping to 22% when the same choice was presented with negative framing ("400 people will die").
This effect has been shown in other contexts: 93% of PhD students registered early when a penalty fee for late registration was emphasized, but only 67% did so when this was presented as a discount for earlier registration.
62% of people disagreed with allowing "public condemnation of democracy", but only 46% of people agreed that it was right to "forbid public condemnation of democracy".
More people will support an economic policy if the employment rate is emphasised than when the associated unemployment rate is highlighted.
It has been argued that pretrial detention may increase a defendant's willingness to accept a plea bargain, since imprisonment, rather than freedom, will be his baseline, and pleading guilty will be viewed as an event that will cause his earlier release rather than as an event that will put him in prison.
Research:
Extensionality violation In logic, extensionality requires "two formulas which have the same truth-value under any truth-assignments to be mutually substitutable salva veritate in a sentence that contains one of these formulas." Put simply, objects that have the same external properties are equal. This principle, applied to decision making, suggests that making a decision in a problem should not be affected by how the problem is described. For example, varied descriptions of the same decision problem should not give rise to different decisions, due to the extensionality principle. If judgments are made on the basis of irrelevant information as described, that is called an extensionality violation.
Developmental factors:
The framing effect has consistently been shown to be one of the largest biases in decision making. In general, susceptibility to framing effects increases with age. Age difference factors are particularly important when considering health care and financial decisions. However, the framing effect seems to disappear when the options are encountered in a foreign (non-native) language.: 246 One explanation of this disappearance is that a non-native language provides greater cognitive and emotional distance than one's native tongue. A foreign language is also processed less automatically than a native tongue. This leads to more deliberation, which can affect decision making, resulting in decisions that are more systematic.
Developmental factors:
Childhood and adolescence Framing effects in decision-making become stronger as children age. This is partially because qualitative reasoning increases with age. While preschoolers are more likely to make decisions based on quantitative properties, such as probability of an outcome, elementary schoolers and adolescents become progressively more likely to reason qualitatively, opting for a sure option in a gain frame and a risky option in a loss frame regardless of probabilities. The increase in qualitative thinking is related to an increase in "gist based" thinking that occurs over a lifetime. However, qualitative reasoning, and thus susceptibility to framing effects, is still not as strong in adolescents as in adults, and adolescents are more likely than adults to choose the risky option under both the gain and loss frames of a given scenario. One explanation for adolescent tendencies toward risky choices is that they lack real-world experience with negative consequences, and thus over-rely on conscious evaluation of risks and benefits, focusing on specific information and details or quantitative analysis. This reduces the influence of framing effects and leads to greater consistency across frames of a given scenario. Children between the ages of 10 and 12 are more likely to take risks and show framing effects, while younger children consider only the quantitative differences between the two options presented.
Developmental factors:
Young adulthood Younger adults are more likely than older adults to be enticed by risk-taking when presented with loss frame trials. In multiple studies of undergraduate students, researchers have found that students are more likely to prefer options framed positively. For example, they are more likely to enjoy meat labeled 75% lean meat as opposed to 25% fat, or use condoms advertised as being 95% effective as opposed to having a 5% risk of failure. Young adults are especially susceptible to framing effects when presented with an ill-defined problem in which there is no correct answer and individuals must arbitrarily determine what information they consider relevant. For example, undergraduate students are more willing to purchase an item such as a movie ticket after losing an amount equivalent to the item's cost than after losing the item itself.
Developmental factors:
Older adulthood The framing effect is claimed to be greater in older adults than in younger adults or adolescents. This claim may be a result of enhanced negativity bias, though some sources claim that the negativity bias actually decreases with age. Another possible cause is that older adults have fewer cognitive resources available to them and are more likely to default to less cognitively demanding strategies when faced with a decision. They tend to rely on easily accessible information, or frames, regardless of whether that information is relevant to making the decision in question. Several studies have shown that younger adults will make less biased decisions than older adults because they base their choices on interpretations of patterns of events and can better employ decision making strategies that require cognitive resources like working-memory skills. Older adults, on the other hand, make choices based on immediate reactions to gains and losses. Older adults' lack of cognitive resources, such as flexibility in decision making strategies, may cause older adults to be influenced by emotional frames more than younger adults or adolescents. In addition, as individuals age, they make decisions more quickly than their younger counterparts. Significantly, when prompted to do so, older adults will often make a less biased decision upon reevaluation of their original choice. The increase in framing effects among older adults has important implications, especially in medical contexts. Older adults are influenced heavily by the inclusion or exclusion of extraneous details, meaning they are likely to make serious medical decisions based on how doctors frame the two options rather than on the qualitative differences between the options, which can lead older adults to make inappropriate choices. When considering cancer treatments, framing can shift older adults' focus from short- to long-term survival under a negative and positive frame, respectively. When presented with treatment descriptions described in positive, negative, or neutral terms, older adults are significantly more likely to agree to a treatment when it is positively described than they are to agree to the same treatment when it is described neutrally or negatively. Additionally, framing often leads to inconsistency in choice: a change in description qualities after an initial choice is made can cause older adults to revoke their initial decision in favor of an alternative option. Older adults also remember positively framed statements more accurately than negatively framed statements. This has been demonstrated by evaluating older adults' recall of statements in pamphlets about health care issues.
**Rixty**
Rixty:
Rixty was an alternative payment system that let domestic and international users spend cash and coins for online games and digital content.
Rixty was a subsidiary of MOL AccessPortal Sdn Bhd (MOL), which was one of Asia's leading payment service providers.
History:
Rixty, Inc. was founded in September 2007 and was headquartered in San Francisco, California.
Rixty announced a majority investment by MOL Global in 2012. In 2019, the Rixty website was replaced by Razer Gold; users who visit the website now see a message saying that Rixty has been upgraded to Razer Gold.
**Chip on shoulder**
Chip on shoulder:
To have a chip on one's shoulder is to hold a grudge or grievance that readily provokes disputation.
History:
This idiom traces its roots back to a custom known in North America since the early 19th century. The New York newspaper Long Island Telegraph reported on 20 May 1830: "when two churlish boys were determined to fight, a chip [of wood] would be placed on the shoulder of one, and the other demanded to knock it off at his peril". A similar notion is mentioned in the issue of the Onondaga Standard of Syracuse, New York on 8 December 1830: "'He waylay me', said I, 'the mean sneaking fellow—I am only afraid that he will sue me for damages. Oh! if I only could get him to knock a chip off my shoulder, and so get round the law, I would give him one of the soundest thrashings he ever had'." Some time later, in 1855, the phrase "chip on his shoulder" appeared in the Weekly Oregonian, stating "Leland, in his last issue, struts out with a chip on his shoulder, and dares Bush to knock it off". In American author Mark Twain's 1898 manuscript of Schoolhouse Hill, the character Tom Sawyer states his knowledge of the phrase and custom when he says, "[I]f you want your fuss, and can't wait till recess, which is regular, go at it right and fair; put a chip on your shoulder and dare him to knock it off." In Canada, the custom is well described at St. Peter Claver's Indian Residential School for Ojibway boys in the town of Spanish, Ontario: By custom, the challenger, usually one of the intermediates, anxious to prove his worth or avenge some wrong, would deliberately seek out his foe with a wood chip or flat stone on his shoulder, placed there either by his own hand or by that of somebody else.
History:
The challenger might further provoke his opponent by issuing a dare for him to knock off the chip. The opponent might then display his bravery and contempt by brushing the cheek of the challenger lightly as he did so. In more formal cases, a second might take the chip and present the chip to his man who would then place it on his own shoulder. The boys would then square off and fistfight like boxers.
In popular culture:
Literal occurrences Morley Callaghan's 1948 novella Luke Baldwin's Vow details a tense exchange between Luke and his frenemy Elmer, in which a chip is moved from the shoulder of one boy to the other before being struck off. In the 1968 movie The Shakiest Gun in the West, after running into Arnold the Kid, Dr. Jesse Heywood places a chip on his own shoulder and says, "Ippity-doo, kanaba dip, double dare, knock off the chip". In a 1970s commercial for a household battery, Robert Conrad dared the viewer to knock an Eveready battery off his shoulder.
In popular culture:
Figurative occurrences In 1950 Hank Williams released the song "I Just Don't Like This Kind of Living", containing the line "Why don't you act a little older, And get that chip up off your shoulder?". In 1964, The Beatles released a song, "I'll Cry Instead", containing the line "I've got a chip on my shoulder that's bigger than my feet". In 1969, The Kinks song "Australia" says "Nobody's got a chip on their shoulder". In 1979, AC/DC released "Shot Down in Flames", a song containing the line "When a guy with a chip on his shoulder said: Toss off buddy she's mine". Soft Cell's 1981 album Non-Stop Erotic Cabaret includes the track "Chips On My Shoulder", the lyrics of which feature the narrator lamenting his own entitlement and hypocrisy – "Misery, complaints, self-pity, injustice / Chips on my shoulder". In 2003, 50 Cent released a song, "Many Men (Wish Death)", using this phrase. In 2006, Beyoncé released a song, "Upgrade U", with the line "This ain't no shoulder with a chip, or an ego". The 2007 musical Legally Blonde has a song titled "Chip on My Shoulder", in which, after being accused of having a chip on his shoulder, Emmett Forrest explains to Elle Woods that the need to prove himself motivates him. In 2014, Mac DeMarco released the song "Salad Days", on his album of the same title, using this phrase. In 2017, Taylor Swift, Ed Sheeran, and Future released a song, "End Game", with the line "I got issues and chips on both of my shoulders". In 2017, Kendrick Lamar released a song, "FEEL.", off his album DAMN., using this phrase. In 2018, Calpurnia released a single, "Louie", using the phrase. In 2020, Justin Bieber and Shawn Mendes released a song, "Monster", with the line "I had a chip on my shoulder, had to let it go". The Hives in the song "Howlin' Pelle Talks to the Kids" talk-sing "I got a chip on my shoulder, the size of a boulder, and I might get it off". Warren G's song "What's Next" (featuring Mr. Malik) uses this phrase.
**Susceptibility and severity of infections in pregnancy**
Susceptibility and severity of infections in pregnancy:
In pregnancy, there is an increased susceptibility and/or severity of several infectious diseases.
General determinants:
There are several potential risk factors or causes of this increased risk: An increased immune tolerance in pregnancy to prevent an immune reaction against the fetus.
Maternal physiological changes including a decrease in respiratory volumes and urinary stasis due to an enlarging uterus.
The presence of a placenta for pathogens to use as a habitat, such as by L. monocytogenes and P. falciparum.
Examples:
Pregnant women are more severely affected by influenza, hepatitis E, herpes simplex and malaria. The evidence is more limited for coccidioidomycosis, measles, smallpox, and varicella. Pregnancy may also increase susceptibility for toxoplasmosis.
Examples:
During the 2009 H1N1 pandemic, as well as during interpandemic periods, women in the third trimester of pregnancy were at increased risk for severe disease, such as disease requiring admission to an intensive care unit or resulting in death, as compared with women in an earlier stage of pregnancy. For hepatitis E, the case fatality rate among pregnant women has been estimated to be between 15% and 25%, as compared with a range of 0.5 to 4% in the population overall, with the highest susceptibility in the third trimester. Primary herpes simplex infection, when occurring in pregnant women, has an increased risk of dissemination and hepatitis, an otherwise rare complication in immunocompetent adults, particularly during the third trimester. Also, recurrences of herpes genitalis increase in frequency during pregnancy. The risk of severe malaria by Plasmodium falciparum is three times as high in pregnant women, with a median maternal mortality of 40% reported in studies in the Asia–Pacific region. In women where the pregnancy is not the first, malaria infection is more often asymptomatic, even at high parasite loads, compared to women having their first pregnancy. There is a decreasing susceptibility to malaria with increasing parity, probably due to immunity to pregnancy-specific antigens. Young maternal age also increases the risk. Studies differ on whether the risk differs between trimesters. Limited data suggest that malaria caused by Plasmodium vivax is also more severe during pregnancy. Severe and disseminated coccidioidomycosis has been reported to occur at increased frequency in pregnant women in several reports and case series, but this has not been confirmed in subsequent large surveys, and the overall risk appears to be rather low. Varicella occurs at an increased rate during pregnancy, but mortality is not higher than that among men and non-pregnant women. Listeriosis mostly occurs during the third trimester, with Hispanic women appearing to be at particular risk. Listeriosis is a vertically transmitted infection that may cause miscarriage, stillbirth, preterm birth, or serious neonatal disease. Some infections are vertically transmissible, meaning that they can affect the embryo, fetus, or baby.
**Big Five personality traits**
Big Five personality traits:
The Big Five personality traits are a suggested taxonomy, or grouping, of personality traits, developed from the 1980s onward in psychological trait theory.
Big Five personality traits:
Starting in the 1990s, the theory identified five factors, typically labelled (for the US English population): openness to experience (inventive/curious vs. consistent/cautious), conscientiousness (efficient/organized vs. extravagant/careless), extraversion (outgoing/energetic vs. solitary/reserved), agreeableness (friendly/compassionate vs. critical/rational), and neuroticism (sensitive/nervous vs. resilient/confident). When factor analysis is applied to personality survey data, it reveals semantic associations: some words used to describe aspects of personality are often applied to the same person. For example, someone described as conscientious is more likely to be described as "always prepared" rather than "messy". These associations suggest five broad dimensions used in common language to describe the human personality, temperament, and psyche. Those labels for the five factors may be remembered using the acronyms "OCEAN" or "CANOE". Beneath each proposed global factor, there are a number of correlated and more specific primary factors. For example, extraversion is typically associated with qualities such as gregariousness, assertiveness, excitement-seeking, warmth, activity, and positive emotions. These traits are not black and white, but rather placed on continua.
Development:
The Big Five model was built to understand the relationship between personality and academic behaviour. It was defined by several independent sets of researchers who analysed words describing people's behaviour. These researchers first studied relationships between a large number of words related to personality traits. They shortened these lists of words by a factor of 5–10 and then used factor analysis to group the remaining traits (with data mostly based upon people's estimations, in self-report questionnaires and peer ratings) in order to find the basic factors of personality. The initial model was advanced by Ernest Tupes and Raymond Christal in 1958, but failed to reach scholars and scientists until the 1980s. In 1990, J.M. Digman advanced his five-factor model of personality, which Lewis Goldberg put at the highest-organised level. These five overarching domains have been found to contain most known personality traits and are assumed to represent the basic structure behind them all. At least four sets of researchers have worked independently for decades on the problem of reflecting personality traits in language and have mainly identified the same five factors: Tupes and Christal were first, followed by Goldberg at the Oregon Research Institute, Cattell at the University of Illinois, and finally Costa and McCrae. These four sets of researchers used somewhat different methods in finding the five traits, so the resulting sets of five factors have varying names and meanings. However, all have been found to be strongly correlated with their corresponding factors. Studies indicate that the Big Five traits are not nearly as powerful in predicting and explaining actual behaviour as the more numerous facets or primary traits. Each of the Big Five personality traits contains two separate, but correlated, aspects reflecting a level of personality below the broad domains but above the many facet scales also making up part of the Big Five. The aspects are labelled as follows: Volatility and Withdrawal for Neuroticism; Enthusiasm and Assertiveness for Extraversion; Intellect and Openness for Openness to Experience; Industriousness and Orderliness for Conscientiousness; and Compassion and Politeness for Agreeableness. People who do not clearly fit a single pole of each dimension above are considered adaptable, moderate and reasonable, but can also be seen as unprincipled, inscrutable and calculating.
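To make the factor-analytic step concrete, the sketch below recovers latent factors from synthetic adjective ratings; the item names, loadings, and the use of scikit-learn's FactorAnalysis are illustrative assumptions, not the procedure used by the original researchers.

```python
# Minimal sketch of the lexical/factor-analytic approach described above:
# generate synthetic ratings in which some adjectives co-vary, then recover
# latent factors. The synthetic items and the choice of scikit-learn's
# FactorAnalysis are illustrative assumptions, not the original methodology.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people = 500

# Two latent traits drive six observed adjective ratings (plus noise).
conscientiousness = rng.normal(size=n_people)
extraversion = rng.normal(size=n_people)

items = {
    "prepared":  1.0 * conscientiousness + 0.3 * rng.normal(size=n_people),
    "organized": 0.9 * conscientiousness + 0.3 * rng.normal(size=n_people),
    "messy":    -0.8 * conscientiousness + 0.3 * rng.normal(size=n_people),
    "talkative": 1.0 * extraversion + 0.3 * rng.normal(size=n_people),
    "outgoing":  0.9 * extraversion + 0.3 * rng.normal(size=n_people),
    "quiet":    -0.8 * extraversion + 0.3 * rng.normal(size=n_people),
}
X = np.column_stack(list(items.values()))

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
for name, loadings in zip(items, fa.components_.T):
    print(f"{name:>9}: {np.round(loadings, 2)}")  # items cluster onto two factors
```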
Descriptions of the particular personality traits:
Openness to experience Openness to experience is a general appreciation for art, emotion, adventure, unusual ideas, imagination, curiosity, and variety of experience. People who are open to experience are intellectually curious, open to emotion, sensitive to beauty, and willing to try new things. They tend to be, when compared to closed people, more creative and more aware of their feelings. They are also more likely to hold unconventional beliefs. Open people can be perceived as unpredictable or lacking focus, and more likely to engage in risky behaviour or drug-taking. Moreover, individuals with high openness are said to pursue self-actualisation specifically by seeking out intense, euphoric experiences. Conversely, those with low openness want to be fulfilled by persevering and are characterised as pragmatic and data-driven – sometimes even perceived to be dogmatic and closed-minded. Some disagreement remains about how to interpret and contextualise the openness factor, as there is a lack of biological support for this particular trait. Openness has not shown a significant association with any brain region, unlike the other four traits, which did when brain imaging was used to detect changes in volume associated with each trait.
Descriptions of the particular personality traits:
Sample items I have a rich vocabulary.
I have a vivid imagination.
I have excellent ideas.
I am quick to understand things.
I use difficult words.
I spend time reflecting on things.
I am full of ideas.
Descriptions of the particular personality traits:
I have difficulty understanding abstract ideas. (Reversed) I am not interested in abstract ideas. (Reversed) I do not have a good imagination. (Reversed)
Conscientiousness Conscientiousness is a tendency to be self-disciplined, act dutifully, and strive for achievement against measures or outside expectations. It is related to people's level of impulse control, regulation, and direction. High conscientiousness is often perceived as being stubborn and focused. Low conscientiousness is associated with flexibility and spontaneity, but can also appear as sloppiness and lack of reliability. High conscientiousness indicates a preference for planned rather than spontaneous behaviour. The average level of conscientiousness rises among young adults and then declines among older adults.
Descriptions of the particular personality traits:
Sample items I am always prepared.
I pay attention to details.
I get chores done right away.
I follow a schedule.
I am exacting in my work.
Descriptions of the particular personality traits:
I do not like order. (Reversed) I leave my belongings around. (Reversed) I make a mess of things. (Reversed) I often forget to put things back in their proper place. (Reversed) I shirk my duties. (Reversed)
Extraversion Extraversion is characterised by breadth of activities (as opposed to depth), surgency from external activities/situations, and energy creation from external means. The trait is marked by pronounced engagement with the external world. Extraverts enjoy interacting with people, and are often perceived as energetic. They tend to be enthusiastic and action-oriented. They possess high group visibility, like to talk, and assert themselves. Extraverts may appear more dominant in social settings, as opposed to introverts in that setting. Introverts have lower social engagement and energy levels than extraverts. They tend to seem quiet, low-key, deliberate, and less involved in the social world. Their lack of social involvement should not be interpreted as shyness or depression; rather, it reflects greater independence of their social world than extraverts. Introverts need less stimulation, and more time alone than extraverts. This does not mean that they are unfriendly or antisocial; rather, they are aloof and reserved in social situations. Generally, people are a combination of extraversion and introversion, with personality psychologist Hans Eysenck suggesting a model by which differences in their brains produce these traits.: 106
Sample items I am the life of the party.
Descriptions of the particular personality traits:
I feel comfortable around people.
I start conversations.
I talk to a lot of different people at parties.
I do not mind being the center of attention.
Descriptions of the particular personality traits:
I do not talk a lot. (Reversed) I keep in the background. (Reversed) I have little to say. (Reversed) I do not like to draw attention to myself. (Reversed) I am quiet around strangers. (Reversed)
Agreeableness Agreeableness is the general concern for social harmony. Agreeable individuals value getting along with others. They are generally considerate, kind, generous, trusting and trustworthy, helpful, and willing to compromise their interests with others. Agreeable people also have an optimistic view of human nature.
Descriptions of the particular personality traits:
Disagreeable individuals place self-interest above getting along with others. They are generally unconcerned with others' well-being and are less likely to extend themselves for other people. Sometimes their skepticism about others' motives causes them to be suspicious, unfriendly, and uncooperative. Disagreeable people are often competitive or challenging, which can be seen as argumentative or untrustworthy. Because agreeableness is a social trait, research has shown that one's agreeableness positively correlates with the quality of relationships with one's team members. Agreeableness also positively predicts transformational leadership skills. In a study conducted among 169 participants in leadership positions in a variety of professions, individuals were asked to take a personality test and were directly evaluated by supervised subordinates. Very agreeable leaders were more likely to be considered transformational rather than transactional. Although the relationship was not strong (r=0.32, β=0.28, p<0.01), it was the strongest of the Big Five traits. However, the same study could not predict leadership effectiveness as evaluated by the leader's direct supervisor. Conversely, agreeableness has been found to be negatively related to transactional leadership in the military. A study of Asian military units showed that agreeable people are more likely to be poor transactional leaders. Therefore, with further research, organisations may be able to determine an individual's potential for performance based on their personality traits. For instance, in their journal article "Which Personality Attributes Are Most Important in the Workplace?", Paul Sackett and Philip Walmsley claim that conscientiousness and agreeableness are "important to success across many different jobs."
Sample items I am interested in people.
Descriptions of the particular personality traits:
I sympathise with others' feelings.
I have a soft heart.
I take time out for others.
I feel others' emotions.
I make people feel at ease.
Descriptions of the particular personality traits:
I am not really interested in others. (Reversed) I insult people. (Reversed) I am not interested in other people's problems. (Reversed) I feel little concern for others. (Reversed)
Neuroticism Neuroticism is the tendency to have strong negative emotions, such as anger, anxiety, or depression. It is sometimes called emotional instability, or is reversed and referred to as emotional stability. According to Hans Eysenck's (1967) theory of personality, neuroticism is associated with low tolerance for stress or strongly disliked changes. Neuroticism is a classic temperament trait that has been studied in temperament research for decades, even before it was adapted by the Five Factor Model.
Descriptions of the particular personality traits:
Neurotic people are emotionally reactive and vulnerable to stress. They are more likely to interpret ordinary situations as threatening. They can perceive minor frustrations as hopelessly difficult. Their negative emotional reactions tend to stay for unusually long periods of time, which means they are often in a bad mood. For instance, neuroticism is connected to pessimism toward work, to certainty that work hinders personal relationships, and to higher levels of anxiety from the pressures at work. Furthermore, neurotic people may display more skin-conductance reactivity than calm and composed people. These problems in emotional regulation can make a neurotic person think less clearly, make worse decisions, and cope less effectively with stress. Being disappointed with one's life achievements can make one more neurotic and increase one's chances of falling into clinical depression. Moreover, neurotic individuals tend to experience more negative life events, but neuroticism also changes in response to positive and negative life experiences. Also, neurotic people tend to have worse psychological well-being. At the other end of the scale, less neurotic individuals are less easily upset and are less emotionally reactive. They tend to be calm, emotionally stable, and free from persistent negative feelings. Freedom from negative feelings does not mean that low scorers experience a lot of positive feelings; that is related to extraversion instead. Neuroticism is similar but not identical to being neurotic in the Freudian sense (i.e., neurosis). Some psychologists prefer to call neuroticism by the term emotional instability to differentiate it from the term neurotic in a career test.
Descriptions of the particular personality traits:
Sample items I get stressed out easily.
I worry about things.
I am easily disturbed.
I get upset easily.
I change my mood a lot.
I have frequent mood swings.
I get irritated easily.
I often feel blue.
I am relaxed most of the time. (Reversed) I seldom feel blue. (Reversed)
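The items marked "(Reversed)" above are reverse-keyed before a trait score is computed. A minimal sketch of that scoring step, assuming a 1–5 Likert response format; the scale range, item keys, and averaging rule are illustrative assumptions, not the official scoring of any published inventory.

```python
# Minimal sketch of scoring Likert-type items, including reverse-keyed ones.
# The 1-5 response scale, the item keys, and the averaging rule are
# illustrative assumptions, not the official scoring of any published inventory.

SCALE_MIN, SCALE_MAX = 1, 5

def reverse_key(response: int) -> int:
    """Flip a response on the scale: 1 <-> 5, 2 <-> 4, 3 stays 3."""
    return SCALE_MIN + SCALE_MAX - response

def trait_score(responses: dict[str, int], reversed_items: set[str]) -> float:
    """Average the responses after reverse-keying the marked items."""
    keyed = [
        reverse_key(v) if item in reversed_items else v
        for item, v in responses.items()
    ]
    return sum(keyed) / len(keyed)

if __name__ == "__main__":
    answers = {
        "I get stressed out easily.": 4,
        "I worry about things.": 5,
        "I am relaxed most of the time.": 2,   # reversed
        "I seldom feel blue.": 1,              # reversed
    }
    reversed_items = {"I am relaxed most of the time.", "I seldom feel blue."}
    print(trait_score(answers, reversed_items))  # 4.5 -> higher = more neurotic
```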
History:
Finding the five factors In 1884, British scientist Sir Francis Galton became the first person known to consider deriving a comprehensive taxonomy of human personality traits by sampling language. The idea that this may be possible is known as the lexical hypothesis.
History:
In 1936, American psychologists Gordon Allport of Harvard University and Henry Odbert of Dartmouth College implemented Galton's hypothesis. They organised for three anonymous people to categorise adjectives from Webster's New International Dictionary and a list of common slang words. The result was a list of 4504 adjectives they believed were descriptive of observable and relatively permanent traits. In 1943, the British-American Raymond Cattell of Harvard University took Allport and Odbert's list and reduced this to a list of "160 odd" terms by eliminating words with very similar meanings. To these, he added terms from 22 other psychological categories, and additional "interest" and "abilities" terms. This resulted in a list of 171 traits. From this he used factor analysis to derive 60 "personality clusters or syndromes", plus an additional 7 minor clusters. Cattell then narrowed this down to 35 terms, and later added a 36th factor in the form of an IQ measure. Through factor analysis in 1945, 1947, and 1948, he created 11 or 12 factor solutions. The 1947 study surveyed university students, which Cattell deemed to have a broad range of personalities due to the cohort including many recently returned war veterans. Also in 1947, German-British psychologist Hans Eysenck of University College London published his book Dimensions of Personality. He posited that the two most important personality dimensions were "Extraversion" and "Neuroticism" (a term he himself coined). In July 1949, American Donald Fiske of the University of Chicago used 22 terms either taken or adapted from Cattell's 1947 study, and through surveys of male university students and statistics derived five factors: "Social Adaptability", "Emotional Control", "Conformity", "Inquiring Intellect", and "Confident Self-expression." Also in 1949, Cattell found 4 additional factors, which he believed consisted of information that could only be provided through self-rating. With this understanding, he created the sixteen-factor 16PF Questionnaire. In 1953, American John W French of Educational Testing Service published an extensive meta-analysis of personality trait factor studies. In 1957, American Ernest Tupes of the United States Air Force undertook a personality trait study of US Air Force officers. Each was rated by their peers using Cattell's 35 terms (or in some cases, the 30 most reliable terms). In 1958, Tupes and fellow American Raymond Christal began a US Air Force study by taking 37 personality factors and other data found in Cattell's 1947 paper, Fiske's 1949 paper, and Tupes' 1957 paper. All but one of the factors chosen were in Cattell's paper, and that one was from Fiske. Through statistical analysis, they derived five factors they labeled "Surgency", "Agreeableness", "Dependability", "Emotional Stability", and "Culture". In addition to the influence of Cattell and Fiske's work, they strongly noted the influence of French's 1953 study. Tupes and Christal further tested and explained their 1958 work in a 1961 paper. American Warren Norman of the University of Michigan replicated Tupes and Christal's work in 1963. He relabeled "Surgency" as "Extroversion or Surgency", and "Dependability" as "Conscientiousness". He also found four subordinate scales for each factor. Norman's paper was much more read than Tupes and Christal's papers had been.
(Norman's later Oregon Research Institute colleague Lewis Goldberg continued this work.)In the 4th edition of the 16PF Questionnaire released in 1968, 5 "global factors" derived from the 16 factors were identified: "Extraversion", "Independence", "Anxiety", "Self-control" and "Tough-mindedness". 16PF advocates have since called these "the original Big 5".
History:
Hiatus in research During the 1970s, the changing zeitgeist made publication of personality research difficult. In his 1968 book Personality and Assessment, Walter Mischel asserted that personality instruments could not predict behavior with a correlation of more than 0.3. Social psychologists like Mischel argued that attitudes and behavior were not stable, but varied with the situation. Predicting behavior from personality instruments was claimed to be impossible.
History:
Renewed attention In 1978, Americans Paul Costa and Robert McCrae of the National Institutes of Health published a book chapter describing their Neuroticism-Extroversion-Openness (NEO) model. The model was based on the three factors in its name. They used Eysenck's concept of "Extroversion" rather than Carl Jung's. Each factor had six facets. The authors expanded their explanation of the model in subsequent papers.
History:
Also in 1978, British psychologist Peter Saville of Brunel University applied statistical analysis to 16PF results, and determined that the model could be reduced to five factors, "Anxiety", "Extraversion", "Warmth", "Imagination" and "Conscientiousness." At a 1980 symposium in Honolulu, Lewis Goldberg, Naomi Takemoto-Chock, Andrew Comrey, and John M. Digman reviewed the available personality instruments of the day. In 1981, Digman and Takemoto-Chock of the University of Hawaii reanalysed data from Cattell, Tupes, Norman, Fiske and Digman. They re-affirmed the validity of the five factors, naming them "Friendly Compliance vs. Hostile Non-compliance", "Extraversion vs. Introversion", "Ego Strength vs. Emotional Disorganization", "Will to Achieve" and "Intellect". They also found weak evidence for the existence of a sixth factor, "Culture". A 1983 paper by GJ Boyle demonstrated that the predictions of personality models correlated better with real-life behavior under stressful emotional conditions, as opposed to typical survey administration under neutral emotional conditions. Peter Saville and his team included the five-factor "Pentagon" model as part of the Occupational Personality Questionnaires (OPQ) in 1984. This was the first commercially available Big Five test. Its factors are "Extroversion", "Vigorous", "Methodical", "Emotional Stability", and "Abstract". This was closely followed by another commercial test, the NEO PI three-factor personality inventory, published by Costa and McCrae in 1985. It used the three NEO factors. (The methodology employed in constructing the NEO instruments has been subject to critical scrutiny.: 431–33 ) Emerging methodologies increasingly confirmed personality theories during the 1980s. Though generally failing to predict single instances of behavior, researchers found that they could predict patterns of behavior by aggregating large numbers of observations. As a result, correlations between personality and behavior increased substantially, and it became clear that "personality" did in fact exist. In a 1990 paper, Goldberg replicated Norman's results using a subset of Norman's term list. In 1992, the NEO PI evolved into the NEO PI-R, adding the factors "Agreeableness" and "Conscientiousness," and becoming a Big Five instrument. This set the names for the factors that are now most commonly used. The NEO maintainers call their model the "Five Factor Model" (FFM). Each NEO personality dimension has six subordinate facets.
History:
Dutch psychologist Wim Hofstee at the University of Groningen used a lexical hypothesis approach with the Dutch language to develop what became the International Personality Item Pool in the 1990s. Further development in Germany and the United States (involving Lewis Goldberg) saw the pool based on three languages. Its questions and results have been mapped to various Big Five personality typing models. Canadians Kibeom Lee and Michael Ashton released a book describing their HEXACO model in 2004. It adds a sixth factor, "Honesty-Humility", to the five (which it calls "Emotionality", "Extraversion", "Agreeableness", "Conscientiousness", and "Openness to Experience"). Each of these factors has four facets.
History:
In 2007, Colin DeYoung (Yale), Lena C. Quilty (CAMH) and Jordan Peterson (Toronto) concluded that the 10 aspects of the Big Five may have distinct biological substrates. This was derived through factor analyses of two data samples with the International Personality Item Pool, followed by cross-correlation with scores derived from 10 genetic factors identified as underlying the shared variance among the Revised NEO Personality Inventory facets. By 2009, personality and social psychologists generally agreed that both personal and situational variables are needed to account for human behavior. Colin G. DeYoung et al. (2016) researched the Big Five model and how the five broad factors are compatible with the 25 scales of the DSM-5's Personality Inventory (PID-5). DeYoung et al. consider the PID-5 to measure facet-level traits. Because the Big Five factors are broader than the 25 scales of the PID-5, there is disagreement in personality psychology relating to the number of factors within the Big Five. According to DeYoung et al., "the number of valid facets might be limited only by the number of traits that can be shown to have discriminant validity." An FFM-associated test was used by Cambridge Analytica, and was part of the "psychographic profiling" controversy during the 2016 US presidential election.
Biological and developmental factors:
The factors that influence a personality are called the determinants of personality. These factors determine the traits which a person develops in the course of development from childhood.
Biological and developmental factors:
Temperament and personality There are debates between temperament researchers and personality researchers as to whether or not biologically based differences define a concept of temperament or a part of personality. The presence of such differences in pre-cultural individuals (such as animals or young infants) suggests that they belong to temperament since personality is a socio-cultural concept. For this reason developmental psychologists generally interpret individual differences in children as an expression of temperament rather than personality. Some researchers argue that temperaments and personality traits are age-specific demonstrations of virtually the same internal qualities. Some believe that early childhood temperaments may become adolescent and adult personality traits as individuals' basic genetic characteristics interact with their changing environments to various degrees. Researchers of adult temperament point out that, similarly to sex, age, and mental illness, temperament is based on biochemical systems whereas personality is a product of socialisation of an individual possessing these four types of features. Temperament interacts with socio-cultural factors, but, similar to sex and age, still cannot be controlled or easily changed by these factors.
Biological and developmental factors:
Therefore, it is suggested that temperament (neurochemically-based individual differences) should be kept as an independent concept for further studies and not be confused with personality (culturally-based individual differences, reflected in the origin of the word "persona" (Lat) as a "social mask").
Biological and developmental factors:
Moreover, temperament refers to dynamic features of behaviour (energetic, tempo, sensitivity, and emotionality-related), whereas personality is to be considered a psycho-social construct comprising the content characteristics of human behaviour (such as values, attitudes, habits, preferences, personal history, self-image). Temperament researchers point out that the lack of attention to surviving temperament research by the creators of the Big Five model led to an overlap between its dimensions and dimensions described in multiple temperament models much earlier. For example, neuroticism reflects the traditional temperament dimension of emotionality, studied by Jerome Kagan's group since the 1960s. Extraversion was likewise first introduced as a temperament type by Jung in the 1920s.
Biological and developmental factors:
Heritability A 1996 behavioural genetics study of twins suggested that heritability and environmental factors both influence all five factors to the same degree. Among four twin studies examined in 2003, the mean percentage of heritability was calculated for each personality trait, and it was concluded that heritability influenced the five factors broadly. The self-report measures were as follows: openness to experience was estimated to have a 57% genetic influence, extraversion 54%, conscientiousness 49%, neuroticism 48%, and agreeableness 42%.
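As a rough illustration of how twin studies yield such percentages, the sketch below applies Falconer's classic approximation h² = 2(r_MZ − r_DZ); the correlation values are invented for illustration, and the studies cited above relied on larger samples and more sophisticated model fitting.

```python
# Minimal sketch of Falconer's approximation for heritability from twin data:
#   h^2 ~= 2 * (r_MZ - r_DZ)
# where r_MZ and r_DZ are trait correlations for identical and fraternal twins.
# The correlation values below are invented for illustration only; the studies
# cited above relied on larger samples and more sophisticated model fitting.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate broad heritability from MZ and DZ twin correlations."""
    return 2.0 * (r_mz - r_dz)

if __name__ == "__main__":
    # Hypothetical correlations for a single personality trait.
    r_identical, r_fraternal = 0.50, 0.25
    print(f"h^2 = {falconer_heritability(r_identical, r_fraternal):.2f}")  # 0.50
```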
Biological and developmental factors:
Non-humans The Big Five personality traits have been assessed in some non-human species, but the methodology is debatable. In one series of studies, human ratings of chimpanzees using the Hominoid Personality Questionnaire revealed factors of extraversion, conscientiousness and agreeableness, as well as an additional factor of dominance, across hundreds of chimpanzees in zoological parks, a large naturalistic sanctuary, and a research laboratory. Neuroticism and openness factors were found in an original zoo sample, but were not replicated in a new zoo sample or in other settings (perhaps reflecting the design of the CPQ). A study review found that markers for the three dimensions extraversion, neuroticism, and agreeableness were found most consistently across different species, followed by openness; only chimpanzees showed markers for conscientious behavior. A study completed in 2020 concluded that dolphins have some personality traits similar to humans; both are large-brained, intelligent animals but have evolved separately for millions of years.
Biological and developmental factors:
Development during childhood and adolescence Research on the Big Five, and personality in general, has focused primarily on individual differences in adulthood, rather than in childhood and adolescence, and often includes temperament traits. Recently, there has been growing recognition of the need to study child and adolescent personality trait development in order to understand how traits develop and change throughout the lifespan. Recent studies have begun to explore the developmental origins and trajectories of the Big Five among children and adolescents, especially those that relate to temperament. Many researchers have sought to distinguish between personality and temperament. Temperament often refers to early behavioral and affective characteristics that are thought to be driven primarily by genes. Models of temperament often include four trait dimensions: surgency/sociability, negative emotionality, persistence/effortful control, and activity level. Some of these differences in temperament are evident at, if not before, birth. For example, both parents and researchers recognize that some newborn infants are peaceful and easily soothed while others are comparatively fussy and hard to calm. Unlike temperament, however, many researchers view the development of personality as gradually occurring throughout childhood. Contrary to some researchers who question whether children have stable personality traits, Big Five or otherwise, most researchers contend that there are significant psychological differences between children that are associated with relatively stable, distinct, and salient behavior patterns. The structure, manifestations, and development of the Big Five in childhood and adolescence have been studied using a variety of methods, including parent- and teacher-ratings, preadolescent and adolescent self- and peer-ratings, and observations of parent-child interactions. Results from these studies support the relative stability of personality traits across the human lifespan, at least from preschool age through adulthood. More specifically, research suggests that four of the Big Five – namely Extraversion, Neuroticism, Conscientiousness, and Agreeableness – reliably describe personality differences in childhood, adolescence, and adulthood. However, some evidence suggests that Openness may not be a fundamental, stable part of childhood personality. Although some researchers have found that Openness in children and adolescents relates to attributes such as creativity, curiosity, imagination, and intellect, many researchers have failed to find distinct individual differences in Openness in childhood and early adolescence. Potentially, Openness may (a) manifest in unique, currently unknown ways in childhood or (b) may only manifest as children develop socially and cognitively. Other studies have found evidence for all of the Big Five traits in childhood and adolescence as well as two other child-specific traits: Irritability and Activity. Despite these specific differences, the majority of findings suggest that personality traits – particularly Extraversion, Neuroticism, Conscientiousness, and Agreeableness – are evident in childhood and adolescence and are associated with distinct social-emotional patterns of behavior that are largely consistent with adult manifestations of those same personality traits.
Some researchers have proposed that youth personality is best described by six trait dimensions: neuroticism, extraversion, openness to experience, agreeableness, conscientiousness, and activity. Despite some preliminary evidence for this "Little Six" model, research in this area has been delayed by a lack of available measures.
Biological and developmental factors:
Previous research has found evidence that most adults become more agreeable, conscientious, and less neurotic as they age. This has been referred to as the maturation effect. Many researchers have sought to investigate how trends in adult personality development compare to trends in youth personality development. Two main population-level indices have been important in this area of research: rank-order consistency and mean-level consistency. Rank-order consistency indicates the relative placement of individuals within a group. Mean-level consistency indicates whether groups increase or decrease on certain traits throughout the lifetime. Findings from these studies indicate that, consistent with adult personality trends, youth personality becomes increasingly more stable in terms of rank-order throughout childhood. Unlike adult personality research, which indicates that people become more agreeable, conscientious, and emotionally stable with age, some findings in youth personality research have indicated that mean levels of agreeableness, conscientiousness, and openness to experience decline from late childhood to late adolescence. The disruption hypothesis, which proposes that biological, social, and psychological changes experienced during youth result in temporary dips in maturity, has been proposed to explain these findings.
Biological and developmental factors:
Extraversion/positive emotionality In Big Five studies, extraversion has been associated with surgency. Children with high Extraversion are energetic, talkative, social, and dominant with children and adults; whereas, children with low Extraversion tend to be quiet, calm, inhibited, and submissive to other children and adults. Individual differences in Extraversion first manifest in infancy as varying levels of positive emotionality. These differences in turn predict social and physical activity during later childhood and may represent, or be associated with, the behavioral activation system. In children, Extraversion/Positive Emotionality includes four sub-traits: three traits that are similar to the previously described traits of temperament – activity, sociability, shyness, and the trait of dominance.
Biological and developmental factors:
Activity: Similarly to findings in temperament research, children with high activity tend to have high energy levels and more intense and frequent motor activity compared to their peers. Salient differences in activity reliably manifest in infancy, persist through adolescence, and fade as motor activity decreases in adulthood or potentially develops into talkativeness.
Dominance: Children with high dominance tend to influence the behavior of others, particularly their peers, to obtain desirable rewards or outcomes. Such children are generally skilled at organizing activities and games and deceiving others by controlling their nonverbal behavior.
Biological and developmental factors:
Shyness: Children with high shyness are generally socially withdrawn, nervous, and inhibited around strangers. In time, such children may become fearful even around "known others", especially if their peers reject them. A similar pattern has been described in longitudinal temperament studies of shyness.
Sociability: Children with high sociability generally prefer to be with others rather than alone. During middle childhood, the distinction between low sociability and high shyness becomes more pronounced, particularly as children gain greater control over how and where they spend their time.
Biological and developmental factors:
Development throughout adulthood Many studies of longitudinal data, which correlate people's test scores over time, and cross-sectional data, which compare personality levels across different age groups, show a high degree of stability in personality traits during adulthood, especially for Neuroticism, which is often regarded as a temperament trait, consistent with longitudinal temperament research on the same traits. Personality has been shown to stabilize for working-age individuals within about four years of starting work. There is also little evidence that adverse life events have any significant impact on the personality of individuals. More recent research and meta-analyses of previous studies, however, indicate that change occurs in all five traits at various points in the lifespan. The new research shows evidence for a maturation effect. On average, levels of agreeableness and conscientiousness typically increase with time, whereas extraversion, neuroticism, and openness tend to decrease. Research has also demonstrated that changes in Big Five personality traits depend on the individual's current stage of development. For example, levels of agreeableness and conscientiousness demonstrate a negative trend during childhood and early adolescence before trending upwards during late adolescence and into adulthood. In addition to these group effects, there are individual differences: different people demonstrate unique patterns of change at all stages of life. In addition, some research (Fleeson, 2001) suggests that the Big Five should not be conceived of as dichotomies (such as extraversion vs. introversion) but as continua. Each individual has the capacity to move along each dimension as circumstances (social or temporal) change. He or she is therefore not simply on one end of each trait dichotomy but is a blend of both, exhibiting some characteristics more often than others. Research on personality with growing age has suggested that as individuals enter their elder years (79–86), those with lower IQ see a rise in extraversion, but a decline in conscientiousness and physical well-being.
Group differences:
Gender differences Some cross-cultural research has shown some patterns of gender differences on responses to the NEO-PI-R and the Big Five Inventory. For example, women consistently report higher Neuroticism, Agreeableness, warmth (an extraversion facet) and openness to feelings, and men often report higher assertiveness (a facet of extraversion) and openness to ideas as assessed by the NEO-PI-R. A study of gender differences in 55 nations using the Big Five Inventory found that women tended to be somewhat higher than men in neuroticism, extraversion, agreeableness, and conscientiousness. The difference in neuroticism was the most prominent and consistent, with significant differences found in 49 of the 55 nations surveyed. Gender differences in personality traits are largest in prosperous, healthy, and more gender-egalitarian nations. The explanation for this, as stated by the researchers of a 2001 paper, is that actions by women in individualistic, egalitarian countries are more likely to be attributed to their personality, rather than being attributed to ascribed gender roles within collectivist, traditional countries. Measured differences in the magnitude of sex differences between more or less developed world regions were caused by changes in the measured personalities of men, not women, in these respective regions. That is, men in highly developed world regions were less neurotic, less extraverted, less conscientious and less agreeable compared to men in less developed world regions. Women, on the other hand, tended not to differ in personality traits across regions. The authors of this 2008 study speculated that resource-poor environments (that is, countries with low levels of development) may inhibit the development of gender differences, whereas resource-rich environments facilitate them. This may be because males require more resources than females in order to reach their full personality potential of less conscientious, less agreeable, less neurotic, and less extraverted. The authors also speculated in their discussion that due to different evolutionary pressures, men may have evolved to be more risk taking and socially dominant, whereas women evolved to be more cautious and nurturing. The authors further posited that ancient hunter-gatherer societies may have been more egalitarian than later agriculturally oriented societies. Hence, the development of gender inequalities may have acted to constrain the development of gender differences in personality that originally evolved in hunter-gatherer societies. As modern societies have become more egalitarian, again, it may be that innate sex differences are no longer constrained and hence manifest more fully than in less-wealthy cultures. This is one interpretation of the results among other possible interpretations.
Group differences:
Birth-order differences Frank Sulloway argues that firstborns are more conscientious, more socially dominant, less agreeable, and less open to new ideas compared to siblings that were born later. Large-scale studies using random samples and self-report personality tests, however, have found milder effects than Sulloway claimed, or no significant effects of birth order on personality. A study using the Project Talent data, which is a large-scale representative survey of American high school students, with 272,003 eligible participants, found statistically significant but very small effects (the average absolute correlation between birth order and personality was .02) of birth order on personality, such that firstborns were slightly more conscientious, dominant, and agreeable, while also being less neurotic and less sociable. Parental socioeconomic status and participant gender had much larger correlations with personality.
Group differences:
In 2002, the Journal of Psychology published a study of Big Five personality trait differences in which researchers explored the relationship between the five-factor model and Universal-Diverse Orientation (UDO) in counselor trainees (Thompson, R., Brossart, D., and Mivielle, A., 2002). UDO is a social attitude characterized by strong awareness and/or acceptance of the similarities and differences among individuals (Miville, M., Romas, J., Johnson, J., and Lon, R., 2002). The study found that counselor trainees who are more open to the idea of creative expression (a facet of Openness to Experience, Openness to Aesthetics) among individuals are more likely to work with a diverse group of clients and to feel comfortable in that role.
Cultural differences:
The Big Five have been studied in a variety of languages and cultures, such as German, Chinese, and Indian. For example, Thompson has claimed to find the Big Five structure across several cultures using an international English-language scale.
Cultural differences:
Cheung, van de Vijver, and Leong (2011) suggest, however, that the Openness factor is particularly unsupported in Asian countries and that a different fifth factor is identified.

Recent work has found relationships between Geert Hofstede's cultural factors, Individualism, Power Distance, Masculinity, and Uncertainty Avoidance, and the average Big Five scores in a country. For instance, the degree to which a country values individualism correlates with its average extraversion, whereas people living in cultures which accept large inequalities in their power structures tend to score somewhat higher on conscientiousness.

Personality differences around the world might even have contributed to the emergence of different political systems. A recent study found that countries' average personality trait levels are correlated with their political systems: countries with higher average trait Openness tended to have more democratic institutions, an association that held even after factoring out other relevant influences such as economic development.

Attempts to replicate the Big Five in other countries with local dictionaries have succeeded in some countries but not in others. Hungarians, for instance, do not appear to have a single agreeableness factor. Other researchers have found evidence for agreeableness but not for other factors. It is important to recognize that individual differences in traits are relevant in a specific cultural context, and that the traits do not have their effects outside of that context.: 189
Health:
Personality and dementia Some diseases cause changes in personality. For example, although gradual memory impairment is the hallmark feature of Alzheimer's disease, a systematic review of personality changes in Alzheimer's disease by Robins Wahlin and Byrne, published in 2011, found systematic and consistent trait changes mapped to the Big Five. The largest change observed was a decrease in conscientiousness. The next most significant changes were an increase in Neuroticism and a decrease in Extraversion, but Openness and Agreeableness also decreased. These changes in personality could assist with early diagnosis.

A study published in 2023 found that the Big Five personality traits may also influence the quality of life experienced by people with Alzheimer's disease and other dementias after diagnosis. In this study, people with dementia who had lower levels of Neuroticism self-reported higher quality of life than those with higher levels of Neuroticism, while those with higher levels of the other four traits self-reported higher quality of life than those with lower levels of these traits. This suggests that, as well as assisting with early diagnosis, the Big Five personality traits could help identify people with dementia who are potentially more vulnerable to adverse outcomes and inform personalized care planning and interventions.
Health:
Personality disorders As of 2002, there were over fifty published studies relating the FFM to personality disorders. Since that time, quite a number of additional studies have expanded on this research base and provided further empirical support for understanding the DSM personality disorders in terms of the FFM domains.

In her review of the personality disorder literature published in 2007, Lee Anna Clark asserted that "the five-factor model of personality is widely accepted as representing the higher-order structure of both normal and abnormal personality traits". However, other researchers disagree that this model is widely accepted (see the section Critique below) and suggest that it simply replicates early temperament research. Noticeably, FFM publications never compare their findings to temperament models, even though temperament and mental disorders (especially personality disorders) are thought to be based on the same neurotransmitter imbalances, just to varying degrees.

The five-factor model was claimed to significantly predict all ten personality disorder symptoms and to outperform the Minnesota Multiphasic Personality Inventory (MMPI) in the prediction of borderline, avoidant, and dependent personality disorder symptoms. However, most predictions related to an increase in Neuroticism and a decrease in Agreeableness, and therefore did not differentiate between the disorders very well.
Health:
Common mental disorders Converging evidence from several nationally representative studies has established three classes of mental disorders which are especially common in the general population: depressive disorders (e.g., major depressive disorder (MDD), dysthymic disorder), anxiety disorders (e.g., generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), panic disorder, agoraphobia, specific phobia, and social phobia), and substance use disorders (SUDs). The Five Factor personality profiles of users of different drugs may differ. For example, the typical profile for heroin users is N⇑, O⇑, A⇓, C⇓, whereas for ecstasy users the high level of N is not expected but E is higher: E⇑, O⇑, A⇓, C⇓.

These common mental disorders (CMDs) have been empirically linked to the Big Five personality traits, neuroticism in particular. Numerous studies have found that high scores on neuroticism significantly increase one's risk of developing a common mental disorder. A large-scale meta-analysis (n > 75,000) examining the relationship between all of the Big Five personality traits and common mental disorders found that low conscientiousness yielded consistently strong effects for each common mental disorder examined (i.e., MDD, dysthymic disorder, GAD, PTSD, panic disorder, agoraphobia, social phobia, specific phobia, and SUD). This finding parallels research on physical health, which has established that conscientiousness is the strongest personality predictor of reduced mortality and is highly negatively correlated with making poor health choices. In regard to the other personality domains, the meta-analysis found that all common mental disorders examined were defined by high neuroticism, most exhibited low extraversion, only SUD was linked to agreeableness (negatively), and no disorders were associated with Openness. A meta-analysis of 59 longitudinal studies showed that high neuroticism predicted the development of anxiety, depression, substance abuse, psychosis, schizophrenia, and non-specific mental distress, even after adjustment for baseline symptoms and psychiatric history.
Health:
The personality-psychopathology models Five major models have been posed to explain the nature of the relationship between personality and mental illness. There is currently no single "best model", as each of them has received at least some empirical support. It is also important to note that these models are not mutually exclusive – more than one may be operating for a particular individual and various mental disorders may be explained by different models.
Health:
The Vulnerability/Risk Model: According to this model, personality contributes to the onset or etiology of various common mental disorders. In other words, pre-existing personality traits either cause the development of CMDs directly or enhance the impact of causal risk factors. There is strong support for neuroticism being a robust vulnerability factor.
The Pathoplasty Model: This model proposes that premorbid personality traits impact the expression, course, severity, and/or treatment response of a mental disorder. An example of this relationship would be a heightened likelihood of committing suicide in a depressed individual who also has low levels of constraint.
The Common Cause Model: According to the common cause model, personality traits are predictive of CMDs because personality and psychopathology have shared genetic and environmental determinants which result in non-causal associations between the two constructs.
Health:
The Spectrum Model: This model proposes that associations between personality and psychopathology are found because these two constructs both occupy a single domain or spectrum and psychopathology is simply a display of the extremes of normal personality function. Support for this model is provided by an issue of criterion overlap. For instance, two of the primary facet scales of neuroticism in the NEO-PI-R are "depression" and "anxiety". Thus the fact that diagnostic criteria for depression, anxiety, and neuroticism assess the same content increases the correlations between these domains.
Health:
The Scar Model: According to the scar model, episodes of a mental disorder 'scar' an individual's personality, changing it in significant ways from premorbid functioning. An example of a scar effect would be a decrease in openness to experience following an episode of PTSD.
Health:
Physical health To examine how the Big Five personality traits are related to subjective health outcomes (positive and negative mood, physical symptoms, and general health concern) and objective health conditions (chronic illness, serious illness, and physical injuries), Jasna Hudek-Knezevic and Igor Kardum conducted a study with a sample of 822 healthy volunteers (438 women and 384 men). Out of the Big Five personality traits, they found neuroticism most related to worse subjective health outcomes and optimistic control to better subjective health outcomes. For objective health conditions, the connections drawn were weak, except that neuroticism significantly predicted chronic illness, whereas optimistic control was more closely related to physical injuries caused by accident.

Being highly conscientious may add as much as five years to one's life. The Big Five personality traits also predict positive health outcomes. In an elderly Japanese sample, conscientiousness, extraversion, and openness were related to lower risk of mortality.

Higher conscientiousness is associated with lower obesity risk. In already obese individuals, higher conscientiousness is associated with a higher likelihood of becoming non-obese over a five-year period.
Effect of personality traits through life:
Education Academic achievement Personality plays an important role in academic achievement. A study of 308 undergraduates who completed the Five Factor Inventory and the Inventory of Learning Processes and reported their GPA suggested that conscientiousness and agreeableness have a positive relationship with all types of learning styles (synthesis-analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism shows an inverse relationship. Moreover, extraversion and openness were proportional to elaborative processing. The Big Five personality traits accounted for 14% of the variance in GPA, suggesting that personality traits make some contribution to academic performance. Furthermore, reflective learning styles (synthesis-analysis and elaborative processing) were able to mediate the relationship between openness and GPA. These results indicate that intellectual curiosity significantly enhances academic performance if students combine their scholarly interest with thoughtful information processing.

A recent study of Israeli high-school students found that those in the gifted program systematically scored higher on openness and lower on neuroticism than those not in the gifted program. While not a measure of the Big Five, gifted students also reported less state anxiety than students not in the gifted program. Specific Big Five personality traits predict learning styles in addition to academic success.
Effect of personality traits through life:
GPA and exam performance are both predicted by conscientiousness.
Neuroticism is negatively related to academic success.
Openness predicts utilizing synthesis-analysis and elaborative-processing learning styles.
Neuroticism negatively correlates with learning styles in general.
Openness and extraversion both predict all four learning styles.

Studies conducted on college students have concluded that hope, which is linked to agreeableness, conscientiousness, neuroticism, and openness, has a positive effect on psychological well-being. Individuals high in neurotic tendencies are less likely to display hopeful tendencies, and these tendencies are negatively associated with well-being. Personality can sometimes be flexible, and measuring the Big Five personality traits as individuals enter certain stages of life may predict their educational identity. Recent studies have suggested the likelihood of an individual's personality affecting their educational identity.
Effect of personality traits through life:
Learning styles Learning styles have been described as "enduring ways of thinking and processing information". In 2008, the Association for Psychological Science (APS) commissioned a report which concluded that no significant evidence exists that learning-style assessments should be included in the education system. Thus it is premature, at best, to conclude that the evidence links the Big Five to "learning styles", or "learning styles" to learning itself.
Effect of personality traits through life:
However, the APS report also suggested that existing research has not exhausted all possible learning styles and that there could exist learning styles worthy of being included in educational practices. There are studies that conclude that personality and thinking styles may be intertwined in ways that link thinking styles to the Big Five personality traits. There is no general consensus on the number or specifications of particular learning styles, but there have been many different proposals.
Effect of personality traits through life:
As one example, Schmeck, Ribich, and Ramanaiah (1997) defined four types of learning styles: synthesis analysis, methodical study, fact retention, and elaborative processing. When all four facets are implicated within the classroom, they will each likely improve academic achievement. This model asserts that students develop either agentic/shallow processing or reflective/deep processing. Deep processors are more often found to be more conscientious, intellectually open, and extraverted than shallow processors. Deep processing is associated with appropriate study methods (methodical study) and a stronger ability to analyze information (synthesis analysis), whereas shallow processors prefer structured fact retention learning styles and are better suited for elaborative processing. The main functions of these four specific learning styles are as follows: openness has been linked to learning styles that often lead to academic success and higher grades, like synthesis analysis and methodical study. Because conscientiousness and openness have been shown to predict all four learning styles, this suggests that individuals who possess characteristics like discipline, determination, and curiosity are more likely to engage in all of the above learning styles.

According to the research carried out by Komarraju, Karau, Schmeck & Avdic (2011), conscientiousness and agreeableness are positively related to all four learning styles, whereas neuroticism was negatively related to those four. Furthermore, extraversion and openness were only positively related to elaborative processing, and openness itself correlated with higher academic achievement.

In addition, a previous study by psychologist Mikael Jensen has shown relationships between the Big Five personality traits, learning, and academic achievement. According to Jensen, all personality traits except neuroticism are associated with learning goals and motivation. Openness and conscientiousness influence individuals to learn to a high degree even when this goes unrecognized, while extraversion and agreeableness have similar effects. Conscientiousness and neuroticism also influence individuals to perform well in front of others for a sense of credit and reward, while agreeableness forces individuals to avoid this strategy of learning. Jensen's study concludes that individuals who score high on the agreeableness trait will likely learn just to perform well in front of others.

Besides openness, all Big Five personality traits helped predict the educational identity of students. Based on these findings, scientists are beginning to see that the Big Five traits might have a large influence on academic motivation, which in turn leads to predicting a student's academic performance.

Some authors have suggested that Big Five personality traits combined with learning styles can help predict some variation in the academic performance and academic motivation of an individual, which can then influence their academic achievements. This may be because individual differences in personality represent stable approaches to information processing. For instance, conscientiousness has consistently emerged as a stable predictor of success in exam performance, largely because conscientious students experience fewer study delays. Conscientiousness shows a positive association with the four learning styles because students with high levels of conscientiousness develop focused learning strategies and appear to be more disciplined and achievement-oriented.
Effect of personality traits through life:
Personality and learning styles are both likely to play significant roles in influencing academic achievement. College students (308 undergraduates) completed the Five Factor Inventory and the Inventory of Learning Processes and reported their grade point average. Two of the Big Five traits, conscientiousness and agreeableness, were positively related with all four learning styles (synthesis analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism was negatively related with all four learning styles. In addition, extraversion and openness were positively related with elaborative processing. The Big Five together explained 14% of the variance in grade point average (GPA), and learning styles explained an additional 3%, suggesting that both personality traits and learning styles contribute to academic performance. Further, the relationship between openness and GPA was mediated by reflective learning styles (synthesis-analysis and elaborative processing). These latter results suggest that being intellectually curious fully enhances academic performance when students combine this scholarly interest with thoughtful information processing. Implications of these results are discussed in the context of teaching techniques and curriculum design.
Effect of personality traits through life:
Distance Learning When the relationship between the five-factor personality traits and academic achievement in distance education settings was examined, the openness personality trait was found to be the most important variable with a positive relationship to academic achievement in distance education environments. In addition, it was found that the self-discipline, extraversion, and adaptability personality traits generally have a positive relationship with academic achievement. The personality trait with the strongest negative relationship to academic achievement was neuroticism. The results generally show that individuals who are organized, planful, and determined, and who are oriented to new ideas and independent thinking, have increased success in distance education environments. On the other hand, individuals with tendencies toward anxiety and stress generally have lower academic success.
Effect of personality traits through life:
Employment Occupation and personality fit Researchers have long suggested that work is more likely to be fulfilling to the individual and beneficial to society when there is alignment between the person and their occupation. For instance, software programmers and scientists were generally more open to experiencing a variety of new activities, were intellectually curious, tended to think in symbols and abstractions, and found repetition boring.
Effect of personality traits through life:
Work success It is believed that the Big Five traits are predictors of future performance outcomes to varying degrees. Specific facets of the Big Five traits are also thought to be indicators of success in the workplace, and each individual facet can give a more precise indication as to the nature of a person. Different traits' facets are needed for different occupations; for example, those who excel in client-facing positions are typically warm and positive, which are sub-traits of agreeableness. These traits would typically not be needed as much in non-client facing roles. Various facets of the Big Five traits can predict the success of people in different environments. The estimated levels of an individual's success in jobs that require public speaking versus one-on-one interactions will differ according to whether that person has particular traits' facets. Job outcome measures include job and training proficiency and personnel data. However, research demonstrating such prediction has been criticized, in part because of the apparently low correlation coefficients characterizing the relationship between personality and job performance. In a 2007 article co-authored by six current or former editors of psychological journals, Dr. Kevin Murphy, Professor of Psychology at Pennsylvania State University and Editor of the Journal of Applied Psychology (1996–2002), states: The problem with personality tests is ... that the validity of personality measures as predictors of job performance is often disappointingly low. The argument for using personality tests to predict performance does not strike me as convincing in the first place.
Effect of personality traits through life:
Such criticisms were put forward by Walter Mischel, whose publication caused a two-decade-long crisis in personality psychometrics. However, later work demonstrated (1) that the correlations obtained by psychometric personality researchers were actually very respectable by comparative standards, and (2) that the economic value of even incremental increases in prediction accuracy was exceptionally large, given the vast differences in performance among those who occupy complex job positions.

There have been studies linking national innovation to openness to experience and conscientiousness. Those who express these traits have shown leadership and beneficial ideas towards their country of origin.

Some businesses, organizations, and interviewers assess individuals based on the Big Five personality traits. Research has suggested that individuals who are considered leaders typically exhibit lower amounts of neurotic traits, maintain higher levels of openness (envisioning success), balanced levels of conscientiousness (well-organized), and balanced levels of extraversion (outgoing, but not excessive).
Effect of personality traits through life:
Further studies have linked professional burnout to neuroticism, and extraversion to enduring positive work experience. When it comes to making money, research has suggested that those who are high in agreeableness (especially men) are not as successful at accumulating income.

Some research suggests that vocational outcomes are correlated with Big Five personality traits. Conscientiousness predicts job performance in general and is ranked highest for overall job performance; further research has categorized the Big Five behaviors into three perspectives: task performance, organizational citizenship behavior, and counterproductive work behavior. Task performance is the set of activities that a worker is hired to complete, and results showed that Extraversion ranked second after Conscientiousness, with Emotional Stability and Agreeableness tied for third. For organizational citizenship behavior, which is relatively less tied to the specific task core but benefits an organization by contributing to its social and psychological environment, Agreeableness and Emotional Stability ranked second and third. Lastly, Agreeableness tied with Conscientiousness as the top-ranked trait for counterproductive work behavior, which refers to intentional behavior that is counter to the legitimate interests of the organization or its members.

In addition, research has demonstrated that agreeableness is negatively related to salary. Those high in agreeableness make less, on average, than those low in the same trait. Neuroticism is also negatively related to salary, while conscientiousness and extraversion are positive predictors of salary. Occupational self-efficacy has also been shown to be positively correlated with conscientiousness and negatively correlated with neuroticism. Significant predictors of career-advancement goals are extraversion, conscientiousness, and agreeableness. Some research has also suggested that the conscientiousness of a supervisor is positively associated with an employee's perception of abusive supervision, while others have suggested that low agreeableness and high neuroticism are the traits more related to abusive supervision.

A 2019 study of Canadian adults found conscientiousness to be positively associated with wages, while agreeableness, extraversion, and neuroticism were negatively associated with wages. In the United States, by contrast, no negative correlation between extraversion and wages has been found. Also, the magnitudes found for agreeableness and conscientiousness in this study were higher for women than for men (i.e., there was a higher negative penalty for greater agreeableness in women, as well as a higher positive reward for greater conscientiousness).

Research designed to investigate the individual effects of Big Five personality traits on work performance, via worker-completed surveys and supervisor ratings of work performance, has implicated individual traits in several different work role performances. A "work role" is defined as the responsibilities an individual has while they are working. Nine work roles have been identified, which can be classified in three broader categories: proficiency (the ability of a worker to effectively perform their work duties), adaptivity (a worker's ability to change working strategies in response to changing work environments), and proactivity (the extent to which a worker will spontaneously put forth effort to change the work environment).
These three categories of behavior can then be directed towards three different levels: either the individual, team, or organizational level leading to the nine different work role performance possibilities.
Effect of personality traits through life:
Openness is positively related to proactivity at the individual and the organizational levels and is negatively related to team and organizational proficiency. These effects were found to be completely independent of one another. This pattern also runs counter to, and correlates negatively with, Conscientiousness.
Agreeableness is negatively related to individual task proactivity. Typically this is associated with lower career success and being less able to cope with conflict. That said, attributes related to Agreeableness are important for workforce readiness for a variety of occupations and performance criteria.
Extraversion is negatively related to individual task proficiency. It is associated with higher job and life satisfaction but also with more impulsive behaviors.
Conscientiousness is positively related to all forms of work role performance. It is associated with higher leadership effectiveness and fewer deviant behaviors, but also with lower learning in skill acquisition.
Effect of personality traits through life:
Neuroticism is negatively related to all forms of work role performance and is associated with a tendency to engage in more risky behaviors.

Two theories have been integrated in an attempt to account for these differences in work role performance. Trait activation theory posits that within a person, trait levels predict future behavior, that trait levels differ between people, and that work-related cues activate traits, which leads to work-relevant behaviors. Role theory suggests that role senders provide cues to elicit desired behaviors. In this context, role senders (i.e., supervisors, managers, etc.) provide workers with cues for expected behaviors, which in turn activate personality traits and work-relevant behaviors. In essence, expectations of the role sender lead to different behavioral outcomes depending on the trait levels of individual workers, and because people differ in trait levels, responses to these cues will not be universal.
Effect of personality traits through life:
Romantic relationships The Big Five model of personality has been used in attempts to predict satisfaction in romantic relationships and relationship quality in dating, engaged, and married couples.

Dating couples: Self-reported relationship quality is negatively related to partner-reported neuroticism and positively related to both self- and partner-reported conscientiousness.

Engaged couples: Self-reported relationship quality was higher among those high in partner-reported openness, agreeableness and conscientiousness.
Effect of personality traits through life:
Self-reported relationship quality was higher among those high in self-reported extraversion and agreeableness.
Self-reported relationship quality is negatively related to both self- and partner-reported neuroticism.
Observers rated the relationship quality higher if the participating partner's self-reported extraversion was high.

Married couples: High self-reported neuroticism, extraversion, and agreeableness are related to high levels of self-reported relationship quality. Partner-reported agreeableness is related to observed relationship quality.

These reports are, however, rare and not conclusive.
Effect of personality traits through life:
Political identification The Big Five Personality Model also has applications in the study of political psychology. Studies have found links between the Big Five personality traits and political identification. Several studies have found that individuals who score high in Conscientiousness are more likely to possess a right-wing political identification. On the opposite end of the spectrum, a strong correlation was identified between high scores in Openness to Experience and a left-leaning ideology. While the traits of agreeableness, extraversion, and neuroticism have not been consistently linked to either conservative or liberal ideology (studies have produced mixed results), these traits are promising when analyzing the strength of an individual's party identification. However, correlations between the Big Five and political beliefs, while present, tend to be small, with one study finding that correlations ranged from 0.14 to 0.24.
Effect of personality traits through life:
Scope of predictive power The predictive effects of the Big Five personality traits relate mostly to social functioning and rule-driven behavior and are not very specific for the prediction of particular aspects of behavior. For example, temperament researchers have noted that high neuroticism precedes the development of all common mental disorders and is not associated with personality. Further evidence is required to fully uncover the nature of, and differences between, personality traits, temperament and life outcomes. Social and contextual parameters also play a role in outcomes, and the interaction between the two is not yet fully understood.
Effect of personality traits through life:
Religiosity Though the effect sizes are small, of the Big Five personality traits, high Agreeableness, Conscientiousness and Extraversion relate to general religiosity, while Openness relates negatively to religious fundamentalism and positively to spirituality. High Neuroticism may be related to extrinsic religiosity, whereas intrinsic religiosity and spirituality reflect Emotional Stability.
Measurements:
Several measures of the Big Five exist:
International Personality Item Pool (IPIP).
NEO-PI-R.
The Ten-Item Personality Inventory (TIPI) and the Five Item Personality Inventory (FIPI), which are very abbreviated rating forms of the Big Five personality traits.
Measurements:
Self-descriptive sentence questionnaires.
Lexical questionnaires.
Self-report questionnaires.
Relative-scored Big 5 measures.

The most frequently used measures of the Big Five comprise either items that are self-descriptive sentences or, in the case of lexical measures, items that are single adjectives. Due to the length of sentence-based and some lexical measures, short forms have been developed and validated for use in applied research settings where questionnaire space and respondent time are limited, such as the 40-item balanced International English Big-Five Mini-Markers or a very brief (10-item) measure of the Big Five domains. Research has suggested that some methodologies in administering personality tests are inadequate in length and provide insufficient detail to truly evaluate personality. Usually, longer, more detailed questions will give a more accurate portrayal of personality. The five-factor structure has been replicated in peer reports. However, many of the substantive findings rely on self-reports.
Measurements:
Much of the evidence on the measures of the Big 5 relies on self-report questionnaires, which makes self-report bias and falsification of responses difficult to deal with and account for. It has been argued that the Big Five tests do not create an accurate personality profile because the responses given on these tests are not true in all cases and can be falsified. For example, questionnaires are answered by potential employees who might choose answers that paint them in the best light.

Research suggests that a relative-scored Big Five measure in which respondents had to make repeated choices between equally desirable personality descriptors may be a potential alternative to traditional Big Five measures in accurately assessing personality traits, especially when lying or biased responding is present. When compared with a traditional Big Five measure for its ability to predict GPA and creative achievement under both normal and "fake good"-bias response conditions, the relative-scored measure significantly and consistently predicted these outcomes under both conditions; however, the Likert questionnaire lost its predictive ability in the faking condition. Thus, the relative-scored measure proved to be less affected by biased responding than the Likert measure of the Big Five.
Measurements:
Andrew H. Schwartz analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age.
Critique:
The proposed Big Five model has been subjected to considerable critical scrutiny in a number of published studies. One prominent critic of the model has been Jack Block at the University of California, Berkeley. In response to Block, the model was defended in a paper published by Costa and McCrae. This was followed by a number of published critical replies from Block.

It has been argued that there are limitations to the scope of the Big Five model as an explanatory or predictive theory. It has also been argued that measures of the Big Five account for only 56% of the normal personality trait sphere alone (not even considering the abnormal personality trait sphere). Also, the static Big Five is not theory-driven; it is merely a statistically driven investigation of certain descriptors that tend to cluster together, often based on less-than-optimal factor analytic procedures.: 431–33

Measures of the Big Five constructs appear to show some consistency in interviews, self-descriptions and observations, and this static five-factor structure seems to be found across a wide range of participants of different ages and cultures. However, while genotypic temperament trait dimensions might appear across different cultures, the phenotypic expression of personality traits differs profoundly across different cultures as a function of the different socio-cultural conditioning and experiential learning that takes place within different cultural settings.

Moreover, the fact that the Big Five model was based on the lexical hypothesis (i.e., on the verbal descriptors of individual differences) points to strong methodological flaws in this model, especially related to its main factors, Extraversion and Neuroticism. First, there is a natural pro-social bias of language in people's verbal evaluations. After all, language is an invention of group dynamics that was developed to facilitate socialization and the exchange of information and to synchronize group activity. This social function of language therefore creates a sociability bias in verbal descriptors of human behavior: there are more words related to social than to physical or even mental aspects of behavior. The sheer number of such descriptors will cause them to group into the largest factor in any language, and such grouping has nothing to do with the way that core systems of individual differences are set up. Second, there is also a negativity bias in emotionality (i.e., most emotions have negative affectivity), and there are more words in language to describe negative rather than positive emotions. Such asymmetry in emotional valence creates another bias in language. Experiments using the lexical hypothesis approach indeed demonstrated that the use of lexical material skews the resulting dimensionality according to a sociability bias of language and a negativity bias of emotionality, grouping all evaluations around these two dimensions. This means that the two largest dimensions in the Big Five model might be just an artifact of the lexical approach that this model employed.
Critique:
Limited scope One common criticism is that the Big Five does not explain all of human personality. Some psychologists have dissented from the model precisely because they feel it neglects other domains of personality, such as religiosity, manipulativeness/machiavellianism, honesty, sexiness/seductiveness, thriftiness, conservativeness, masculinity/femininity, snobbishness/egotism, sense of humour, and risk-taking/thrill-seeking. Dan P. McAdams has called the Big Five a "psychology of the stranger", because they refer to traits that are relatively easy to observe in a stranger; other aspects of personality that are more privately held or more context-dependent are excluded from the Big Five.

There may be debate as to what counts as personality and what does not, and the nature of the questions in the survey greatly influences the outcome. Multiple particularly broad question databases have failed to produce the Big Five as the top five traits.

In many studies, the five factors are not fully orthogonal to one another; that is, the five factors are not independent. Orthogonality is viewed as desirable by some researchers because it minimizes redundancy between the dimensions. This is particularly important when the goal of a study is to provide a comprehensive description of personality with as few variables as possible.
Critique:
Methodological issues Factor analysis, the statistical method used to identify the dimensional structure of observed variables, lacks a universally recognized basis for choosing among solutions with different numbers of factors. A five-factor solution depends on some degree of interpretation by the analyst. A larger number of factors may underlie these five factors. This has led to disputes about the "true" number of factors. Big Five proponents have responded that although other solutions may be viable in a single data set, only the five-factor structure consistently replicates across different studies.

Surveys in studies are often online surveys of college students. Results do not always replicate when run on other populations or in other languages.

Moreover, the factor analysis that this model is based on is a linear method incapable of capturing nonlinear, feedback and contingent relationships between core systems of individual differences.
Critique:
Theoretical status A frequent criticism is that the Big Five is not based on any underlying theory; it is merely an empirical finding that certain descriptors cluster together under factor analysis. Although this does not mean that these five factors do not exist, the underlying causes behind them are unknown.
Jack Block's final published work before his death in January 2010 drew together his lifetime perspective on the five-factor model. He summarized his critique of the model in terms of: the atheoretical nature of the five-factors.
their "cloudy" measurement.
the model's inappropriateness for studying early childhood.
the use of factor analysis as the exclusive paradigm for conceptualizing personality.
the continuing non-consensual understandings of the five-factors.
the existence of unrecognized but successful efforts to specify aspects of character not subsumed by the five-factors.

He went on to suggest that repeatedly observed higher-order factors hierarchically above the proclaimed Big Five personality traits may promise deeper biological understanding of the origins and implications of these superfactors.
Critique:
Evidence for six factors rather than five It has been noted that even though early lexical studies in the English language indicated five large groups of personality traits, more recent, and more comprehensive, cross-language studies have provided evidence for six large groups rather than five, with the sixth factor being Honesty-Humility. These six groups form the basis of the HEXACO model of personality structure. Based on these findings it has been suggested that the Big Five system should be replaced by HEXACO, or revised to better align with lexical evidence. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Marriage proposal planner**
Marriage proposal planner:
A marriage proposal planner is a professional event coordinator who specializes in planning marriage proposals. A proposal planner is a relatively new profession in the wedding industry. Proposal planners suggest marriage proposal ideas, scout proposal locations, negotiate rates with vendors, draw up contracts, hire photographers, create romantic setups, acquire permits, and help clients choose engagement rings. Proposal planners interview the proposer and ask questions about the couple. They then use those answers to create a unique proposal idea. The mass media and social media are partly responsible for the emergence of proposal planners. There are fancy proposals on TV, and women's expectations are rising. Men have trouble meeting those expectations without some help. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Quantization (physics)**
Quantization (physics):
In physics, quantisation (in American English quantization) is the systematic transition procedure from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. It is a procedure for constructing quantum mechanics from classical mechanics. A generalization involving infinite degrees of freedom is field quantization, as in the "quantization of the electromagnetic field", referring to photons as field "quanta" (for instance as light quanta). This procedure is basic to theories of atomic physics, chemistry, particle physics, nuclear physics, condensed matter physics, and quantum optics.
Historical overview:
In 1901, when Max Planck was developing the distribution function of statistical mechanics to solve the ultraviolet catastrophe problem, he realized that the properties of blackbody radiation can be explained by the assumption that the amount of energy must come in countable fundamental units, i.e., that energy is not continuous but discrete. That is, a minimum unit of energy exists and the following relationship holds: E = hν for frequency ν. Here, h is called Planck's constant, which represents the magnitude of the quantum mechanical effect. This amounted to a fundamental change in the mathematical model of physical quantities.
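As a quick numerical illustration of Planck's relation (a sketch, not from the original text; the frequency value below is an assumed visible-light example), a few lines of Python suffice:

```python
# Illustrative sketch: energy of one quantum via Planck's relation E = h * nu.
# The frequency value is an assumed example (roughly green light), not taken
# from the article.
h = 6.626e-34            # Planck's constant in J*s
nu = 5.45e14             # frequency in Hz (~green light)
E = h * nu               # energy of a single quantum in joules

print(f"E = {E:.3e} J (about {E / 1.602e-19:.2f} eV)")
```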
Historical overview:
In 1905, Albert Einstein published a paper, "On a heuristic viewpoint concerning the emission and transformation of light", which explained the photoelectric effect in terms of quantized electromagnetic waves. The energy quantum referred to in this paper was later called the "photon". In July 1913, Niels Bohr used quantization to describe the spectrum of a hydrogen atom in his paper "On the constitution of atoms and molecules". The preceding theories were successful, but they were very phenomenological. However, the French mathematician Henri Poincaré first gave a systematic and rigorous definition of what quantization is in his 1912 paper "Sur la théorie des quanta". The term "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics (1931).
Canonical quantization:
Canonical quantization develops quantum mechanics from classical mechanics. One introduces a commutation relation among canonical coordinates. Technically, one converts coordinates to operators, through combinations of creation and annihilation operators. The operators act on quantum states of the theory. The lowest energy state is called the vacuum state.
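As a minimal sketch of these steps (assuming, for concreteness, a single degree of freedom with mass m in a harmonic potential of frequency ω, details not specified in the text above), the replacement of the Poisson bracket by a commutator and the introduction of creation and annihilation operators can be written as:

```latex
% Sketch of canonical quantization for one degree of freedom.
% The classical Poisson bracket becomes a commutator:
\{x, p\} = 1 \quad\longrightarrow\quad [\hat{x}, \hat{p}] = i\hbar

% Creation and annihilation operators (harmonic-oscillator conventions,
% with assumed mass m and frequency \omega):
\hat{a} = \sqrt{\tfrac{m\omega}{2\hbar}}\Bigl(\hat{x} + \tfrac{i}{m\omega}\hat{p}\Bigr),
\qquad
\hat{a}^{\dagger} = \sqrt{\tfrac{m\omega}{2\hbar}}\Bigl(\hat{x} - \tfrac{i}{m\omega}\hat{p}\Bigr),
\qquad
[\hat{a}, \hat{a}^{\dagger}] = 1

% The vacuum (lowest-energy) state is annihilated by \hat{a}:
\hat{a}\,|0\rangle = 0
```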
Quantization schemes:
Even within the setting of canonical quantization, there is difficulty associated to quantizing arbitrary observables on the classical phase space. This is the ordering ambiguity: Classically, the position and momentum variables x and p commute, but their quantum mechanical operator counterparts do not. Various quantization schemes have been proposed to resolve this ambiguity, of which the most popular is the Weyl quantization scheme. Nevertheless, the Groenewold–van Hove theorem dictates that no perfect quantization scheme exists. Specifically, if the quantizations of x and p are taken to be the usual position and momentum operators, then no quantization scheme can perfectly reproduce the Poisson bracket relations among the classical observables. See Groenewold's theorem for one version of this result.
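As a small worked illustration of the ordering problem (the classical observable xp is chosen here purely as an example), the non-commuting operators admit several inequivalent orderings, and Weyl (symmetric) ordering picks the symmetrized one:

```latex
% The classical product xp has no unique operator counterpart, since
% \hat{x}\hat{p} \neq \hat{p}\hat{x}:
x\,p \;\longrightarrow\; \hat{x}\hat{p}, \quad \hat{p}\hat{x}, \quad
\tfrac{1}{2}\bigl(\hat{x}\hat{p} + \hat{p}\hat{x}\bigr), \;\ldots

% The candidates differ by terms of order \hbar, because
\hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar

% Weyl quantization selects the symmetric ordering:
W(xp) = \tfrac{1}{2}\bigl(\hat{x}\hat{p} + \hat{p}\hat{x}\bigr)
```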
Covariant canonical quantization:
There is a way to perform canonical quantization without having to resort to the non-covariant approach of foliating spacetime and choosing a Hamiltonian. This method is based upon a classical action, but is different from the functional integral approach.
Covariant canonical quantization:
The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge "flows"). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ℏ-deformed in the same way as in canonical quantization.
Covariant canonical quantization:
In quantum field theory, there is also a way to quantize actions with gauge "flows". It involves the Batalin–Vilkovisky formalism, an extension of the BRST formalism.
Deformation quantization:
One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions.
Deformation quantization:
More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory. For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular-momentum-squared operator, but it further contains a constant term 3ħ²/2. (This extra term offset is pedagogically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom, even though the standard QM ground state of the atom has vanishing l.) As a mere representation change, however, Weyl's map is useful and important, as it underlies the alternate equivalent phase-space formulation of conventional quantum mechanics.
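For orientation, the lowest orders of the star product on flat phase space (the Moyal product; written here as a generic sketch rather than the specific construction discussed above) show how the Poisson bracket appears as the first-order deformation:

```latex
% Moyal star product on flat phase space, expanded in powers of \hbar;
% \{f, g\} denotes the classical Poisson bracket.
(f \star g)(x, p) \;=\; f g \;+\; \frac{i\hbar}{2}\,\{f, g\} \;+\; O(\hbar^{2})

% Its antisymmetrized form reproduces the Poisson bracket to leading order:
f \star g - g \star f \;=\; i\hbar\,\{f, g\} \;+\; O(\hbar^{3})
```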
Geometric quantization:
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
Geometric quantization:
A more geometric approach to quantization, in which the classical phase space can be a general symplectic manifold, was developed in the 1970s by Bertram Kostant and Jean-Marie Souriau. The method proceeds in two stages. First, one constructs a "prequantum Hilbert space" consisting of square-integrable functions (or, more properly, sections of a line bundle) over the phase space. Here one can construct operators satisfying commutation relations corresponding exactly to the classical Poisson-bracket relations. On the other hand, this prequantum Hilbert space is too big to be physically meaningful. One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space.
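A standard way to write the prequantum operators mentioned above is the Kostant–Souriau prescription (shown here as a sketch; sign and normalization conventions vary between references), which assigns to each classical observable f an operator acting on sections of the prequantum line bundle:

```latex
% Kostant–Souriau prequantization: f acts through the covariant derivative
% along its Hamiltonian vector field X_f (conventions may differ by a sign).
\hat{Q}(f) \;=\; -\,i\hbar\,\nabla_{X_{f}} \;+\; f

% These operators reproduce the Poisson-bracket algebra exactly:
\bigl[\hat{Q}(f),\, \hat{Q}(g)\bigr] \;=\; i\hbar\,\hat{Q}\bigl(\{f, g\}\bigr)
```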
Path integral quantization:
A classical mechanical theory is given by an action with the permissible configurations being the ones which are extremal with respect to functional variations of the action. A quantum-mechanical description of the classical system can also be constructed from the action of the system by means of the path integral formulation.
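Schematically (a sketch of the standard single-particle formula; boundary conditions and the definition of the measure are suppressed), the transition amplitude is written as a sum over all paths weighted by the classical action:

```latex
% Path integral quantization: amplitude as a sum over histories,
% each weighted by exp(i S / \hbar).
\langle x_{f}, t_{f} \mid x_{i}, t_{i} \rangle
\;=\; \int_{x(t_{i}) = x_{i}}^{x(t_{f}) = x_{f}} \!\mathcal{D}x(t)\;
      e^{\,i S[x]/\hbar},
\qquad
S[x] \;=\; \int_{t_{i}}^{t_{f}} L\bigl(x(t), \dot{x}(t)\bigr)\,dt
```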
Other types:
Loop quantum gravity (loop quantization).
Uncertainty principle (quantum statistical mechanics approach).
Schwinger's quantum action principle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Linguistic performance**
Linguistic performance:
The term linguistic performance was used by Noam Chomsky in 1960 to describe "the actual use of language in concrete situations". It is used to describe both the production, sometimes called parole, as well as the comprehension of language. Performance is defined in opposition to "competence"; the latter describes the mental knowledge that a speaker or listener has of language.

Part of the motivation for the distinction between performance and competence comes from speech errors: despite having a perfect understanding of the correct forms, a speaker of a language may unintentionally produce incorrect forms. This is because performance occurs in real situations, and so is subject to many non-linguistic influences. For example, distractions or memory limitations can affect lexical retrieval (Chomsky 1965:3), and give rise to errors in both production and perception. Such non-linguistic factors are completely independent of the actual knowledge of language, and establish that speakers' knowledge of language (their competence) is distinct from their actual use of language (their performance).
Background:
Langue versus parole Published in 1916, Ferdinand de Saussure's Course in General Linguistics describes language as "a system of signs that express ideas". de Saussure describes two components of language: langue and parole. Langue consists of the structural relations that define a language, which includes grammar, syntax and phonology. Parole is the physical manifestation of signs; in particular the concrete manifestation of langue as speech or writing. While langue can be viewed strictly as a system of rules, it is not an absolute system such that parole must utterly conform to langue. Drawing an analogy to chess, de Saussure compares langue to the rules of chess that define how the game should be played, and parole to the individual choices of a player given the possible moves allowed within the system of rules.
Background:
Competence versus performance Proposed in the 1950s by Noam Chomsky, generative grammar is an analysis approach to language as a structural framework of the human mind. Through formal analysis of components such as syntax, morphology, semantics and phonology, a generative grammar seeks to model the implicit linguistic knowledge with which speakers determine grammaticality.
Background:
In transformational generative grammar theory, Chomsky distinguishes between two components of language production: competence and performance. Competence describes the mental knowledge of a language, the speaker's intrinsic understanding of sound-meaning relations as established by linguistic rules. Performance – that is the actual observed use of language – involves more factors than phonetic-semantic understanding. Performance requires extra-linguistic knowledge such as an awareness of the speaker, audience and the context, which crucially determines how speech is constructed and analyzed. It is also governed by principles of cognitive structures not considered aspects of language, such as memory, distractions, attention, and speech errors.
Background:
I-Language versus E-Language In 1986, Chomsky proposed a distinction similar to the competence/performance distinction, entertaining the notion of an I-Language (internal language), which is the intrinsic linguistic knowledge within a native speaker, and E-Language (external language), which is the observable linguistic output of a speaker. It was I-Language that Chomsky argued should be the focus of inquiry, not E-Language.

E-language has been used to describe the application of artificial systems, such as in calculus and set theory, and with natural language viewed as sets, while performance has been used purely to describe applications of natural language. As for I-Language and competence, I-Language refers to our intrinsic faculty for language, whereas competence is used by Chomsky as an informal, general term, or as a term with reference to a specific competency such as "grammatical competence" or "pragmatic competence".
Performance-grammar correspondence hypothesis:
John A. Hawkins's Performance-Grammar Correspondence Hypothesis (PGCH) states that the syntactic structures of grammars are conventionalized based on whether, and how much, the structures are preferred in performance. Performance preference is related to structural complexity and to processing, or comprehension, efficiency. Specifically, a complex structure is one containing more linguistic elements or words at the end of the structure than at the beginning. This structural complexity results in decreased processing efficiency, since more structure requires additional processing. The model seeks to explain word order across languages based on avoidance of unnecessary complexity in favour of increased processing efficiency. Speakers make an automatic calculation of the Immediate Constituent (IC)-to-word ratio and produce the structure with the highest ratio. Structures with a high IC-to-word ratio are structures that contain the fewest words required for the listener to parse the structure into constituents, which results in more efficient processing.
Performance-grammar correspondence hypothesis:
Head-initial structures In head-initial structures, which include, for example, SVO and VSO word orders, the speaker's goal is to order the sentence constituents from least to most complex.
Performance-grammar correspondence hypothesis:
SVO word order SVO word order can be exemplified with English; consider the example sentences in (1). In (1a) three immediate constituents (ICs) are present in the verb phrase, namely the verb, PP1 and PP2, and four words (went, to, London, in) are required to parse the VP into its constituents. Therefore, the IC-to-word ratio is 3/4 = 75%. In contrast, in (1b) the VP is still composed of three ICs, but six words are now required to determine the constituent structure of the VP (went, in, the, late, afternoon, to). Thus, the ratio for (1b) is 3/6 = 50%. Hawkins proposes that speakers prefer to produce (1a) since it has a higher IC-to-word ratio, and this leads to faster and more efficient processing.
Performance-grammar correspondence hypothesis:
1a. John [VP went [PP1 to London] [PP2 in the late afternoon]]
1b. John [VP went [PP2 in the late afternoon] [PP1 to London]]
Hawkins supports the above analysis by providing performance data to demonstrate the preference speakers have for ordering short phrases before long phrases when producing head-initial structures. The table based on English data, below, illustrates that the short prepositional phrase (PP1) is preferentially ordered before the long PP (PP2) and that this preference increases as the size differential between the two PPs increases. For example, 60% of the sentences are ordered short (PP1) to long (PP2) when PP2 was longer than PP1 by 1 word. In contrast, 99% of the sentences are ordered short to long when PP2 is longer than PP1 by 7+ words.
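The IC-to-word ratios in examples (1a) and (1b) can be reproduced with a few lines of arithmetic. The sketch below is a minimal illustration rather than Hawkins's own formulation: it assumes a head-initial VP whose constituent recognition domain runs from the verb through the first word (the preposition) of the last PP, so that every non-final PP must be traversed in full while the final PP is recognized at its first word.

```python
def ic_to_word_ratio(verb_words, pp_lengths):
    """IC-to-word ratio for a head-initial VP.

    verb_words: number of words in the verb (usually 1).
    pp_lengths: word counts of the PPs in the order they are produced.
    """
    # Words needed to recognize all ICs: the verb, every word of each
    # non-final PP, and the first word (the preposition) of the final PP.
    recognition_domain = verb_words + sum(pp_lengths[:-1]) + 1
    num_ics = 1 + len(pp_lengths)  # the verb plus each PP
    return num_ics / recognition_domain

# (1a) "went [to London] [in the late afternoon]" -> 3/4 = 0.75
print(ic_to_word_ratio(1, [2, 4]))
# (1b) "went [in the late afternoon] [to London]" -> 3/6 = 0.50
print(ic_to_word_ratio(1, [4, 2]))
```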
Performance-grammar correspondence hypothesis:
English prepositional phrase orderings by relative weight. PP2 = longer PP; PP1 = shorter PP. Proportion of short-long to long-short orders given as a percentage; actual numbers of sequences in parentheses. An additional 71 sequences had PPs of equal length (total n = 394).
VSO word order Hawkins argues that the preference for short phrases followed by long phrases applies to all languages that have head-initial structuring. This includes languages with VSO word order, such as Hungarian. By calculating the IC-to-word ratio for the Hungarian sentences in the same way as was done for the English sentences, 2a. emerges as having a higher ratio than 2b.
Performance-grammar correspondence hypothesis:
2a. VP[Döngetik NP[facipöink] NP[az utcakat]]
batter wooden-shoes-1PL the streets-ACC
'Our wooden shoes batter the streets'
2b. VP[Döngetik NP[az utcakat] NP[facipöink]]
The Hungarian performance data (below) show the same preference pattern as the English data. This study looked at the ordering of two successive noun phrases (NPs) and found that the shorter NP followed by the longer NP is preferred in performance, and that this preference increases as the size differential between NP1 and NP2 increases.
Performance-grammar correspondence hypothesis:
Hungarian noun phrase orderings by relative weight mNP = any NP constructed on its left periphery. NP2 = longer NP; NP1 = shorter NP. Proportion of short-long to long-short given as a percentage; actual numbers of sequences given in parentheses. An additional 21 sequences had NPs of equal length (total n = 16).
Performance-grammar correspondence hypothesis:
Head-final structures Hawkins' explanation of performance and word order extends to head-final structures. For example, since Japanese is an SOV language, the head (V) is at the end of the sentence. This theory predicts that speakers will prefer to order the phrases in head-final sentences from long to short, as opposed to short to long as seen in head-initial languages. This reversal of ordering preference is due to the fact that in head-final sentences it is the long-before-short phrasal ordering that has the higher IC-to-word ratio.
Performance-grammar correspondence hypothesis:
3a. Tanaka ga VP[PP[Hanako kara] NP[sono hon o] katta]
Tanaka NOM Hanako from that book ACC bought
'Tanaka bought that book from Hanako'
3b. Tanaka ga VP[NP[sono hon o] PP[Hanako kara] katta]
The VP and its constituents in 3. are constructed from their heads on the right. This means that the number of words used to calculate the ratio is counted from the head of the first phrase (the PP in 3a. and the NP in 3b.) to the verb. The IC-to-word ratio for the VP in 3a. is 3/5 = 60%, while the ratio for the VP in 3b. is 3/4 = 75%. Therefore, 3b. should be preferred by Japanese speakers since it has a higher IC-to-word ratio, which leads to faster parsing of sentences by the listener. The performance preference for long-before-short phrase ordering in SOV languages is supported by performance data. The table below shows that production of long before short phrases is preferred and that this preference increases as the size of the differential between the two phrases increases. For example, ordering of the longer 2ICm (where ICm is either a direct object NP with an accusative case particle or a PP constructed from the right periphery) before the shorter 1ICm is more frequent, and the frequency increases to 91% when the 2ICm is longer than the 1ICm by 9+ words.
Performance-grammar correspondence hypothesis:
Japanese NPo and PPm orderings by relative weight. NPo = direct object NP with accusative case particle. PPm = PP constructed on its right periphery by a P(ostposition). ICm = either NPo or PPm. 2IC = longer IC; 1IC = shorter IC. Proportion of long-short to short-long orders given as a percentage; actual numbers of sequences in parentheses. An additional 91 sequences had ICs of equal length (total n = 244).
Utterance planning hypothesis:
Tom Wasow proposes that word order arises as a result of utterance planning benefiting the speaker. He introduces the concepts of early versus late commitment, where commitment is the point in the utterance where it becomes possible to predict subsequent structure. Specifically, early commitment refers to the commitment point present earlier in the utterance and late commitment refers to the commitment point present later in the utterance. He explains that early commitment will favour the listener since early prediction of subsequent structure enables faster processing. Comparatively, late commitment will favour the speaker by postponing decision making, giving the speaker more time to plan the utterance. Wasow illustrates how utterance planning influences syntactic word order by testing early versus late commitment in heavy-NP shifted (HNPS) sentences. The idea is to examine the patterns of HNPS to determine if the performance data show sentences that are structured to favour the speaker or the listener.
Utterance planning hypothesis:
Examples of early/late commitment and heavy-NP shift The following examples illustrate what is meant by early versus late commitment and how heavy-NP shift applies to these sentences. Wasow looked at two types of verbs: Vt (transitive verbs), which require NP objects.
Utterance planning hypothesis:
4a. Pat VP[brought NP[a box with a ribbon around it] PP[to the party]]
4b. Pat VP[brought PP[to the party] NP[a box with a ribbon around it]]
In 4a. no heavy-NP shift has been applied. The NP is available early but does not provide any additional information about the sentence structure – the "to" appearing late in the sentence is an example of late commitment. In contrast, in 4b., where heavy-NP shift has shifted the NP to the right, as soon as "to" is uttered the listener knows that the VP must contain the NP and a PP. In other words, when "to" is uttered it allows the listener to predict the remaining structure of the sentence early on. Thus for transitive verbs HNPS results in early commitment and favors the listener.
Utterance planning hypothesis:
Vp (prepositional verbs): can take an NP object or an immediately following PP with no NP object.
5a. Pat VP[wrote NP[something about Chris] PP[on the blackboard]]
Utterance planning hypothesis:
5b. Pat VP[wrote PP[on the blackboard] NP[something about Chris]]
No HNPS has been applied in 5a. In 5b. the listener needs to hear the word "something" in order to know that the utterance contains a PP and an NP, since the object NP is optional and "something" has been shifted to later in the sentence. Thus for prepositional verbs HNPS results in late commitment and favours the speaker.
Utterance planning hypothesis:
Predictions and findings Based on the above information Wasow predicted that if sentences are constructed from the speaker's perspective then heavy-NP shift would rarely apply to sentences containing a transitive verb but would apply frequently to sentences containing a prepositional verb. The opposite prediction was made if sentences are constructed from the listener's perspective.
Utterance planning hypothesis:
To test his predictions Wasow analyzed performance data (from corpus data) for the rates of occurrence of HNPS with Vt and Vp, and found that HNPS occurred twice as frequently with Vp as with Vt, therefore supporting the predictions made from the speaker's perspective. In contrast, he did not find evidence in support of the predictions made based on the listener's perspective. In other words, given the data above, when HNPS is applied to sentences containing a transitive verb the result favors the listener; Wasow found that HNPS applied to transitive verb sentences is rare in performance data, thus supporting the speaker's perspective. Additionally, when HNPS is applied to prepositional verb structures the result favors the speaker; in his study of the performance data, Wasow found HNPS frequently applied to prepositional verb structures, further supporting the speaker's perspective. Based on these findings Wasow concludes that HNPS is correlated with the speaker's preference for late commitment, thereby demonstrating how speaker performance preference can influence word order.
Alternative grammar models:
While the dominant views of grammar are largely oriented towards competence, many, including Chomsky himself, have argued that a complete model of grammar should be able to account for performance data. But while Chomsky argues that competence should be studied first, thereby allowing further study of performance, some systems, such as constraint grammars, are built with performance as a starting point (comprehension, in the case of constraint grammars). While traditional models of generative grammar have had a great deal of success in describing the structure of languages, they have been less successful in describing how language is interpreted in real situations. For example, traditional grammar describes a sentence as having an "underlying structure" which is different from the "surface structure" which speakers actually produce. In a real conversation, however, a listener interprets the meaning of a sentence in real time, as the surface structure goes by. This kind of on-line processing, which accounts for phenomena such as finishing another person's sentence, and starting a sentence without knowing how it is going to finish, is not directly accounted for in traditional generative models of grammar. Several alternative grammar models exist which may be better able to capture this surface-based aspect of linguistic performance, including Constraint Grammar, Lexical Functional Grammar, and Head-driven phrase structure grammar.
Errors in linguistic performance:
Errors in linguistic performance occur not only in children newly acquiring their native language, second language learners, and those with a disability or an acquired brain injury, but among competent speakers as well. The types of performance errors that are the focus here involve errors in syntax; other types of errors can occur in the phonological or semantic features of words (for further information, see speech errors). Phonological and semantic errors can be due to the repetition of words, mispronunciations, limitations in verbal working memory, and the length of the utterance. Slips of the tongue are most common in spoken languages and occur when the speaker either says something they did not mean to, produces the incorrect order of sounds or words, or uses the incorrect word. Other instances of errors in linguistic performance are slips of the hand in signed languages, slips of the ear, which are errors in comprehension of utterances, and slips of the pen, which occur while writing. Errors of linguistic performance are perceived by both the speaker and the listener and can therefore have many interpretations depending on the person's judgement and the context in which the sentence was spoken. It is proposed that there is a close relation between the linguistic units of grammar and the psychological units of speech, which implies that there is a relation between linguistic rules and the psychological processes that create utterances. Errors in performance can occur at any level of these psychological processes. Lise Menn proposes that there are five levels of processing in speech production, each with its own possible error that could occur. According to the speech processing structure proposed by Menn, an error in the syntactic properties of an utterance occurs at the positional level.
Errors in linguistic performance:
Menn's proposed levels are: Message Level, Functional Level, Positional Level, Phonological Encoding, and Speech Gesture. Another proposal for the levels of speech processing, made by Willem J. M. Levelt, is structured as: Conceptualization, Formulation, Articulation, and Self-Monitoring. Levelt (1993) states that we as speakers are unaware of most of these levels of performance, such as articulation, which includes the movement and placement of the articulators, and the formulation of the utterance, which includes the words selected, their pronunciation, and the rules which must be followed for the utterance to be grammatical. The levels speakers are consciously aware of are the intent of the message, which occurs at the level of conceptualization, and self-monitoring, which is when the speaker becomes aware of any errors that may have occurred and corrects themselves.
Errors in linguistic performance:
Slips of the tongue One type of slip of the tongue that causes an error in the syntax of the utterance is the transformational error. Transformations are mental operations proposed by Chomsky in his Transformational Hypothesis; they have three parts in which errors in performance can occur. These transformations are applied at the level of the underlying structures and predict the ways in which an error can occur.
Errors in linguistic performance:
The three parts are structural analysis, structural change, and conditions. Structural analysis errors can occur due to (a) the rule misanalyzing the tense marker, causing the rule to apply incorrectly, (b) a rule not being applied when it should be, or (c) a rule being applied when it should not be.
Errors in linguistic performance:
This example from Fromkin (1980) demonstrates a rule misanalyzing the tense marker, causing subject-auxiliary inversion to apply incorrectly. Subject-auxiliary inversion is misanalyzed as to the structure to which it applies, applying without the tensed verb be moving to the C position. This causes "do-support" to occur and leaves the verb without tense, producing the syntactic error.
Errors in linguistic performance:
6a. Error: Why do you be an oaf sometimes? 6b. Target: Why are you an oaf sometimes? The following example from Fromkin (1980) demonstrates how a rule is not applied when it should be. The subject-auxiliary inversion rule is omitted in the error utterance, causing affix-hopping to occur and putting the tense onto the verb "say", creating the syntactic error. In the target, the subject-auxiliary rule and then do-support apply, creating the grammatically correct structure.
Errors in linguistic performance:
7a. Error: And what he said? 7b. Target: And what did he say? This example from Fromkin (1980) shows how a rule is applied when it should not be. The subject-auxiliary inversion and do-support have applied to an idiomatic expression, causing the insertion of "do" where it should not appear and producing the ungrammatical utterance.
8a. Error: How do we go!! 8b. Target: How we go!! Structural change errors can occur in the carrying out of rules, even though the analysis of the phrase marker is done correctly. This can occur when the analysis requires multiple rules to apply.
Errors in linguistic performance:
The following example from Fromkin (1980) shows the relative clause rule copying the determiner phrase "a boy" within the clause and front-attaching it to the Wh-marker. Deletion is then skipped, leaving the copied determiner phrase in the clause in the error utterance and causing it to be ungrammatical. 9a. Error: A boy who I know a boy has hair down to here.
Errors in linguistic performance:
9b. Target: A boy who I know has hair down to here.
Conditions restrict when a rule can and cannot be applied; condition errors occur when a rule is applied despite such a restriction.
Errors in linguistic performance:
This last example from Fromkin (1980) shows a rule being applied under a condition in which it is restricted. The subject-auxiliary inversion rule cannot apply to embedded clauses; in this example it has applied to one, causing the syntactic error. 10a. Error: I know where is a top for it. 10b. Target: I know where a top for it is. A study of deaf Italians found that the second person singular of indicatives would extend to corresponding forms in imperatives and negative imperatives.
Errors in linguistic performance:
The following is an example taken from Dutch data in which there is verb omission in the embedded clause of the utterance (which is not allowed in Dutch), resulting in a performance error.
A study done with Zulu speaking children with a language delay displayed errors in linguistic performance of lacking proper passive verb morphology.
Errors in linguistic performance:
Slips of the hand The linguistic components of American Sign Language (ASL) can be broken down into four parts: the hand configuration, place of articulation, movement, and other minor parameters. Hand configuration is determined by the shape of the hand, fingers and thumbs and is specific to the sign that is being used. It allows the signer to articulate what they want to communicate by extending, flexing, bending or spreading the digits; the position of the thumb to the fingers; or the curvature of the hand. However, there is not an infinite number of possible hand configurations; there are 19 classes of hand configuration primes as listed by the Dictionary of American Sign Language. Place of articulation is the particular location where the sign is performed, known as the "signing place". The "signing place" can be the whole face or a particular part of it, the eyes, nose, cheek, ear, neck, trunk, any part of the arm, or the neutral area in front of the signer's head and body. Movement is the most complex parameter, as it can be difficult to analyze. Movement is restricted to directional movements, rotations of the wrist, local movements of the hand, and interactions of the hands. These movements can occur singly, in sequence, or simultaneously. Minor parameters in ASL include contacting region, orientation, and hand arrangement. They are subclasses of hand configuration.
Errors in linguistic performance:
Performance errors resulting in ungrammatical signs can result from processes that change the hand configuration, place, movement, or another parameter of the sign. These processes can be anticipation, preservation, or metathesis. Anticipation occurs when some characteristic of the next sign is incorporated into the sign that is presently being performed. Preservation is the opposite of anticipation, where some characteristic of the preceding sign is carried over into the performance of the next sign. Metathesis occurs when two characteristics of adjacent signs are combined into one in the performance of both signs. Each of these errors will result in an incorrect sign being performed. This could result in either a different sign being performed instead of the intended one, or in nonexistent signs, whether of forms that are possible or of forms that are not possible under the structural rules. These are the main types of performance errors in sign language; on rare occasions, however, there is also the possibility of errors in the order of the signs performed, resulting in a different meaning than what the signer intended.
Errors in linguistic performance:
Other types of errors Unacceptable sentences are ones which, although grammatical, are not considered proper utterances. They are considered unacceptable because our cognitive systems cannot readily process them. Speakers and listeners can be aided in the performance and processing of these sentences by eliminating time and memory constraints, increasing motivation to process these utterances, and using pen and paper. In English there are three types of sentences that are grammatical but are considered unacceptable by speakers and listeners.
Errors in linguistic performance:
Repeated self-embedded clauses: The cheese that the rat that the cat chased ate is on the table.
Multi Right Branching: This is the cat that caught the rat that ate the cheese that is on the table.
Errors in linguistic performance:
Ambiguity or garden path sentences: The horse raced past the barn fell.
When a speaker makes an utterance they must translate their ideas into words, and then into syntactically proper phrases with proper pronunciation. The speaker must have prior world knowledge and an understanding of the grammatical rules that their language enforces. When learning a second language, or with children acquiring their first language, speakers usually have this knowledge before they are able to produce such utterances. Their speech is usually slow and deliberate, using phrases they have already mastered, and with practice their skills increase. Errors of linguistic performance are judged by the listener, so an utterance may receive many interpretations as to whether it is well-formed or ungrammatical, depending on the individual. The context in which an utterance is used can also determine whether the error is taken into account. When comparing "Who must telephone her?" and "Who need telephone her?", the former would be considered the ungrammatical phrase. However, when comparing it to "Who want telephone her?" it would be considered the grammatical phrase. The listener may also be the speaker. When repeating sentences containing errors, if the error is not comprehended then it is reproduced. If the speaker does notice the error in the sentence they are supposed to repeat, they are unaware of the difference between their well-formed sentence and the ungrammatical sentence.
Errors in linguistic performance:
An unacceptable utterance can also be performed due to a brain injury. Three types of brain injury that can cause errors in performance, studied by Fromkin, are dysarthria, apraxia, and literal paraphasia. Dysarthria is a defect in the neuromuscular connection that involves speech movement. The speech organs involved can be paralyzed or weakened, making it difficult or impossible for the speaker to produce a target utterance. Apraxia is when there is damage to the ability to initiate speech sounds with no paralysis or weakening of the articulators. Literal paraphasia causes disorganization of linguistic properties, resulting in errors in the ordering of phonemes. Even with a brain injury that leaves them unable to perform proper linguistic utterances, some individuals are still able to process complex sentences and formulate syntactically well-formed sentences in their mind. Children's productions while they are acquiring language are full of errors of linguistic performance. Children must go from imitating adult speech to creating new phrases of their own. They need to apply cognitive operations to their knowledge of the language they are learning in order to determine its rules and properties. The following are examples of errors in English-speaking children's productions: "I goed", "He runned". In an elicited production experiment a child, Adam, was prompted to ask questions to an Old Lady
Performance measures:
Mean length of utterance The most commonly used measure of syntax complexity is the mean length of utterance, also known as MLU. This measure is independent of how often children talk and focuses on the complexity and development of their grammatical systems, including morphological and syntactic development. The number representing a person's MLU corresponds to the complexity of the syntax being used. In general, as the MLU increases, the syntactic complexity also increases. Typically, the average MLU corresponds to a child's age due to their increase in working memory, which allows sentences of greater syntactic complexity. For example, the average MLU of a 7-year-old child is 7 words. However, children show more individual variability of syntactic performance with more complex syntax. Complex syntax has a higher number of phrases and clause levels, therefore adding more words to the overall syntactic structure. Because there are more individual differences in MLU and syntactic development as children get older, MLU is particularly used to measure grammatical complexity among school-aged children. Other types of segmentation strategies for discourse are the T-unit and C-unit (communicative unit). If these two measurements are used to account for discourse, the average length of the sentence will be lower than if MLU is used alone. Both the T-units and C-units count each clause as a new unit, hence a lower number of units.
Performance measures:
Typical MLU per age group can be found in the following table, according to Roger Brown's five stages of syntactic and morphological development. Here are the steps for calculating MLU:
1. Acquire a language sample of about 50-100 utterances.
2. Count the number of morphemes said by the child, then divide by the number of utterances.
3. The investigator can assess what stage of syntactic development the child is at, based on their MLU.
Here's an example of how to calculate MLU: In total there are 17 morphemes in this data set. In order to find the MLU, we divide the total number of morphemes (17) by the total number of utterances (4). In this particular data set, the mean length of utterance is 17/4 = 4.25.
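Since the sample utterances from the worked example above are not reproduced here, the short sketch below uses a hypothetical four-utterance sample with the same totals (17 morphemes over 4 utterances) to show the calculation.

```python
def mean_length_of_utterance(utterances):
    """Compute MLU: total morphemes divided by the number of utterances."""
    total_morphemes = sum(len(u) for u in utterances)
    return total_morphemes / len(utterances)

# Hypothetical sample: each utterance is listed as its morphemes
# (e.g. "doggie run-ing fast" counts as four morphemes: doggie, run, -ing, fast).
sample = [
    ["I", "want", "cookie"],
    ["doggie", "run", "-ing", "fast"],
    ["mommy", "go", "-ed", "to", "store"],
    ["he", "eat", "-s", "the", "apple"],
]

print(mean_length_of_utterance(sample))  # 17 / 4 = 4.25
```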
Performance measures:
Clause density Clause density refers to the degree to which utterances contain dependent clauses. This density is calculated as a ratio: the total number of clauses across sentences divided by the number of sentences in a discourse sample. For example, if the clause density is 2.0, the ratio would indicate that the sentences being analyzed have 2 clauses on average: one main clause and one subordinate clause.
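A minimal sketch of that ratio, using hypothetical clause counts per sentence:

```python
def clause_density(clauses_per_sentence):
    """Total clauses divided by the number of sentences in the sample."""
    return sum(clauses_per_sentence) / len(clauses_per_sentence)

# Hypothetical sample of four sentences containing 2, 1, 3 and 2 clauses.
print(clause_density([2, 1, 3, 2]))  # 8 / 4 = 2.0
```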
Performance measures:
Here is an example of how clause density is measured, using T-units, adapted from Silliman & Wilkinson 2007.
Indices of syntactic performance Indices track structures to show a more comprehensive picture of a person's syntactic complexity. Some examples of indices are Developmental Sentence Scoring, the Index of Productive Syntax, and the Syntactic Complexity Measure.
Performance measures:
Developmental sentence scoring Developmental Sentence Scoring is another method to measure syntactic performance as a clinical tool. In this index, each consecutive utterance, or sentence, elicited from a child is scored. This is a commonly applied measurement of syntax for first and second language learners, with samples gathered from both elicited and spontaneous oral discourse. Methods for eliciting speech for these samples come in many forms, such as having the participant answer questions or re-tell a story. These elicited conversations are commonly tape-recorded for playback during analysis to see how well the person can incorporate syntax among other linguistic cues. Every utterance elicited receives one point if it is a correct form used in adult speech. A score of 1 indicates the least complex syntactic form in the category, whereas a higher score reflects a higher level of grammaticality. Points are specifically awarded to an utterance based on whether or not it contains any of the eight categories outlined below.
Syntactic categories measured by developmental sentence scoring, with examples:
Indefinite pronouns. 11a. Score of 1: it, this, that. 11b. Score of 6: both, many, several, most, least.
Personal pronouns. 12a. Score of 1: I, me, my, mine, you, your(s). 12b. Score of 6: wh-pronouns (i.e. who, which, what, how) and wh-word + infinitive (i.e. I know what to do).
Main verb. 13a. Score of 1: uninflected verb (i.e. I "see" you) and copula is or 's (i.e. It's red). 13b. Score of 6: must, shall + verb (i.e. He "must come" or We "shall see"), have + verb + '-en' (i.e. I have eaten).
Secondary verb. 14a. Score of 1: infinitival complements (i.e. I wan"na see" = I want to see). 14b. Score of 6: gerund (i.e. Swinging is fun).
Negatives. 15a. Score of 1: it, this or that + copula or auxiliary is or 's + not (i.e. It's "not" mine). 15b. Score of 5: uncontracted negative with 'have' (i.e. I have "not" eaten it), auxiliary 'have'-negative contraction (i.e. I had"n't" eaten it), pronoun-auxiliary 'have' contraction (i.e. I've "not" eaten it).
Conjunctions. 16a. Score of 1: and. 16b. Score of 6: where, than, how.
Interrogative reversals. 17a. Score of 1: reversal of copula (i.e. "Is it" red?). 17b. Score of 5: reversal with three auxiliaries (i.e. "Could he" have been going?).
Wh-questions. 18a. Score of 1: who or what (i.e. "What" do you mean?), what + noun (i.e. "What book" are you reading?). 18b. Score of 5: whose or which (i.e. "Which" do you want?), which + noun (i.e. "Which book" do you want?).
In particular, the categories that appear earliest in speech receive a lower score, whereas later-appearing categories receive a higher score. If an entire sentence is correct according to adult-like forms, the utterance receives an extra point. The eight categories above are the most commonly used structures in syntactic formation, so structures such as possessives, articles, plurals, prepositional phrases, adverbs and descriptive adjectives were omitted and not scored. Additionally, the scoring system is arbitrary when applied to certain structures; for example, there is no indication as to why "if" would receive four points rather than five. The scores of all the utterances are totalled at the end of the analysis and then averaged to get a final score. This means that the individual's final score reflects their overall syntactic complexity level, rather than their syntactic level in a specific category.
The main advantage of developmental sentence scoring is that the final score represents the individual's general syntactic development and allows for easier tracking of changes in language development, making this tool effective for longitudinal studies.
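As an illustration of how such an index is tallied, here is a minimal, hypothetical sketch; the real Developmental Sentence Scoring chart assigns category-specific point values, and the numbers below are placeholders rather than the published weights.

```python
def score_utterance(category_points, adult_like):
    """Sum the points earned in each scored category for one utterance,
    plus one bonus point if the whole sentence is correct by adult standards."""
    return sum(category_points) + (1 if adult_like else 0)

def developmental_sentence_score(utterance_scores):
    """The final score is the average of the per-utterance scores."""
    return sum(utterance_scores) / len(utterance_scores)

# Hypothetical sample of three scored utterances.
scores = [
    score_utterance([1, 2], adult_like=True),     # e.g. indefinite + personal pronoun
    score_utterance([6, 1], adult_like=False),    # e.g. gerund + conjunction "and"
    score_utterance([1, 1, 5], adult_like=True),  # e.g. pronoun + main verb + wh-question
]
print(developmental_sentence_score(scores))  # average across utterances
```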
Performance measures:
Index of productive syntax Similar to Developmental Sentence Scoring, the Index of Productive Syntax evaluates the grammatical complexity of spontaneous language samples. After age 3, the Index of Productive Syntax becomes more widely used than MLU to measure syntactic complexity in children. This is because at around age 3, MLU does not distinguish between children of similar language competency as well as the Index of Productive Syntax does. For this reason, MLU is initially used in early childhood development to track syntactic ability, and the Index of Productive Syntax is then used to maintain validity. Individual utterances in a discourse sample are scored based on the presence of 60 different syntactic forms, placed more generally under four subscales: noun phrase, verb phrase, question/negation, and sentence structure forms. After a sample is recorded, a corpus is formed from 100 transcribed utterances, with the 60 language structures being measured in each utterance. Not included in the corpus are imitations, self-repetitions and routines, which constitute language that does not represent productive language usage. In each of the four sub-scales previously mentioned, the first two unique occurrences of a form are scored; after this, further occurrences within that sub-scale are not scored. However, if a child has mastered a complex syntax structure earlier than expected, they will receive extra points.
Performance measures:
Standardized tests The six main tasks in standardized testing for syntax:
What is the level of syntactic complexity?
What specific syntactic structures are found? (a syntactic content analysis)
Are specific structures representative of what is known about syntactic development within the age range of the standardization sample?
What are the processing requirements of the test format? (a task analysis)
Are processing requirements similar to or different from language processing in more naturalistic contexts?
Is syntactic ability in naturalistic language predicted by performance on the test?
Some of the common standardized tests for measuring syntactic performance are the TOLD-2 Intermediate (Test of Language Development), the TOAL-2 (Test of Adolescent Language) and the CELF-R (Clinical Evaluation of Language Fundamentals, Revised Screening Test).
**Development of the reproductive system**
Development of the reproductive system:
The development of the reproductive system is the part of embryonic growth that results in the sex organs and contributes to sexual differentiation. Due to its large overlap with development of the urinary system, the two systems are typically described together as the urogenital or genitourinary system.
Development of the reproductive system:
The reproductive organs develop from the intermediate mesoderm and are preceded by more primitive structures that are superseded before birth. These embryonic structures are the mesonephric ducts (also known as Wolffian ducts) and the paramesonephric ducts (also known as Müllerian ducts). The mesonephric duct gives rise to the male seminal vesicles, epididymises and vas deferens. The paramesonephric duct gives rise to the female fallopian tubes, uterus, cervix, and upper part of the vagina.
Mesonephric ducts:
The mesonephric duct originates from a part of the pronephric duct.
Mesonephric ducts:
Origin In the outer part of the intermediate mesoderm, immediately under the ectoderm, in the region from the fifth cervical segment to the third thoracic segment, a series of short evaginations from each segment grows dorsally and extends caudally, fusing successively from before backward to form the pronephric duct. This continues to grow caudally until it opens into the ventral part of the cloaca; beyond the pronephros it is termed the mesonephric duct. Thus, the mesonephric duct remains after the atrophy of the pronephros duct.
Mesonephric ducts:
Development in male In the male the duct persists, and forms the tube of the epididymis, the vas deferens and the ejaculatory duct, while the seminal vesicle arises during the third month as a lateral diverticulum from its hinder end. A large part of the head end of the mesonephros atrophies and disappears; of the remainder the anterior tubules form the efferent ducts of the testicle; while the posterior tubules are represented by the ductuli aberrantes, and by the paradidymis, which is sometimes found in front of the spermatic cord above the head of the epididymis.
Mesonephric ducts:
Atrophy in female In the female the mesonephric bodies and ducts atrophy. The nonfunctional remains of the mesonephric tubules are represented by the epoophoron, and the paroöphoron, two small collections of rudimentary blind tubules which are situated in the mesosalpinx.
Remnants The lower part of the mesonephric duct disappears, while the upper part persists as the longitudinal duct of the epoöphoron, called Gartner's duct.
There are also developments of other tissues from the mesonephric duct that persist, e.g. the development of the suspensory ligament of the ovary.
Paramesonephric ducts:
Shortly after the formation of the mesonephric ducts a second pair of ducts is developed; these are the paramesonephric ducts. Each arises on the lateral aspect of the corresponding mesonephric duct as a tubular invagination of the cells lining the abdominal cavity. The orifice of the invagination remains open, and undergoes enlargement and modification to form the distal tubal opening (abdominal ostium) of the fallopian tube. The ducts pass backward lateral to the mesonephric ducts, but toward the posterior end of the embryo they cross to the medial side of these ducts, and thus come to lie side by side between and behind the latter—the four ducts forming what is termed the common genital cord, to distinguish it from the genital cords of the germinal epithelium seen later in this article. The paramesonephric ducts end in an epithelial elevation, the sinus tubercle, on the ventral part of the cloaca between the orifices of the mesonephric ducts. At a later stage the sinus tubercle opens in the middle, connecting the paramesonephric ducts with the cloaca.
Paramesonephric ducts:
Atrophy in males In the male the paramesonephric ducts atrophy (but traces of their anterior ends are represented by the appendix of testis of the male), while their terminal fused portions form the prostatic utricle in the floor of the prostatic urethra. This is due to the production of Anti-Müllerian hormone by the Sertoli cells of the testes.
Development in females In the female the paramesonephric ducts persist and undergo further development. The portions which lie in the genital cord fuse to form the uterus and vagina. This fusion of the paramesonephric ducts begins in the third month, and the septum formed by their fused medial walls disappears from below upward.
The parts outside this cord remain separate, and each forms the corresponding Fallopian tube. The ostium of the fallopian tube remains from the anterior extremity of the original tubular invagination from the abdominal cavity.
Paramesonephric ducts:
About the fifth month a ring-like constriction marks the position of the cervix of the uterus, and after the sixth month the walls of the uterus begin to thicken. For a time the vagina is represented by a solid rod of epithelial cells. A ring-like outgrowth of this epithelium occurs at the lower end of the uterus and marks the future vaginal fornix. At about the fifth or sixth month the lumen of the vagina is produced by the breaking down of the central cells of the epithelium. The hymen represents the remains of the sinus tubercle.
Gonads:
The gonads are the precursors of the testes in males and ovaries in females. They initially develop from the mesothelial layer of the peritoneum.
Gonads:
Ovaries The ovary is differentiated into a central part, the medulla of ovary, covered by a surface layer, the germinal epithelium. The immature ova originate from cells from the dorsal endoderm of the yolk sac. Once they have reached the gonadal ridge they are called oogonia. Development proceeds and the oogonia become fully surrounded by a layer of connective tissue cells (pre-granulosa cells). In this way, the rudiments of the ovarian follicles are formed. The embryological origin of granulosa cells, on the other hand, remains controversial. Just as in the male, there is a gubernaculum in the female, which pulls the ovary downward, albeit not as much as in males. The gubernaculum later becomes the proper ovarian ligament and the round ligament of the uterus.
Gonads:
Testes The periphery of the testes is converted into the tunica albuginea. Cords of the central mass run together and form a network which becomes the rete testis, and another network, which develops into the seminiferous tubules. Via the rete testis, the seminiferous tubules become connected with outgrowths from the mesonephros, which form the efferent ducts of the testis.
Gonads:
In short, the descent of the testes consists of the opening of a connection from the testis to its final location at the anterior abdominal wall, followed by the development of the gubernaculum, which subsequently pulls and translocates the testis down into the developing scrotum. Ultimately, the passageway closes behind the testis. A failure in this process can cause indirect inguinal hernia or an infantile hydrocoele.
Division of cloaca:
After the separation of the rectum from the dorsal part of the cloaca, the ventral part becomes the primary urogenital sinus. The urogenital sinus, in turn, divides into the superficial definitive urogenital sinus and the deeper anterior vesico-urethral portion.
Definitive urogenital sinus The definitive urogenital sinus consists of a caudal phallic portion and an intermediate narrow channel, the pelvic portion.
Division of cloaca:
Vesico-urethral portion The vesico-urethral portion is the deepest portion, continuous with the allantois. It absorbs the ends of the mesonephric ducts and the associated ends of the renal diverticula, and these give rise to the trigone of urinary bladder and part of the prostatic urethra. The remainder of the vesico-urethral portion forms the body of the bladder and part of the prostatic urethra; its apex is prolonged to the umbilicus as a narrow canal, the urachus, which later is obliterated and becomes the median umbilical ligament of the adult.
Prostate:
The prostate originally consists of two separate portions, each of which arises as a series of diverticular buds from the epithelial lining of the urogenital sinus and vesico-urethral part of the cloaca, between the third and fourth months. These buds become tubular, and form the glandular substance of the two lobes, which ultimately meet and fuse behind the urethra and also extend on to its ventral aspect. The median lobe of the prostate is formed as an extension of the lateral lobes between the common ejaculatory ducts and the bladder.
Prostate:
Skene's glands in the female urethra are regarded as the homologues of the prostatic glands.
The bulbourethral glands in the male, and Bartholin's gland in the female, also arise as diverticula from the epithelial lining of the urogenital sinus.
External genitalia:
Until about the ninth week of gestational age the external genitalia of males and females look the same, and follow a common development. This includes the development of a genital tubercle and a membrane dorsally to it, covering the developing urogenital opening, and the development of the labioscrotal fold, also called the urogenital fold, and the labioscrotal swelling. Even after differentiation can be seen between the sexes, some stages are common, e.g. the disappearing of the membrane. On the other hand, sex-dependent development includes further protrusion of the genital tubercle in the male to form the glans of the penis and, in the female, the clitoral glans. The urogenital fold evolves into the shaft of the penis in males and the shaft of the clitoris in females; the labioscrotal swelling evolves into the scrotum in males, and into the labia majora in females.
External genitalia:
Common development Before differentiation Urogenital membrane There is initially a cloacal membrane, composed of ectoderm and endoderm, reaching from the umbilical cord to the tail, separating the cloaca from the exterior. After the separation of the rectum from the dorsal part of the cloaca, the ventral part of the cloacal membrane becomes the urogenital membrane.
External genitalia:
Genital tubercle Mesoderm extends to the midventral line for some distance behind the umbilical cord, and forms the lower part of the abdominal wall; it ends below in a prominent swelling, the cloacal tubercle, which after the separation of the rectum becomes the genital tubercle. Dorsally to this tubercle the sides are not really fused. Rather, the urogenital part of the cloacal membrane separates the ingrowing sheets of mesoderm.
External genitalia:
Phallus The genital tubercle develops into the primordial phallus, the first rudiment of the penis or clitoris. The terminal part of the phallus, representing the future glans, becomes solid. The remainder of the phallus, which remains hollow, is converted into a longitudinal groove by the absorption of the urogenital membrane.
External genitalia:
The term genital tubercle, however, still remains, but now refers only to the future glans. Urogenital opening In both sexes the phallic portion of the urogenital sinus extends on to the under surface of the cloacal tubercle as far forward as the apex. At the apex the walls of the phallic portion come together and fuse, obliterating the urogenital opening. Instead, a solid plate, the urethral plate, is formed. The remainder of the phallic portion is for a time tubular, and then, by the absorption of the urogenital membrane, it establishes a communication with the exterior. This opening is for a while the primitive urogenital opening, and it extends forward to the corona glandis.
External genitalia:
After differentiation The following developments occur in both males and females, although a difference in the development between the sexes already can be seen: The corpus cavernosum penis, and the corpus cavernosum of clitoris, and the corpus spongiosum penis arise from the mesodermal tissue in the phallus; they are at first dense structures, but later vascular spaces appear in them, and they gradually become cavernous.
External genitalia:
The prepuce in both sexes is formed by the growth of a solid plate of ectoderm into the superficial part of the phallus; on coronal section this plate presents the shape of a horseshoe. By the breaking down of its more centrally situated cells the plate is split into two lamellæ. Thus, a cutaneous fold, the prepuce, is liberated and forms a hood over the glans.
External genitalia:
Female In the female, a deep groove forms around the phallus. The sides of it grow dorsalward as the labioscrotal folds, which ultimately form the labia majora in females. The labia minora, in contrast, arise by the continued growth of the lips of the groove on the under surface of the phallus; the remainder of the phallus forms the clitoral glans.
External genitalia:
Male The labioscrotal folds extend around between the pelvic portion and the anus, and form a scrotal area. During the changes associated with the descent of the testes this scrotal area is drawn out to form the scrotal sacs. The penis is developed from the phallus.
As in the female, the urogenital membrane undergoes absorption, forming a channel on the under surface of the phallus; this channel extends only as far forward as the corona glandis.
External genitalia:
Urogenital opening Later, this opening, which is located on the dorsal side of the penis, closes from behind forward. Meanwhile, the urethral plate of the glans breaks down centrally to form a median groove continuous with the primitive ostium. This groove also closes from behind forward, leaving only a small pipe running in the middle of the penis. Thus, the urogenital opening is shifted forward to the end of the glans.
Diagram of internal differentiation:
A.—Diagram of the primitive urogenital organs in the embryo previous to sexual distinction. 3. Ureter.
4. Urinary bladder.
5. Urachus.
cl. Cloaca.
cp. Elevation which becomes clitoris or penis.
i. Lower part of the intestine.
ls. Fold of integument from which the labia majora or scrotum are formed.
m, m. Right and left Müllerian ducts uniting together and running with the Wolffian ducts in gc, the genital cord.
ot. The genital ridge from which either the ovary or testis is formed.
ug. Sinus urogenitalis.
W. Left Wolffian body.
w, w. Right and left Wolffian ducts.
B.—Diagram of the female type of sexual organs.
C. Greater vestibular gland, and immediately above it the urethra.
cc. Corpus cavernosum clitoridis.
dG. Remains of the left Wolffian duct, such as give rise to the duct of Gärtner, represented by dotted lines; that of the right side is marked w.
f. The abdominal opening of the left uterine tube.
g. Round ligament, corresponding to gubernaculum.
h. Situation of the hymen.
i. Lower part of the intestine.
l. Labium majus.
n. Labium minus.
o. The left ovary.
po. Epoophoron.
sc. Corpus cavernosum urethrae.
u. Uterus. The uterine tube of the right side is marked m.
v. Vulva.
va. Vagina.
W. Scattered remains of Wolffian tubes near it (paroöphoron of Waldeyer).
C.—Diagram of the male type of sexual organs.
C. Bulbo-urethral gland of one side.
cp. Corpora cavernosa penis cut short.
e. Caput epididymis.
g. The gubernaculum.
i. Lower part of the intestine.
m. Müllerian duct, the upper part of which remains as the hydatid of Morgagni; the lower part, represented by a dotted line descending to the prostatic utricle, constitutes the occasionally existing cornu and tube of the uterus masculinus.
pr. The prostate.
s. Scrotum.
sp. Corpus cavernosum urethrae.
t. Testis in the place of its original formation.
t’, together with the dotted lines above, indicates the direction in which the testis and epididymis descend from the abdomen into the scrotum.
vd. Ductus deferens.
vh. Ductus aberrans.
vs. The vesicula seminalis.
W. Scattered remains of the Wolffian body, constituting the organ of Giraldès, or the paradidymis of Waldeyer.
**BestCrypt**
BestCrypt:
BestCrypt, developed by Jetico, is a commercial disk encryption app available for Windows, Linux, macOS and Android. BestCrypt comes in two editions: BestCrypt Volume Encryption, to encrypt entire disk volumes, and BestCrypt Container Encryption, to encrypt virtual disks stored as computer files. BestCrypt also provides the complementary data erasure utility BCWipe.
Cryptographic Algorithms:
BestCrypt supports a wide variety of block cipher algorithms including AES, Serpent, Blowfish, Twofish, DES, Triple DES, GOST 28147-89. All ciphers support CBC and LRW modes of operation while AES, Twofish and Serpent also support XTS mode.
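As a generic illustration of block cipher modes like those listed above (and not of BestCrypt's own interface), the sketch below encrypts and decrypts a small buffer with AES in CBC mode using the widely available Python cryptography library:

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # 256-bit AES key
iv = os.urandom(16)    # CBC initialization vector, one block long

# Pad the plaintext to a whole number of 128-bit blocks (CBC requires this).
padder = padding.PKCS7(128).padder()
padded = padder.update(b"example container data") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()

assert recovered == b"example container data"
```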
Features:
Create and mount a virtual drive encrypted using AES, Blowfish, Twofish, CAST-128 and various other encryption methods. BestCrypt v.8 and higher can alternatively mount a subfolder on a NTFS disk instead of a drive. Encrypted virtual disk images are compatible across Windows, Linux and Mac OS X.
Encrypt a set of files into a single, self-extracting archive.
Transparently encrypt entire partitions or volumes together with pre-boot authentication for encrypted boot partitions.
Two-factor authentication.
Support for size-efficient Dynamic Containers with the Smart Free Space Monitoring technology.
Hardware accelerated encryption.
Anti-keylogging facilities to protect container and volume passwords.
Data erasure utility BCWipe to erase unprotected copies of data to complement encryption.
Secret sharing and Public Key authentication methods in addition to basic password-based authentication. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Canine pancreatitis**
Canine pancreatitis:
Canine pancreatitis is inflammation of the pancreas that can occur in two very different forms. Acute pancreatitis is sudden, while chronic pancreatitis is a recurring or persistent form of pancreatic inflammation. Cases of both can be considered mild or severe.
Background:
The pancreas is composed of two sections: the smaller endocrine portion, which is responsible for producing hormones such as insulin, somatostatin, and glucagon, and the larger, exocrine portion, which produces enzymes needed for the digestion of food. Acinar cells make up 82% of the total pancreas; these cells are responsible for the production of the digestive enzymes.
Pathophysiology:
Pancreatitis is caused by autodigestion of the pancreas, thought to begin with an increase in secretion of pancreatic enzymes in response to a stimulus, which can be anything from table scraps to getting into the garbage, to drugs, toxins, and trauma. The digestive enzymes are released too quickly and begin acting on the pancreas instead of the food they normally digest. Once the process cascades, inflammatory mediators and free radicals are released and pancreatitis develops, causing amplification of the process.
Clinical signs:
The clinical signs can vary from mild gastrointestinal upset to death, with most dogs presenting with common signs of gastrointestinal upset, such as vomiting, anorexia, painful abdomen, hunched posture, diarrhea, fever, dehydration, and lack of energy, with vomiting being the most common symptom. These signs are not specific to pancreatitis and may be associated with other gastrointestinal diseases and conditions. Acute pancreatitis can trigger a build-up of fluid, particularly in abdominal and thoracic (chest) areas, acute kidney injury, and inflammation in arteries and veins. The inflammation triggers the body's clotting factors, possibly depleting them to the point of spontaneous bleeding. This form can be fatal in animals and in humans. Chronic pancreatitis can be present even though no clinical signs of the disease are seen. Pancreatitis can result in exocrine pancreatic insufficiency if the organ's acinar cells are permanently damaged; the pancreatic enzymes then need replacement with pancrelipase or similar products. The damage can also extend into the endocrine portion of the pancreas, resulting in diabetes mellitus. Whether the diabetes is transient (temporary) or permanent depends on the severity of the damage to the endocrine pancreas beta cells.
Risk factors:
Although various causes of dog pancreatitis are known, such as drugs, fatty diet, and trauma, the pathophysiology is very complex. Pancreatitis can be idiopathic; no real causative factor can be found. Obese animals as well as animals fed a diet high in fat may be more prone to developing acute and chronic pancreatitis. Certain breeds of dogs are considered predisposed to developing pancreatitis, including Miniature Schnauzers, Cocker Spaniels, and some terrier breeds. Miniature Schnauzers as a breed tend toward developing hyperlipidemia, an excess of circulating fats in the blood. The breed that appears to be at risk for the acute form of pancreatitis is the Yorkshire Terrier, while Labrador Retrievers and Miniature Poodles seem to have a decreased risk for the acute form of the disease. Genetics may play a part in the risk. Dogs suffering from diabetes mellitus, Cushing's disease (hyperadrenocorticism), hypothyroidism, and epilepsy are at increased risk for pancreatitis. Diabetes and hypothyroidism are also associated with hyperlipidemia. Dogs with other types of gastrointestinal conditions and dogs that have had previous pancreatitis attacks are also at increased risk for the disorder.
Treatment:
No treatments for canine pancreatitis have been approved. Treatment for this disease is supportive, and may require hospitalization to attend to the dog's nutritional and fluid needs, pain management, and addressing any other disease processes (infection, diabetes, etc.) while letting the pancreas heal on its own. Treatment often involves "resting" the pancreas for a short period of time, during which the patient receives no food or fluids by mouth but is fed and hydrated by intravenous fluids and a feeding tube. Dehydration is also managed by the use of fluid therapy. However, a specialist from Texas A&M University has stated, "There is no evidence whatsoever that withholding food has any beneficial effect." Other specialists have agreed with his opinion. Canine pancreatitis is complex, which often limits the ability to approach the disease.
Postpancreatitis management:
A low-fat diet is indicated. The use of drugs that are known to have an association with pancreatitis should be avoided. Some patients benefit from the use of pancreatic enzymes on a supplemental basis. One study indicated that 57% of dogs followed for six months after an acute pancreatitis attack either continued to exhibit inflammation of the organ or had decreased acinar cell function, though they had no pancreatitis symptoms.
**Jeanne (crater)**
Jeanne (crater):
Jeanne is an impact crater on Venus.
Jeanne (crater):
The distinctive triangular shape of the ejecta indicates that the impacting body probably hit obliquely, traveling from southwest to northeast. The crater is surrounded by dark material of two types. The dark area on the southwest side of the crater is covered by smooth (radar-dark) lava flows which have a strongly digitate contact with surrounding brighter flows. The very dark area on the northeast side of the crater is probably covered by smooth material such as fine-grained sediment. This dark halo is asymmetric, mimicking the asymmetric shape of the ejecta blanket. The dark halo may have been caused by an atmospheric shock or pressure wave produced by the incoming body. Jeanne crater also displays several outflow lobes on the northwest side. These flow-like features may have formed by fine-grained ejecta transported by a hot, turbulent flow created by the arrival of the impacting object. Alternatively, they may have formed by flow of impact melt.
**Eureqa**
Eureqa:
Eureqa is a proprietary modeling engine originally created by Cornell's Artificial Intelligence Lab and later commercialized by Nutonian, Inc. The software uses evolutionary search to determine mathematical equations that describe sets of data in their simplest form. This task is generally referred to as symbolic regression in the literature.
Origin and Development:
Since the 1970s, the primary way companies have performed data science has been to hire teams of data scientists, and equip them with tools like R, Python, SAS, and SQL to execute predictive and statistical modeling. In 2007, Michael Schmidt, then a PhD student in Computational Biology at Cornell, believed that the volume of data and complexity of problems that humans could solve were ever-increasing, and the number of data scientists was not. Instead of relying on more people to fill the data science gap, Schmidt and his advisor, Hod Lipson, invented Eureqa, believing machines could extract meaning from data automatically. Eureqa is an artificial intelligence-powered "Virtual Data Scientist" that automatically builds predictive and analytical models, and allows domain experts to rapidly iterate on them. TechCrunch has called Eureqa one of the first examples of Machine Intelligence – the subfield of A.I. that automates the discovery and explanation of answers from data. In early November 2009 the program was made available to download for free by anyone. Lipson described the machine's benefit in dealing with fields that are overwhelmed with data but lack theory to explain it. In the October 2011 edition of "Physical Biology", Lipson described a yeast experiment that predicted seven known equations. This took place after Lipson had asked scientists from different disciplines to share their work to test Eureqa's versatility. The program was named Eureqa after Archimedes' famous expression "Eureka!", with the k replaced by a q to evoke the word equation.
Technology:
Eureqa works by generating random candidate equations for the data through evolutionary search. Most of the equations do not fit the data well, but a few will fit the data better than the others, and those are used as the basis of a new round of several billion more equations until a sufficiently good fit is reached. This approach has been used to discover formulas with "invariant relationships", such as laws of nature.
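The sketch below illustrates the general idea of evolutionary symbolic regression described above on a toy problem: generate random expressions, score them against the data, keep the best fits, and use mutated copies of them as the basis of the next round. It is a greatly simplified illustration of the technique, not Eureqa's actual algorithm or code; the target function, population sizes, and mutation rate are all invented for the example.

```python
import random

# Toy data generated from a hidden "law" the search should rediscover.
X = [x / 10 for x in range(-30, 31)]
Y = [3 * x * x + 2 for x in X]

OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_expr(depth=3):
    """Build a random expression tree over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-5, 5)])
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    return op(evaluate(a, x), evaluate(b, x))

def error(expr):
    """Sum of squared errors of the candidate equation against the data."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in zip(X, Y))

def mutate(expr, p=0.2):
    """Randomly replace subtrees, so good candidates seed the next round."""
    if random.random() < p:
        return random_expr(2)
    if isinstance(expr, tuple):
        op, a, b = expr
        return (op, mutate(a, p), mutate(b, p))
    return expr

population = [random_expr() for _ in range(200)]
for generation in range(50):
    population.sort(key=error)
    parents = population[:20]                                     # best-fitting equations survive
    population = parents + [mutate(random.choice(parents)) for _ in range(180)]

print("best error:", error(min(population, key=error)))
```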
Reception and Use:
As of 2015, over 80,000 people, including researchers, students, and Fortune 500 companies, have made use of the program. The application has been used for many purposes, such as analyzing the herding of cattle and understanding the behavior of the stock market.
**Truncated 5-cell**
Truncated 5-cell:
In geometry, a truncated 5-cell is a uniform 4-polytope (4-dimensional uniform polytope) formed as the truncation of the regular 5-cell.
There are two degrees of truncations, including a bitruncation.
Truncated 5-cell:
The truncated 5-cell, truncated pentachoron or truncated 4-simplex is bounded by 10 cells: 5 tetrahedra, and 5 truncated tetrahedra. Each vertex is surrounded by 3 truncated tetrahedra and one tetrahedron; the vertex figure is an elongated tetrahedron.
Construction: The truncated 5-cell may be constructed from the 5-cell by truncating its vertices at 1/3 of its edge length. This transforms the 5 tetrahedral cells into truncated tetrahedra, and introduces 5 new tetrahedral cells positioned near the original vertices.
Structure: The truncated tetrahedra are joined to each other at their hexagonal faces, and to the tetrahedra at their triangular faces.
Seen in a configuration matrix, all incidence counts between elements are shown. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order by the order of the subgroup obtained by removing one mirror at a time.
Projections: The truncated tetrahedron-first Schlegel diagram projection of the truncated 5-cell into 3-dimensional space has the following structure: The projection envelope is a truncated tetrahedron.
One of the truncated tetrahedral cells projects onto the entire envelope.
One of the tetrahedral cells projects onto a tetrahedron lying at the center of the envelope.
Four flattened tetrahedra are joined to the triangular faces of the envelope, and connected to the central tetrahedron via 4 radial edges. These are the images of the remaining 4 tetrahedral cells.
Truncated 5-cell:
Between the central tetrahedron and the 4 hexagonal faces of the envelope are 4 irregular truncated tetrahedral volumes, which are the images of the 4 remaining truncated tetrahedral cells. This layout of cells in projection is analogous to the layout of faces in the face-first projection of the truncated tetrahedron into 2-dimensional space. The truncated 5-cell is the 4-dimensional analogue of the truncated tetrahedron.
Truncated 5-cell:
Alternate names: truncated pentatope, truncated 4-simplex, truncated pentachoron (acronym: tip, Jonathan Bowers).
Coordinates: More simply than the Cartesian coordinates of an origin-centered truncated 5-cell having edge length 2, the vertices of the truncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,0,1,2) or of (0,1,2,2,2). These coordinates come from positive orthant facets of the truncated pentacross and bitruncated penteract respectively.
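The 5-space construction above can be sanity-checked with a few lines of code: enumerating the distinct permutations of (0,0,0,1,2) gives 20 vertices, and they all lie on a common hyperplane (constant coordinate sum), as stated. This is only an illustrative sketch of the stated construction.

```python
from itertools import permutations

# Vertices of the truncated 5-cell as permutations of (0,0,0,1,2) in 5-space.
vertices = sorted(set(permutations((0, 0, 0, 1, 2))))
print(len(vertices))                       # 20 distinct vertices
print(all(sum(v) == 3 for v in vertices))  # True: all lie on the hyperplane x1+...+x5 = 3
```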
Truncated 5-cell:
Related polytopes: The convex hull of the truncated 5-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 60 cells (10 tetrahedra, 20 octahedra as triangular antiprisms, and 30 tetrahedra as tetragonal disphenoids) and has 40 vertices. Its vertex figure is a hexakis triangular cupola.
Bitruncated 5-cell:
The bitruncated 5-cell (also called a bitruncated pentachoron, decachoron and 10-cell) is a 4-dimensional polytope, or 4-polytope, composed of 10 cells in the shape of truncated tetrahedra.
Topologically, under its highest symmetry, [[3,3,3]], there is only one geometrical form, containing 10 uniform truncated tetrahedra. The hexagons are always regular because of the polychoron's inversion symmetry, of which the regular hexagon is the only such case among ditrigons (an isogonal hexagon with 3-fold symmetry).
E. L. Elte identified it in 1912 as a semiregular polytope.
Each hexagonal face of the truncated tetrahedra is joined in complementary orientation to the neighboring truncated tetrahedron. Each edge is shared by two hexagons and one triangle. Each vertex is surrounded by 4 truncated tetrahedral cells in a tetragonal disphenoid vertex figure.
Bitruncated 5-cell:
The bitruncated 5-cell is the intersection of two pentachora in dual configuration. As such, it is also the intersection of a penteract with the hyperplane that bisects the penteract's long diagonal orthogonally. In this sense it is a 4-dimensional analog of the regular octahedron (intersection of regular tetrahedra in dual configuration / tesseract bisection on long diagonal) and the regular hexagon (equilateral triangles / cube). The 5-dimensional analog is the birectified 5-simplex, and the n-dimensional analog is the polytope whose Coxeter–Dynkin diagram is linear with rings on the middle one or two nodes.
Bitruncated 5-cell:
The bitruncated 5-cell is one of the two non-regular convex uniform 4-polytopes which are cell-transitive. The other is the bitruncated 24-cell, which is composed of 48 truncated cubes.
Symmetry: This 4-polytope has a higher extended pentachoric symmetry (2×A4, [[3,3,3]]), doubled to order 240, because the element corresponding to any element of the underlying 5-cell can be exchanged with one of those corresponding to an element of its dual.
Bitruncated 5-cell:
Alternative names: bitruncated 5-cell (Norman W. Johnson), 10-cell (as a cell-transitive 4-polytope), bitruncated pentachoron, bitruncated pentatope, bitruncated 4-simplex, decachoron (acronym: deca, Jonathan Bowers).
Coordinates: More simply than the Cartesian coordinates of an origin-centered bitruncated 5-cell having edge length 2, the vertices of the bitruncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,1,2,2). These represent positive orthant facets of the bitruncated pentacross. Another 5-space construction, centered on the origin, consists of all 20 permutations of (−1,−1,0,1,1).
Related polytopes:
The bitruncated 5-cell can be seen as the intersection of two regular 5-cells in dual positions.
Configuration: Seen in a configuration matrix, all incidence counts between elements are shown. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order by the order of the subgroup obtained by removing one mirror at a time.
Related polytopes:
Related regular skew polyhedron: The regular skew polyhedron, {6,4|3}, exists in 4-space with 4 hexagonal faces around each vertex, in a zig-zagging nonplanar vertex figure. These hexagonal faces can be seen on the bitruncated 5-cell, using all 60 edges and 30 vertices. The 20 triangular faces of the bitruncated 5-cell can be seen as removed. The dual regular skew polyhedron, {4,6|3}, is similarly related to the square faces of the runcinated 5-cell.
Related polytopes:
Disphenoidal 30-cell: The disphenoidal 30-cell is the dual of the bitruncated 5-cell. It is a 4-dimensional polytope (or polychoron) derived from the 5-cell. It is the convex hull of two 5-cells in opposite orientations.
Being the dual of a uniform polychoron, it is cell-transitive, consisting of 30 congruent tetragonal disphenoids. In addition, it is vertex-transitive under the group Aut(A4).
Related polytopes: These polytopes are from a set of 9 uniform 4-polytopes constructed from the [3,3,3] Coxeter group.
**Adrenergic**
Adrenergic:
Adrenergic means "working on adrenaline (epinephrine) or noradrenaline (norepinephrine)" (or on their receptors). When not further qualified, it is usually used in the sense of enhancing or mimicking the effects of epinephrine and norepinephrine in the body.
Adrenergic:
Adrenergic nervous system, a part of the autonomic nervous system that uses epinephrine or norepinephrine as its neurotransmitter.
Regarding proteins:
Adrenergic receptor, a receptor type for epinephrine and norepinephrine; subtypes include α1, α2, β1, β2, and β3 receptors.
Adrenergic transporter (norepinephrine transporter), a protein transporting norepinephrine from the synaptic cleft into nerve cells.
Regarding pharmaceutical drugs:
Adrenergic receptor agonist, a type of drug activating one or more subtypes of adrenergic receptors. This includes drugs regulating blood pressure and antiasthmatic drugs.
Adrenergic:
Adrenergic receptor antagonist, a type of drug blocking one or more subtypes of adrenergic receptors. This mainly includes drugs lowering blood pressure.
Adrenergic reuptake inhibitor, a type of drug blocking the norepinephrine transporter. This includes antidepressants and drugs against ADHD.
**Hierarchical clustering of networks**
Hierarchical clustering of networks:
Hierarchical clustering is one method for finding community structures in a network. The technique arranges the network into a hierarchy of groups according to a specified weight function. The data can then be represented in a tree structure known as a dendrogram. Hierarchical clustering can either be agglomerative or divisive depending on whether one proceeds through the algorithm by adding links to or removing links from the network, respectively. One divisive technique is the Girvan–Newman algorithm.
Algorithm:
In the hierarchical clustering algorithm, a weight Wij is first assigned to each pair of vertices (i,j) in the network. The weight, which can vary depending on implementation (see section below), is intended to indicate how closely related the vertices are. Then, starting with all the nodes in the network disconnected, begin pairing nodes in order of decreasing weight between the pairs (in the divisive case, start from the original network and remove links in order of decreasing weight). As links are added, connected subsets begin to form. These represent the network's community structures. The components at each iterative step are always a subset of other structures. Hence, the subsets can be represented using a tree diagram, or dendrogram. Horizontal slices of the tree at a given level indicate the communities that exist above and below a value of the weight.
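The sketch below illustrates the agglomerative variant of this procedure on a tiny network. The weight function used (number of shared neighbours) is only one of many possible choices, as the next section discusses; the example graph and the union-find bookkeeping are invented purely for illustration.

```python
from itertools import combinations

def shared_neighbour_weight(adj, i, j):
    # Illustrative weight W_ij: the number of common neighbours of i and j.
    return len(adj[i] & adj[j])

def agglomerative_communities(adj):
    # Start with every node disconnected, then add links in order of
    # decreasing weight, recording each merge of two components (the dendrogram).
    parent = {v: v for v in adj}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    merges = []
    pairs = sorted(combinations(adj, 2),
                   key=lambda p: shared_neighbour_weight(adj, *p),
                   reverse=True)
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:                      # this pair joins two separate components
            parent[rj] = ri
            merges.append((shared_neighbour_weight(adj, i, j), i, j))
    return merges

# Two triangles joined by a single edge: the final merge joins the two triangle
# communities and has weight 0, so cutting the dendrogram before it recovers them.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(agglomerative_communities(adj))
```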
Weights:
There are many possible weights for use in hierarchical clustering algorithms. The specific weight used is dictated by the data as well as considerations for computational speed. Additionally, the communities found in the network are highly dependent on the choice of weighting function. Hence, when compared to real-world data with a known community structure, the various weighting techniques have been met with varying degrees of success.
Weights:
Two weights that have been used previously with varying success are the number of node-independent paths between each pair of vertices and the total number of paths between vertices weighted by the length of the path. One disadvantage of these weights, however, is that both weighting schemes tend to separate single peripheral vertices from their rightful communities because of the small number of paths going to these vertices. For this reason, their use in hierarchical clustering techniques is far from optimal. Edge betweenness centrality has been used successfully as a weight in the Girvan–Newman algorithm. This technique is similar to a divisive hierarchical clustering algorithm, except the weights are recalculated with each step. The change in modularity of the network with the addition of a node has also been used successfully as a weight. This method provides a computationally less-costly alternative to the Girvan–Newman algorithm while yielding similar results.
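For the divisive, betweenness-based approach mentioned above, NetworkX ships a ready-made Girvan–Newman implementation; the snippet below runs it on the library's bundled karate-club benchmark graph. This is just a usage sketch; the graph choice and the decision to stop after the first split are arbitrary.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()          # a small benchmark network bundled with NetworkX

# girvan_newman() yields successively finer divisive splits, recomputing edge
# betweenness after each edge removal.
splits = girvan_newman(G)
first_split = next(splits)          # the first division into two communities
print([sorted(c) for c in first_split])
```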
**The Game (mind game)**
The Game (mind game):
The Game is a mind game in which the objective is to avoid thinking about The Game itself. Thinking about The Game constitutes a loss, which must be announced each time it occurs. It is impossible to win most versions of The Game. Depending on the variation, it is held that the whole world, or all those who are aware of the game, are playing it at all times. Tactics have been developed to increase the number of people who are aware of The Game, and thereby increase the number of losses.
Origin:
The origins of The Game are uncertain. The most common hypothesis is that The Game derives from another mental game, Finchley Central. While the original version of Finchley Central involves taking turns to name stations, in 1976 some members of the Cambridge University Science Fiction Society (CUSFS) developed a variant where the first person to think of the titular station loses. The game in this form demonstrates ironic processing, in which attempts to suppress or avoid certain thoughts make those thoughts more common or persistent than they would be at random. How this became simplified into The Game is unknown; one hypothesis is that once it spread outside the Greater London area, among people who are less familiar with London stations, it morphed into its self-referential form. The creators of "LoseTheGame.net", a website which aims to catalogue information relating to the phenomenon, have received messages from multiple former members of the CUSFS commenting on the similarity between the Finchley Central variant and the modern Game. The first known reference to The Game is a blog post from 2002 – the author states that they "found out about it online about 6 months ago". The Game is most commonly spread through the internet, such as via Facebook or Twitter, or by word of mouth.
Gameplay:
There are three commonly reported rules to The Game: Everyone in the world is playing The Game. (This is alternatively expressed as, "Everybody in the world who knows about The Game is playing The Game" or "You are always playing The Game.") A person cannot refuse to play The Game; it does not require consent to play and one can never stop playing.
Gameplay:
Whenever one thinks about The Game, one loses.
Gameplay:
Losses must be announced. This can be verbally, with a phrase such as "I just lost The Game", or in any other way: for example, via Facebook or other social media. Some people may have ways to remind others of The Game. The definition of "thinking about The Game" is not always clear. If one discusses The Game without realizing that they have lost, this may or may not constitute a loss. If someone says "What is The Game?" before understanding the rules, whether they have lost is up for interpretation. According to some interpretations, one does not lose when someone else announces their loss, although the second rule implies that one loses regardless of what made them think about The Game. After a player has announced a loss, or after one thinks of The Game, some variants allow for a grace period of between three seconds and thirty minutes to forget about the game, during which the player cannot lose the game again.
Gameplay:
Strategies: Strategies focus on making others lose The Game. Common methods include saying "The Game" out loud or writing about The Game on a hidden note, in graffiti in public places, or on banknotes. Associations may be made with The Game, especially over time, so that one thing inadvertently causes one to lose. Some players enjoy thinking of elaborate pranks that will cause others to lose the game. Other strategies involve merchandise: T-shirts, buttons, mugs, posters, and bumper stickers have been created to advertise The Game. The Game is also spread via social media websites such as Facebook and Twitter.
Gameplay:
Possible endings: The common rules do not define a point at which The Game ends. However, some players state that The Game ends when the Prime Minister of the United Kingdom announces on television that "The Game is up." The March 3, 2008 edition of the webcomic xkcd declares its reader the winner of the game, and therefore free from the game's "mindvirus."
Reception:
The Game has been described as challenging and fun to play, and as pointless, childish, and infuriating. In some Internet forums, such as Something Awful and GameSpy, and in several schools, The Game has been banned. The 2009 Time 100 poll was manipulated by users of 4chan, forming an acrostic for "marblecake also the game" out of the top 21 people's names.
**Advanced Programming in the Unix Environment**
Advanced Programming in the Unix Environment:
Advanced Programming in the Unix Environment is a computer programming book by W. Richard Stevens describing the application programming interface of the UNIX family of operating systems. The book illustrates UNIX application programming in the C programming language.
Advanced Programming in the Unix Environment:
The first edition of the book was published by Addison-Wesley in 1992. It covered programming for the two popular families of the Unix operating system, the Berkeley Software Distribution (in particular 4.3 BSD and 386BSD) and AT&T's UNIX System V (particularly SVR4). The book covers system calls for operations on single file descriptors, special calls like ioctl that operate on file descriptors, and operations on files and directories. It covers the stdio section of the C standard library, and other parts of the library as needed. Several chapters concern the APIs that control processes, process groups, daemons, inter-process communication, and signals. One chapter is devoted to the Unix terminal control and another to the pseudo terminal concept and to libraries like termcap and curses that build atop it. Stevens adds three chapters giving more concrete examples of Unix programming: he implements a database library and communicates with a PostScript printer and a modem. The book does not cover network programming: this is the subject of Stevens's 1990 book UNIX Network Programming and his subsequent three-volume TCP/IP Illustrated.
Advanced Programming in the Unix Environment:
Stevens died in 1999, leaving a second edition incomplete. With the increasing popularity and technical diversification of Unix derivatives, and largely compatible systems like the Linux environment, the code and coverage of Stevens's original became increasingly outdated. Working with Stevens's unfinished notes, Stephen A. Rago completed a second edition which Addison-Wesley published in 2005. This added support for FreeBSD, Linux, Sun's Solaris, and Apple's Darwin, and added coverage of multithreaded programming with POSIX Threads. The second edition features a foreword by Dennis Ritchie and a Unix-themed Dilbert strip by Scott Adams.
Advanced Programming in the Unix Environment:
The book has been widely lauded as well written, well crafted, and comprehensive. It received a "hearty recommendation" in a Linux Journal review. OSNews describes it as "one of the best tech books ever published" in a review of the second edition.
Editions:
Advanced Programming in the UNIX Environment, first edition, W. Richard Stevens, Addison-Wesley, 1992, ISBN 978-0-201-56317-7.
Advanced Programming in the UNIX Environment, second edition, W. Richard Stevens and Stephen A. Rago, Addison-Wesley, 2005, ISBN 978-0-201-43307-4.
Advanced Programming in the UNIX Environment, third edition, W. Richard Stevens and Stephen A. Rago, Addison-Wesley, 2013, ISBN 978-0-321-63773-4.
**Esophageal spasm**
Esophageal spasm:
Esophageal spasm is a disorder of motility of the esophagus. There are two types of esophageal spasm: diffuse or distal esophageal spasm (DES), in which the esophageal contractions are uncoordinated, and nutcracker esophagus (NE), also known as hypertensive peristalsis, in which the contractions are coordinated but of excessive amplitude. Both conditions are linked to gastroesophageal reflux disease (GERD). DES and nutcracker esophagus present similarly and may require esophageal manometry for differentiation. When the coordinated muscle contractions are irregular or uncoordinated, the condition may be called diffuse esophageal spasm; these spasms can prevent food from reaching the stomach, leaving it stuck in the esophagus. At other times the coordinated muscle contraction is very powerful, which is called nutcracker esophagus; these contractions move food through the esophagus but can cause severe pain.
Signs and symptoms:
The symptoms may include trouble swallowing, regurgitation, chest pain, heartburn, globus pharyngis (which is a feeling that something is stuck in the throat) or a dry cough.
Causes:
It is not clear what causes esophageal spasms. Sometimes esophageal spasms start when someone eats hot or cold foods or drinks. However, they can also occur without eating or drinking. The increased release of acetylcholine may also be a factor, but the triggering event is not known. Spasms may also be the result of a food intolerance.
Diagnosis:
The diagnosis is generally confirmed by esophageal manometry. DES is present when more than a fifth of swallows results in distal esophageal contractions. NE is present if the average strength of the contractions of the distal esophagus is greater than 180 mmHg but the contraction of the esophagus is otherwise normal.
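The two manometric thresholds just described can be summarized as a simple decision rule. The snippet below is only a toy encoding of those quoted cut-offs for illustration, not a clinical tool, and the example numbers are invented.

```python
def classify_manometry(abnormal_swallows, total_swallows, mean_distal_amplitude_mmHg):
    """Toy illustration of the two manometric thresholds described above."""
    if abnormal_swallows / total_swallows > 1 / 5:
        return "consistent with distal esophageal spasm (DES)"
    if mean_distal_amplitude_mmHg > 180:
        return "consistent with nutcracker esophagus (NE)"
    return "neither criterion met"

print(classify_manometry(3, 10, 150))   # 30% of swallows abnormal -> DES criterion
print(classify_manometry(1, 10, 210))   # high average amplitude   -> NE criterion
```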
Differential diagnosis: Often, symptoms that may suggest esophageal spasm are the result of another condition such as food intolerance, gastroesophageal reflux disease (GERD) or achalasia. The symptoms can commonly be mistaken for heart palpitations.
Treatment:
Since esophageal spasms are often associated with other disorders, management in these cases involve attempts to correct the underlying problem. Medications may include use of calcium channel blockers (CCBs) and nitrates. Tricyclic antidepressants (TCA) and sildenafil can be used as alternative treatment options. If caused by food allergy, an elimination diet may be necessary.
Procedures: If medical therapy fails, either botulinum toxin injection or surgical myotomy may be tried in distal esophageal spasm.
**Pollinator exclusion experiment**
Pollinator exclusion experiment:
Pollinator exclusion experiments are experiments used by ecologists to determine the effectiveness of putative plant pollination vectors. Essentially, certain pollinators are prevented from visiting certain flowers, and observations are then made on which flowers develop seeds. If the exclusion of a certain class of visitor prevents or greatly reduces flower fertilisation rates, then it can be concluded that that class of visitor plays an important role in pollination.
Pollinator exclusion experiment:
There are various methods for excluding pollinators. A cage may exclude nectarivorous birds and mammals but allow access by insects. A net may exclude all but the smallest animals, yet permit wind-pollination. Insect repellent may prevent visits by insects whilst allowing access by birds and mammals. Bags may be used to prevent all but autogamous pollination. Bagging flowers only during the day or night makes it possible to exclude diurnal or nocturnal visitors respectively.
**AADAC**
AADAC:
Arylacetamide deacetylase is an enzyme that in humans is encoded by the AADAC gene. Microsomal arylacetamide deacetylase competes against the activity of cytosolic arylamine N-acetyltransferase, which catalyzes one of the initial biotransformation pathways for arylamine and heterocyclic amine carcinogens.
**Fluoride toxicity**
Fluoride toxicity:
Fluoride toxicity is a condition in which there are elevated levels of the fluoride ion in the body. Although fluoride is safe for dental health at low concentrations, sustained consumption of large amounts of soluble fluoride salts is dangerous. For sodium fluoride (NaF), a common salt of fluoride, the lethal dose for most adult humans is estimated at 5 to 10 g (which is equivalent to 32 to 64 mg elemental fluoride/kg body weight). Ingestion of fluoride can produce gastrointestinal discomfort at doses at least 15 to 20 times lower (0.2–0.3 mg/kg, or 10 to 15 mg for a 50 kg person) than lethal doses. Although it is helpful topically for dental health in low dosage, chronic ingestion of fluoride in large amounts interferes with bone formation. Accordingly, the most widespread examples of fluoride poisoning arise from consumption of groundwater that is abnormally fluoride-rich.
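The equivalence between the two figures above can be checked with a quick conversion from grams of NaF to milligrams of elemental fluoride per kilogram of body weight. The 70 kg adult body weight used below is an assumption for illustration, not a value from the text.

```python
# Approximate molar masses in g/mol.
M_F, M_Na = 19.00, 22.99
fluoride_fraction = M_F / (M_F + M_Na)     # ~45% of NaF by mass is fluoride

body_weight_kg = 70                        # assumed typical adult
for grams_naf in (5, 10):
    mg_fluoride = grams_naf * 1000 * fluoride_fraction
    print(f"{grams_naf} g NaF -> {mg_fluoride / body_weight_kg:.0f} mg F per kg body weight")
# Prints roughly 32 and 65 mg/kg, consistent with the 32-64 mg/kg range quoted above.
```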
Recommended levels:
For optimal dental health, the World Health Organization recommends a level of fluoride from 0.5 to 1.0 mg/L (milligrams per liter), depending on climate. Fluorosis becomes possible above this recommended dosage. As of 2015, the United States Health and Human Services Department recommends a maximum of 0.7 milligrams of fluoride per liter of water – updating and replacing the previous recommended range of 0.7 to 1.2 milligrams issued in 1962. The new recommended level is intended to reduce the occurrence of dental fluorosis while maintaining water fluoridation.
Toxicity:
Chronic: In India, an estimated 60 million people have been poisoned by well water contaminated by excessive fluoride, which is dissolved from the granite rocks. The effects are particularly evident in the bone deformities of children. Similar or larger problems are anticipated in other countries including China, Uzbekistan, and Ethiopia.
Toxicity:
Acute: Historically, most cases of acute fluoride toxicity have followed accidental ingestion of sodium fluoride based insecticides or rodenticides. Currently, in advanced countries, most cases of fluoride exposure are due to the ingestion of dental fluoride products. Other sources include glass-etching or chrome-cleaning agents like ammonium bifluoride or hydrofluoric acid, industrial exposure to fluxes used to promote the flow of a molten metal on a solid surface, volcanic ejecta (for example, in cattle grazing after an 1845–1846 eruption of Hekla and the 1783–1784 flood basalt eruption of Laki), and metal cleaners. Malfunction of water fluoridation equipment has happened several times, including a notable incident in Alaska.
Occurrence:
Organofluorine compounds: Twenty percent of modern pharmaceuticals contain fluorine. These organofluorine compounds are not sources of fluoride poisoning, as the carbon–fluorine bond is too strong to release fluoride.
Occurrence:
Fluoride in toothpaste: Children may experience gastrointestinal distress upon ingesting excessive amounts of flavored toothpaste. Between 1990 and 1994, over 628 people, mostly children, were treated after ingesting too much fluoride-containing toothpaste. "While the outcomes were generally not serious," gastrointestinal symptoms appear to be the most common problem reported. However, given the low concentration of fluoride present in dental products, this is potentially due to consumption of other major components.
Occurrence:
Fluoride in drinking water: Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are heavily contaminated with arsenic or fluoride. These trace elements derive mainly from leaching of minerals. Maps are available of locations of potentially problematic wells via the Groundwater Assessment Platform (GAP).
Effects:
Excess fluoride consumption has been studied as a factor in the following areas. Brain: Some research has suggested that high levels of fluoride exposure may adversely affect neurodevelopment in children, but the evidence is of insufficient quality to allow any firm conclusions to be drawn.
Effects:
Bones: Whilst fluoridated water is associated with decreased levels of fractures in a population, toxic levels of fluoride have been associated with a weakening of bones and an increase in hip and wrist fractures. The U.S. National Research Council notes studies of fractures at fluoride levels of 1–4 mg/L suggesting a dose-response relationship, but states that the evidence is "suggestive but inadequate for drawing firm conclusions about the risk or safety of exposures at [2 mg/L]".: 170 Consumption of fluoride at levels beyond those used in fluoridated water for a long period of time causes skeletal fluorosis. In some areas, particularly the Asian subcontinent, skeletal fluorosis is endemic. It is known to cause irritable-bowel symptoms and joint pain. Early stages are not clinically obvious, and may be misdiagnosed as (seronegative) rheumatoid arthritis or ankylosing spondylitis.
Effects:
Kidney: Fluoride induced nephrotoxicity is kidney injury due to toxic levels of serum fluoride, commonly due to release of fluoride from fluorine-containing drugs such as methoxyflurane. Within the recommended dose, no effects are expected, but chronic ingestion in excess of 12 mg/day is expected to cause adverse effects, and an intake that high is possible when fluoride levels are around 4 mg/L.: 281 Those with impaired kidney function are more susceptible to adverse effects.: 292 The kidney injury is characterised by failure to concentrate urine, leading to polyuria, and subsequent dehydration with hypernatremia and hyperosmolarity. Inorganic fluoride inhibits adenylate cyclase activity required for antidiuretic hormone effect on the distal convoluted tubule of the kidney. Fluoride also stimulates intrarenal vasodilation, leading to increased medullary blood flow, which interferes with the counter current mechanism in the kidney required for concentration of urine.
Effects:
Fluoride induced nephrotoxicity is dose dependent, typically requiring serum fluoride levels exceeding 50 micromoles per liter (about 1 ppm) to cause clinically significant renal dysfunction, which is likely when the dose of methoxyflurane exceeds 2.5 MAC hours. (Note: "MAC hour" is the multiple of the minimum alveolar concentration (MAC) of the anesthetic used times the number of hours the drug is administered, a measure of the dosage of inhaled anesthetics.) Elimination of fluoride depends on glomerular filtration rate. Thus, patients with chronic kidney disease will retain serum fluoride for a longer period of time, leading to increased risk of fluoride induced nephrotoxicity.
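The "about 1 ppm" figure quoted for the 50 micromole per liter threshold follows from a simple unit conversion using fluoride's molar mass; the sketch below just performs that arithmetic.

```python
molar_mass_F = 19.0                  # g/mol, approximate
micromol_per_L = 50
mg_per_L = micromol_per_L * 1e-6 * molar_mass_F * 1000
print(mg_per_L)                      # ~0.95 mg/L, i.e. about 1 ppm in dilute aqueous solution
```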
Effects:
Teeth: The only generally accepted adverse effect of fluoride at levels used for water fluoridation is dental fluorosis, which can alter the appearance of children's teeth during tooth development; this is mostly mild and usually only an aesthetic concern. Compared to unfluoridated water, fluoridation to 1 mg/L is estimated to cause fluorosis in one of every 6 people (range 4–21), and to cause fluorosis of aesthetic concern in one of every 22 people (range 13.6–∞).
Effects:
Thyroid: Fluoride's suppressive effect on the thyroid is more severe when iodine is deficient, and fluoride is associated with lower levels of iodine. Thyroid effects in humans were associated with fluoride levels of 0.05–0.13 mg/kg/day when iodine intake was adequate and 0.01–0.03 mg/kg/day when iodine intake was inadequate.: 263 Its mechanisms and effects on the endocrine system remain unclear.: 266 Testing on mice shows that the medication gamma-Aminobutyric acid (GABA) can be used to treat fluoride toxicity of the thyroid and return normal function.
Effects:
Effects on aquatic organisms: Fluoride accumulates in the bone tissues of fish and in the exoskeleton of aquatic invertebrates. The mechanism of fluoride toxicity in aquatic organisms is believed to involve the action of fluoride ions as enzymatic poisons. In soft waters with low ionic content, invertebrates and fishes may develop adverse effects from fluoride concentrations as low as 0.5 mg/L. Negative effects are less in hard waters and seawaters, as the bioavailability of fluoride ions is reduced with increasing water hardness. Seawater contains fluoride at a concentration of 1.3 mg/L.
Mechanism:
Like most soluble materials, fluoride compounds are readily absorbed by the stomach and intestines, and excreted through the urine. Urine tests have been used to ascertain rates of excretion in order to set upper limits in exposure to fluoride compounds and associated detrimental health effects. Ingested fluoride forms hydrofluoric acid in the stomach and initially acts locally on the intestinal mucosa.
**Dynasty Warriors 6**
Dynasty Warriors 6:
Dynasty Warriors 6 (真・三國無双5, Shin Sangoku Musō 5) is a hack and slash video game set in Ancient China, during the period of the Three Kingdoms (around 200 AD). This game is the sixth official installment in the Dynasty Warriors series, developed by Omega Force and published by Koei. The game was released on November 11, 2007 in Japan; the North American release was February 19, 2008, while the European release date was March 7, 2008. A version of the game was bundled with the 40GB PlayStation 3 in Japan. Dynasty Warriors 6 was also released for Windows in July 2008. A version for PlayStation 2 was released in October and November 2008 in Japan and North America respectively. An expansion, titled Dynasty Warriors 6: Empires, was unveiled at the 2008 Tokyo Game Show and released in May 2009.
Gameplay:
This installment varies greatly from past games in the series. One of the game's key additions is the Renbu system, a new way for characters to build up their attack combos. In previous installments in the series, combos were affected by the quality of weapon the character was wielding, with more powerful weapons allowing characters longer, more elaborate and often more powerful consecutive attacks. The Renbu system replaces this with a gauge that gradually fills as the player performs attacks. Performing attacks and dealing damage to the enemy fills the Renbu gauge, eventually earning a new rank/level, while taking damage and not attacking for a time drains the gauge; if the player takes a lot of damage or goes for a long time without inflicting damage, the gauge may even drop back down to the previous level. However, even at Renbu Rank 1, characters are able to perform non-ending combos on the enemy. Without unlocking Renbu Ranks 3 and Infinite from the skill tree, players can only progress to Renbu Rank 2 (with the exception of Rank Infinite, which can be acquired temporarily by collecting a certain item on the battlefield).
Gameplay:
Another major addition is the skill tree, from which characters can earn higher Renbu Ranks, special abilities and improve their attributes. As the progression of the skill tree moves from left to right, those on the right side of the tree are harder to unlock than those on the left. Typically the one which unlocks Infinite Renbu is on the farthest right.
Gameplay:
Unique movesets for each character have been largely reduced. Only characters who have Musou Mode receive original movesets (with the exception of Diao Chan); the rest of the characters, playable only in Free Mode, have cloned movesets based on the Musou Mode characters with altered properties (with the exception of Xiao Qiao, who retains her fan moveset). Due to the addition of the Renbu system, the traditional "fourth weapon" from previous games has been removed, with the three normal weapons no longer being quality-based. Each weapon obtained has random stats and effects, and the "weight system" from the previous game has been replaced by weapon categories: Standard (default type), Strength (greater attack power at the cost of the Renbu Gauge being kept for a smaller amount of time), and Skill (greater attack speed with low attack power). In addition to the new weapon system, it is now possible to block from any direction; for example, if a character is attacked from behind while blocking, they will rotate their body with their weapon in front of them to guard against the enemy's attack. This eliminates the need to quickly stop blocking, change direction, and press the guard button again. Unlike previous games, horses can be found by obtaining saddles randomly dropped from boxes or beaten officers. These horses can gain levels and skills, and some can even change into the legendary Red Hare, although this is very rare.
Gameplay:
The Musou Token, which enabled the use of Musou Rage, has been removed. It is replaced by a Tome item drop which allows the use of unique special attacks. There are five types of attacks: Swift Attack (increases the player's stats), Volley (launches waves of arrows), Fire (sets eruptions of fire), True Speed (boosts the player's speed), and Rockfall (launches giant boulders from above).
Gameplay:
Dueling from Dynasty Warriors 4 returns, but has been revamped; duels now take place on the battlefield and the nearby soldiers will circle around the two fighters, and other officers may jump into the circle, as opposed to the duel taking place in an arena that appears out of nowhere.
Gameplay:
Bases have been altered too; they are bigger, and where the player previously had to defeat a defense captain in order to open the outer gate to a base, they must now simply break it down with attacks. There is also a new corporal unit which guards bases. Defeating troops and corporals within the base reduces the base's defense. When the defense of the base drops to zero, the player has claimed the base. Defeating a corporal is worth defeating 20 troops, while defeating the guard captain will automatically capture the base.
Gameplay:
Two new 'innovations' to the series are the abilities to swim and climb ladders. Ladders mean that the player can now climb onto castle battlements in scenarios such as the Battle of Hu Lao Gate, and dispose of enemy ballistas and the new 'guard' unit. Swimming ties in with the improvements to enemy AI, allowing enemies to travel across rivers and other bodies of water in order to attack the player or allied bases; swimming is now a part of scenarios such as the Battle of Fan Castle.
Characters:
The original game features a total of 41 playable characters, a step down from the previous installment in the series, which featured 48 playable characters. The seven removed characters are Da Qiao, Jiang Wei, Meng Huo, Pang De, Xing Cai, Zhu Rong, and Zuo Ci. Other than brief mentions in cutscenes and character biographies in-game, they otherwise do not make an appearance in the game at all. Unlike previous games which featured Musou Modes for all characters, only seventeen of the playable characters received stories, while the others are playable only in Free Mode and Challenge Mode. Dynasty Warriors 6: Special adds Musou Mode for six more characters, while the PSP port of the game adds Meng Huo back to the roster, bringing the character count to 42.
Characters:
* Denotes characters added through expansion titlesBold denotes default characters
Reception:
The Xbox 360 and PlayStation 3 versions of the game received "mixed" reviews and the PlayStation 2 version received "unfavorable" reviews, according to video game review aggregator Metacritic. In Japan, Famitsu gave the PlayStation 3 and Xbox 360 versions a score of one eight, one nine, and two eights for a total of 33 out of 40. GameSpot nominated Dynasty Warriors 6 for 'least improved sequel' in their 2008 award show. Ryan Clements of IGN said of the Xbox 360 version, "Dynasty Warriors 6 is not a good looking game, and it performs even worse on the PS3 than on the 360 (even when you opt to install the game data)." He did note that "Dynasty Warriors 6 does have a number of cool things to note. The amount of leveling up you can do is fairly impressive and each character's campaign takes at least a few hours to work through, providing you with quite a lot of content (despite the repetition)." Amanda L. Kondolojy of CheatCodeCentral gave the game one of its better reviews, scoring it at 3.4/5. Kondolojy enjoyed the game, saying that "One aspect of Dynasty Warriors 6 Empires that was surprisingly fun to tinker with was the character creator. Although DW5 Empires had a warlord creator, DW6 Empires gives you more creative control over your newly-made character. In addition to having a wide variety of costumes and customizable features, you can also integrate your character into the main Empire mode as a vagrant, and can work your way up to become leader of the land."
Expansions:
PlayStation 2 and PlayStation Portable versions: Dynasty Warriors 6: Special (真・三國無双5 Special) was released on October 2, 2008 on the PlayStation 2 in Japan and November 17, 2008 in North America. In this game, Musou modes for Ma Chao, Yue Ying, Cao Pi, Zhang He, Taishi Ci, and Ling Tong were added, and those six characters received new weapons and movesets (rather than being clones). There are also five new stages introduced in this game. The swimming and dueling abilities were removed, however. The graphics are also significantly reduced and the game suffers from heavy slowdown, most likely due to memory limitations.
Expansions:
This version of the game was also released to the PlayStation Portable on September 17, 2009. Likely to coincide with the inclusion of Meng Huo in the Empires expansion, he was additionally added as a Free Mode character in this game.
Expansions:
Dynasty Warriors 6: Empires was released May 11, 2009 in Japan, June 23, 2009 in North America, and June 26 in Europe for PlayStation 3 and Xbox 360. Like all other Empires expansions, the basic premise of the game is to become a leader whose goal is to conquer and maintain all regions of China. However, the player can also become a Vagrant (unaligned wanderer) or a vassal serving a lord, in addition to becoming a ruler. The player can step down from any force at any time, betraying their liege, or defecting to another force. The player can also make oaths of friendship with fellow officers and marry other characters.
Expansions:
A weapon level-up system similar to that of Dynasty Warriors 4 is introduced. The player can equip various skills and abilities to the weapons. The Renbu system also returns, although it is now merely an element determined by the character's weapons. Meng Huo, who was cut from the original game, returns with new weapons, and seven new stages are added. The game also keeps all character changes and new stages exclusive to Dynasty Warriors 6 (PS2/PSP version).
Expansions:
The Create Character option from Dynasty Warriors 4 returns, greatly revamped; players are given much more freedom in creating characters and can create up to 100 characters. Free Mode has been cut from this game, however, as the game opted for a fuller and more rounded Empire Mode. Additionally, the game supports downloadable content, which mainly includes new costumes for edit characters and music.
Expansions:
Dynasty Warriors 6: Empires received "mixed" reviews on both platforms according to video game review aggregator Metacritic. Kevin VanOrd of GameSpot said that "The combat is still dreadfully repetitive," "The visuals are still ugly," and "The sound effects and voice acting are still awful." VanOrd went on to say of Empires, "Environments are bland and lifeless; water looks awful; and character models, while clearly upgraded from Dynasty Warriors 5 Empires, still look primitive by today's standards," and gave it 5.5/10.
**ISO/IEC 15693**
ISO/IEC 15693:
ISO/IEC 15693 is an ISO/IEC standard for vicinity cards, i.e. cards which can be read from a greater distance as compared with proximity cards. Such cards can normally be read out by a reader without being powered themselves, as the reader will supply the necessary power to the card over the air (wireless).
ISO/IEC 15693 systems operate at the 13.56 MHz frequency, and offer a maximum read distance of 1–1.5 meters. As vicinity cards have to operate at a greater distance, the necessary magnetic field is less (0.15 to 5 A/m) than that for a proximity card (1.5 to 7.5 A/m).
Example applications:
Ski passes: each pass has a unique ID, and the system knows for how long the pass is valid, etc.
Communication to the card:
Communication from the reader to the card uses amplitude-shift keying with a 10% or 100% modulation index. The data coding is either of the following. 1 out of 4 pulse-position modulation: 2 bits are coded as the position of a 9.44 μs pause in a 75.52 μs symbol time, giving a bit rate of 26.48 kilobits per second. The least-significant bits are sent first.
Communication to the card:
1 out of 256 pulse-position modulation: 8 bits are coded as the position of a 9.44 μs pause in a 4.833 ms symbol time, giving a bit rate of 1.65 kbit/s.
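Both downlink rates follow directly from the symbol timings above, and the quoted timings are simple multiples of the 13.56 MHz carrier period. The short sketch below just reproduces that arithmetic; treat the carrier-period cross-check as an observation about the quoted numbers rather than the standard's normative wording.

```python
fc = 13.56e6  # carrier frequency in Hz

# 1-out-of-4 PPM: 2 bits per 75.52 microsecond symbol
print(2 / 75.52e-6)       # ~26,480 bit/s, i.e. 26.48 kbit/s

# 1-out-of-256 PPM: 8 bits per 4.833 millisecond symbol
print(8 / 4.833e-3)       # ~1,655 bit/s, i.e. 1.65 kbit/s

# The quoted timings appear to be whole numbers of carrier periods:
print(1024 / fc)          # ~75.52e-6 s (symbol time)
print(128 / fc)           # ~9.44e-6 s (pause duration)
```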
Communication to the reader:
The card has two ways to send its data back to the reader. Amplitude-shift keying: 100% modulation index on a 423.75 kHz subcarrier. The data rate can be: low, 6.62 kbit/s (fc/2048), or high, 26.48 kbit/s (fc/512). A logic 0 starts with eight pulses of 423.75 kHz followed by an unmodulated time of 18.88 μs (256/fc); a logic 1 is the other way round. The data frame delimiters are code violations. A start of frame is: an unmodulated time of 56.64 μs (768/fc), 24 pulses of 423.75 kHz, and a logic 1. The end of a frame is: a logic 0, 24 pulses of 423.75 kHz, and an unmodulated time of 56.64 μs. The data are sent using a Manchester code.
Communication to the reader:
Frequency-shift keying: switching between a 423.75 kHz subcarrier (operating frequency divided by 32) and a 484.28 kHz subcarrier (operating frequency divided by 28). The data rate can be: low, 6.67 kbit/s (fc/2032), or high, 26.69 kbit/s (fc/508). A logic 0 starts with eight pulses of 423.75 kHz followed by nine pulses of 484.28 kHz; a logic 1 is the other way round. The data frame delimiters are code violations. A start of frame is: 27 pulses of 484.28 kHz, 24 pulses of 423.75 kHz, and a logic 1. The end of a frame is: a logic 0, 24 pulses of 423.75 kHz, and 27 pulses of 484.28 kHz. The data are sent using a Manchester code.
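All of the card-to-reader subcarrier frequencies and data rates above are integer divisions of the 13.56 MHz operating frequency, which the following sketch simply recomputes.

```python
fc = 13.56e6  # operating frequency in Hz

print(fc / 32)              # ~423.75 kHz subcarrier (ASK, and FSK "low" tone)
print(fc / 28)              # ~484.28 kHz subcarrier (FSK "high" tone)

print(fc / 2048, fc / 512)  # ASK data rates: ~6.62 and ~26.48 kbit/s
print(fc / 2032, fc / 508)  # FSK data rates: ~6.67 and ~26.69 kbit/s
```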
Manufacturer codes:
See ISO/IEC 7816-6.
Code 0x01: Motorola
Code 0x02: ST Microelectronics
Code 0x03: Hitachi
Code 0x04: NXP Semiconductors
Code 0x05: Infineon Technologies
Code 0x06: Cylinc
Code 0x07: Texas Instruments Tag-it
Code 0x08: Fujitsu Limited
Code 0x09: Matsushita Electric Industrial
Code 0x0A: NEC
Code 0x0B: Oki Electric
Code 0x0C: Toshiba
Code 0x0D: Mitsubishi Electric
Code 0x0E: Samsung Electronics
Code 0x0F: Hyundai Electronics
Code 0x10: LG Semiconductors
Code 0x12: WISeKey
Code 0x16: EM Microelectronic-Marin
Code 0x1F: Melexis
Code 0x2B: Maxim Integrated
Code 0x33: AMIC
Code 0x39: Silicon Craft Technology
Code 0x44: GenTag, Inc (USA)
Code 0x45: Invengo Information Technology Co.Ltd
Implementations:
The first byte of the UID should always be 0xE0.
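Combining the UID rule above with the manufacturer table, a reader application can identify a tag's maker from the byte that follows the 0xE0 prefix of the 64-bit UID. The snippet below is an illustrative sketch (with only a subset of the table and a made-up UID), not code from the standard.

```python
# Partial manufacturer table from ISO/IEC 7816-6 (see the list above).
MANUFACTURERS = {
    0x01: "Motorola",
    0x02: "ST Microelectronics",
    0x04: "NXP Semiconductors",
    0x05: "Infineon Technologies",
    0x07: "Texas Instruments Tag-it",
    0x16: "EM Microelectronic-Marin",
}

def manufacturer_of(uid: bytes) -> str:
    """Return the IC manufacturer for a 64-bit ISO/IEC 15693 UID (MSB first)."""
    if len(uid) != 8 or uid[0] != 0xE0:
        raise ValueError("not a valid ISO/IEC 15693 UID")
    return MANUFACTURERS.get(uid[1], f"unknown manufacturer code 0x{uid[1]:02X}")

print(manufacturer_of(bytes.fromhex("E004010203040506")))  # hypothetical UID -> NXP Semiconductors
```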
Products with ISO/IEC 15693 interface:
EEPROM: various manufacturers like ST Microelectronics or NXP offer EEPROMs readable via ISO/IEC 15693.
μController: Texas Instruments offers a small μController entirely powered by the ISO/IEC 15693 reading field and capable of reading a simple temperature sensor, wirelessly providing the value to the reader.
**Folding (DSP implementation)**
Folding (DSP implementation):
Folding is a transformation technique used in DSP architecture implementation for minimizing the number of functional blocks when synthesizing a DSP architecture.
Folding was first developed by Keshab K. Parhi and his students in 1992.
Its concept is contrary to unfolding.
Folding transforms an operation from unit-time processing to N unit-time processing, where N is called the folding factor.
Therefore, multiple identical operations (at most N of them) used in the original system can be replaced with a single operation block in the transformed system.
Thus, over N time units, a functional block in the transformed system can be reused to perform N operations of the original system.
While the folding transformation reduces the number of functional units in the architecture, it needs more memory elements to store the temporary data.
The reason is that the data produced by a shared operation block needs to be distinguished from the N data values produced by the original operations.
Therefore, the number of registers may increase.
Furthermore, additional multiplexers are needed for switching between different operation paths.
Hence, the number of switching elements may also increase.
To counter such issues, the considerations of folding are: how to schedule multiple operations onto an operation block,
and how to schedule the memory elements to reduce the number of registers and multiplexers.
Example:
The following graph shows an example of the folding transformation.
The original DSP system produces y(n) at each time unit.
The transformed DSP system produces y(n) once every 2 time units; each 2-unit period increases n, the index of y, by 1.
The resources used in the original system are 2 adders, while the resources used in the transformed system are 1 adder, 1 register, and 3 multiplexers.
The number of adder functional blocks is therefore reduced.
Algorithm:
The DSP implementation in the folding algorithm is a data flow graph (DFG), which is a graph composed of functional nodes and delay edges.
Another input to the folding algorithm is the folding set, a function that maps an operation unit of the original DFG to an operation of the transformed DFG together with a number n ≤ N indicating the order in which the operation is reused.
Given a DFG, a folding factor N, and a folding set, the transformation performs the following steps.
First, create the folded nodes, which are the nodes in the image of the folding set.
Next, compute the delay elements needed for storing the data that must be kept distinct across different operation cycles, using the equation DF(U→V) = N·w(e) − PU + v − u, where DF is the number of delay elements needed between U and V, the operation units of the original DFG;
w(e) is the number of delay elements on the edge e between U and V in the original DFG; u is the folding order of U in its transformed operation block;
v is the folding order of V in its transformed operation block;
and PU is the number of internal pipeline delays of the transformed operation that executes U. Finally, merge the delay elements to form the data paths between the functional elements of the transformed DFG.
Biquad filter example: The following graph shows an example of the folding algorithm.
The folding set is {Si|j}, where Si is the transformed operator and j is the order of the operation within that operator.
Therefore, the images of the folding set are S1 and S2, representing the adder and the multiplier respectively.
Furthermore, in this example, we use a pipelined adder and a pipelined multiplier, which have 1 and 2 internal delays respectively, as shown in the right graph.
Next, we compute the delay elements for storing the data.
DF(1→2) = 4(1) − 1 + 1 − 3 = 1
DF(1→5) = 4(1) − 1 + 0 − 3 = 0
DF(1→6) = 4(1) − 1 + 2 − 3 = 2
DF(1→7) = 4(1) − 1 + 3 − 3 = 3
DF(1→8) = 4(2) − 1 + 1 − 3 = 5
DF(3→1) = 4(0) − 1 + 3 − 2 = 0
DF(4→2) = 4(0) − 1 + 1 − 0 = 0
DF(5→3) = 4(0) − 2 + 2 − 0 = 0
DF(6→4) = 4(1) − 2 + 0 − 2 = 0
DF(7→3) = 4(1) − 2 + 2 − 3 = 1
DF(8→4) = 4(1) − 2 + 0 − 1 = 1
After computing the delay elements needed, we construct the data path connecting the functional blocks through the corresponding multiplexers; a small script recomputing these values is shown below.
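The sketch below recomputes the DF values listed above from the folding equation. The per-edge parameters (w, the folding orders u and v, and the pipeline delays PU) are read off the worked equations in the text; the biquad-filter graph itself is not reproduced here, so treat the table of edges as given rather than derived.

```python
# DF(U->V) = N*w(e) - P_U + v - u, with N = 4, adder delay 1, multiplier delay 2.
N = 4
P_ADD, P_MUL = 1, 2

# Each edge is (name, w, P_U, u, v), with values taken from the worked equations above.
edges = [
    ("1->2", 1, P_ADD, 3, 1), ("1->5", 1, P_ADD, 3, 0), ("1->6", 1, P_ADD, 3, 2),
    ("1->7", 1, P_ADD, 3, 3), ("1->8", 2, P_ADD, 3, 1), ("3->1", 0, P_ADD, 2, 3),
    ("4->2", 0, P_ADD, 0, 1), ("5->3", 0, P_MUL, 0, 2), ("6->4", 1, P_MUL, 2, 0),
    ("7->3", 1, P_MUL, 3, 2), ("8->4", 1, P_MUL, 1, 0),
]

for name, w, p_u, u, v in edges:
    print(name, N * w - p_u + v - u)
# Expected output: 1, 0, 2, 3, 5, 0, 0, 0, 0, 1, 1 -- matching the list above.
```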
The final graph is shown below, where {i,j} represents the switching instants.
Register minimization: In the above example, if we perform register minimization, we can reduce the number of registers significantly.
The technique for minimizing registers is called lifetime analysis, which analyzes when each data value is produced and when it is finally consumed.
Algorithm:
The time at which a data value is produced is denoted Tinput, and the time at which it is last consumed is denoted Toutput. Tinput = u + PU, where u is the folding order of U and PU is the number of pipelining stages in the functional unit that executes U. Toutput for node U is u + PU + maxV{DF(U→V)}. Therefore, we can perform lifetime analysis on the above example, as in the following table; a short script computing the same intervals follows.
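The sketch below computes those Tinput/Toutput intervals from the folding orders, pipeline delays, and DF values of the previous sketch (the same assumptions apply; node 2 has no outgoing edges in the DF list, so it is omitted). Plotting these intervals as a lifetime chart, as the text does next, shows at most 2 values alive at once.

```python
# DF values per source node, folding orders u, and pipeline delays P as before.
DF_out = {1: [1, 0, 2, 3, 5], 3: [0], 4: [0], 5: [0], 6: [0], 7: [1], 8: [1]}
u = {1: 3, 3: 2, 4: 0, 5: 0, 6: 2, 7: 3, 8: 1}
P = {1: 1, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2, 8: 2}   # adders: 1 delay, multipliers: 2

for node in sorted(DF_out):
    t_in = u[node] + P[node]                      # Tinput = u + P_U
    t_out = t_in + max(DF_out[node])              # Toutput = Tinput + max DF(U->V)
    print(f"node {node}: Tinput = {t_in}, Toutput = {t_out}")
# Node 1, for instance, gives Tinput = 4 and Toutput = 9: its value must be kept
# from cycle 4 to cycle 9, exactly the horizontal line described in the text.
```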
Algorithm:
From the lifetime analysis above, we can determine the minimum number of registers needed. In this case, we construct the lifetime chart corresponding to the lifetime table above.
For node 1, we plot a horizontal line from cycle 4 to 9, indicating that its data needs to be stored from cycle 4 to cycle 9.
In the same way, we can construct the chart indicating how many data values need to be stored in each cycle.
Hence, cycle 6 needs to store 2 data values. The maximum number of data values that need to be stored in this example is 2, so we allocate 2 delay elements for constructing the transformed data path.
After allocating 2 delay elements for storing the temporary data, we need to schedule which register each data value is stored in.
The following table shows the data stored in each register, R1 and R2, such that the number of multiplexers is minimized.
Finally, we can reconstruct the data path with fewer delay elements and switching elements in the folded design.
**D2-MAC**
D2-MAC:
D2-MAC is a satellite television transmission standard, a member of the Multiplexed Analogue Components family. It was created to address D-MAC's bandwidth usage by reducing it further, allowing the system to be used on cable as well as satellite broadcasts. It could carry four high quality (15 kHz bandwidth) sound channels or eight lower quality audio channels. It was adopted by Scandinavian, German and French satellite broadcasts (CNBC Europe, TV3 (Sweden), TV3 (Denmark), EuroSport, NRK 1, TV-Sat 2, TDF 1, TDF 2, etc.). The system was used until July 2006 in Scandinavia and until the mid-1990s for German and French sound channels.
Technical details:
MAC transmits luminance and chrominance data separately in time rather than separately in frequency (as other analog television formats do, such as composite video).
Audio, in a format similar to NICAM was transmitted digitally rather than as an FM sub-carrier.
The MAC standard included a standard scrambling system, EuroCrypt, a precursor to the standard DVB-CSA encryption system.
D2-MAC uses half the data rate of D-MAC (10.125 Mbit/s). D2-MAC also has a reduced vision bandwidth, about 1/2 that of D-MAC.
D2-MAC retains most of the quality of a D-MAC signal—but consumes only 5 MHz of bandwidth.
History and politics:
MAC was developed by the UK's Independent Broadcasting Authority (IBA) and in 1982 was adopted as the transmission format for the UK's forthcoming direct broadcast satellite (DBS) television services (eventually provided by British Satellite Broadcasting). The following year MAC was adopted by the European Broadcasting Union (EBU) as the standard for all DBS. By 1986, despite there being two standards, D-MAC and D2-MAC, favoured by different countries in Europe, an EU Directive imposed MAC on the national DBS broadcasters, to provide a stepping stone from analogue PAL and SECAM formats to the eventual high definition and digital television of the future, with European TV manufacturers in a privileged position to provide the equipment required.
History and politics:
However, the Astra satellite system was also starting up at this time (the first satellite, Astra 1A, was launched in 1989) and it operated outside of the EU's MAC requirements, due to being a non-DBS satellite. Despite further pressure from the EU (including a further Directive originally intended to make MAC provision compulsory in TV sets, and a subsidy to broadcasters to use the MAC format), most broadcasters outside Scandinavia preferred the lower cost of PAL transmission and receiving equipment. In the 2000s, the use of D-MAC and D2-MAC ceased when the satellite broadcasts of the channels concerned changed to DVB-S format.
**AMY2A**
AMY2A:
Pancreatic alpha-amylase is an enzyme that in humans is encoded by the AMY2A gene. Amylases are secreted proteins that hydrolyze 1,4-alpha-glucoside bonds in oligosaccharides and polysaccharides, and thus catalyze the first step in digestion of dietary starch and glycogen. The human genome has a cluster of several amylase genes that are expressed at high levels in either salivary gland or pancreas. This gene encodes an amylase isoenzyme produced by the pancreas.
**Clear aligners**
Clear aligners:
Clear aligners are orthodontic devices that are a transparent, plastic form of dental braces used to adjust teeth. Clear aligners have undergone changes, making assessment of effectiveness difficult. A 2014 systematic review concluded that published studies were of insufficient quality to determine effectiveness. Experience suggests they are effective for moderate crowding of the front teeth, but less effective than conventional braces for several other issues and are not recommended for children. In particular they are indicated for "mild to moderate crowding (1–6 mm) and mild to moderate spacing (1–6 mm)", in cases where there are no discrepancies of the jawbone. They are also indicated for patients who have experienced a relapse after fixed orthodontic treatment. Clear-aligner treatment involves an orthodontist or dentist, or with home-based systems, the person themselves, taking a mold of the patient's teeth, which is used to create a digital tooth scan. The computerized model suggests stages between the current and desired teeth positions, and aligners are created for each stage. Each aligner is worn for 22 hours a day for one or two weeks. These slowly move the teeth into the position agreed between the orthodontist or dentist and the patient. The average treatment time is 13.5 months. Despite patent infringement litigation, no manufacturer has obtained an injunction against another manufacturer.
Uses:
A 2014 systematic review concluded that there is insufficient evidence to determine the effectiveness of these clear aligners. Opinion is that they are likely useful for moderate front-teeth crowding. In those with teeth that are too far forward or backward, or rotated in the socket, the aligners are likely not as effective as conventional braces. More cases of relapse of the anterior teeth have been found with clear aligners compared with conventional braces. A 2013 Cochrane review found no high-quality evidence with respect to the management of the recurrence of lower-front-teeth misalignment following treatment. Clear aligners are more noticeable than lingual braces, but they can be removed, which makes cleaning of the teeth easier, and they are faster for the dentist to apply.
Uses:
Application Treatment begins with taking x-rays and photographs for diagnostic purposes, followed by capturing the patient's bite, teeth, and gums via a bite registration and polyvinyl siloxane impressions or an intra-oral digital scanner. The latter method has greatly increased in popularity in recent years as digital scanning technology has improved. The dentist/orthodontist completes a written evaluation that includes diagnosis and treatment plan. Dental impressions are scanned in order to create a digital 3D representation of the teeth. Technicians move the teeth to the desired location with the program Treat, which creates the stages between the current and desired teeth positions. Anywhere from six to eighty aligners may be needed in the first set. Each aligner moves teeth 0.25 to 0.33 millimeters. Additional and subsequent rounds of aligners, known as "Refinements", may be necessary to achieve desired tooth positions, as clear aligners do not always achieve full movement in the first round. A computer graphic representation of the projected teeth movements, created in the software program ClinCheck, is provided to the doctor and patient for approval or modification before aligners are manufactured. The aligners are modeled using CAD-CAM (computer-aided-design and computer-aided-manufacturing) software and manufactured using a rapid prototyping technique called stereolithography. The molds for the aligners are built in layers using a photo-sensitive liquid resin that cures into a hard plastic when exposed to a laser. The aligners are made from an elastic thermoplastic material that applies pressure to the teeth to move into the aligner's position. Patients who need a tooth rotated or pulled down may have a small, tooth-colored composite attachment bonded onto certain teeth. Since the form-fitted plastic used in clear aligners is not as rigid as the metal used in traditional braces, the flexibility in the material sometimes needs to be compensated for in the areas that require movement. Alternatively, attachments may be used to facilitate movement by changing the shape of the tooth. More attachments can make the aligners less aesthetically pleasing. Reproximation (also called interproximal reduction or IPR and, colloquially, filing or drilling) is sometimes used at the contacts between teeth to allow for a better fit. Each aligner is intended to be worn an optimal 22 hours a day for one to two weeks. On average the treatment process takes 13.5 months, although treatment time varies based on the complexity of the planned teeth movements. The aligner is removed for brushing, flossing and eating. As clear aligners are made from plastic, they can be warped by hot liquids, so patients are advised to limit their intake of hot liquids during treatment to protect the shape of the aligners and keep them from becoming stained. Once the treatment period has concluded, the patient is advised to continue wearing a retainer at night for the foreseeable future. When the Invisalign system was first developed, many of the aligner manufacturing processes were carried out by hand, and computer technicians had to modify each tooth in the computerized model individually.
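As a rough, back-of-the-envelope illustration of the figures above (an assumption-laden sketch rather than a clinical planning method), the number of aligners and the treatment duration can be estimated from the planned tooth movement and the 0.25 to 0.33 mm moved per aligner:

```python
# Rough estimate only: assumes the planned movement is divided evenly across
# aligners and that each aligner is worn for one to two weeks, as described above.
import math

def estimate_aligners(total_movement_mm, per_aligner_mm=(0.25, 0.33), weeks_per_aligner=(1, 2)):
    low = math.ceil(total_movement_mm / per_aligner_mm[1])   # fewest aligners (bigger steps)
    high = math.ceil(total_movement_mm / per_aligner_mm[0])  # most aligners (smaller steps)
    weeks = (low * weeks_per_aligner[0], high * weeks_per_aligner[1])
    return (low, high), weeks

# Hypothetical example: 5 mm of planned crowding correction
aligners, weeks = estimate_aligners(5.0)
print(aligners, weeks)  # roughly (16, 20) aligners and (16, 40) weeks
```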
Brands:
Invisalign Invisalign is manufactured by Align Technology, an American multinational medical-device company. The company's clear aligner system has been used to treat more than 12.2 million patients. The company was founded in 1997 by Zia Chishti. Chishti conceived of the basic design of Invisalign while an adult orthodontics patient. During his treatment with a retainer intended to complete his treatment, he posited that a series of such devices could effect a large final placement in a series of small movements. Sales began in the U.S. in 1999. Orthodontists were resistant to adopting Invisalign at first, in particular because the founders had no orthodontic credentials or expertise, but the product became popular among consumers. As of 2014, 80,000 dentists had been trained how to use it.
Brands:
Orthoclear Zia Chishti was ousted from Align Technology in 2002. In 2005 he developed Orthoclear, a similar product, which resulted in several legal disputes involving allegations of patent infringement, false advertising, defamation and trademark infringement. The case was settled in 2006. Align paid OrthoClear $20 million and OrthoClear agreed to end its operations.
Brands:
ClearCorrect ClearCorrect, LLC, based in Round Rock, Texas, was established in 2006. The company distributes its product throughout the United States, and in 2011 was named the fastest-growing health company in America by Inc. magazine. It has been reported that in 2017 its clear aligner system had been used to treat about 80,000 patients. ClearCorrect was founded in Houston, Texas, by Willis Pumphrey, Jr., a dentist. In 2001, Pumphrey started using Invisalign. He decided to switch to OrthoClear, because of the way OrthoClear manufactured its aligners and because of its reduced lab fees. When manufacture of OrthoClear ceased, Pumphrey had 400 patients in treatment. With no other options available, he started his own company to complete his patients' clear aligner treatment.
Direct to Consumer Brands:
SmileDirectClub SmileDirectClub, based in Nashville, Tennessee, was launched in 2014 as an alternative to in-office clear aligners. The company offers a direct-to-consumer clear aligner service that can be used from home, without having to see a dentist. This allows the company to price aligners at about half the price of its competitors. The company faces criticism from the orthodontic community because no dentist is physically involved in treatment. Similar startups have also launched in different parts of the world.
Direct to Consumer Brands:
Direct to consumer clear aligner services Following SmileDirectClub, multiple companies provide aligner treatment to patients by mailing their aligners and monitoring the treatment through a digital platform. The American Association of Orthodontists, the largest society representing orthodontists in the United States and abroad, released a consumer alert explaining potential risks associated with such services.
Society and culture:
Lawsuits The litigious history in the clear aligner market prompted ClearCorrect to be proactive in addressing patent issues between itself and Align Technology. Align had previously filed a complaint with the U.S. International Trade Commission against OrthoClear Inc. In the end an agreement was made between Align and OrthoClear in which Align paid OrthoClear $20 million for its intellectual property and OrthoClear agreed to stop accepting cases in the United States. In 2009, Align Technology began to require that doctors prescribing Invisalign complete at least ten cases per year and ten hours of training in order to maintain their Invisalign provider status. In January 2010, 20,000 doctors had their certification suspended for not meeting the requirements, but a class action lawsuit regarding providers that paid for training under the original rules resulted in some certifications being reinstated. In February 2009, ClearCorrect filed a declaratory judgment against Align Technology. ClearCorrect claimed that some of Align's patents were invalid, and thus ClearCorrect's product did not infringe on Align's patents. ClearCorrect voluntarily dismissed the suit in April 2009, after Align stated to the court that it had no intention of suing ClearCorrect for patent infringement. On February 28, 2011, Align Technology filed two lawsuits against ClearCorrect. Align alleged that, under California's Unfair Practices Act, ClearCorrect sold products for a price below the average production cost, with the purpose of "destroying competition in the market for clear aligner systems". Align also claimed that ClearCorrect infringed eight of Align's patents. On May 12, 2011, ClearCorrect filed a countersuit against Align Technology, denying allegations that it is infringing on Align patents. In the countersuit, ClearCorrect alleged that Align's allegations are invalid and accused Align of patent misuse and double patenting. The countersuit cited much of the evidence raised in Align's previous patent case against Ormco, which resulted in a federal court ruling that 11 of Align's patent claims were invalid.
Costs:
The cost of clear aligners is typically lower than that of their all-metal counterparts because of the materials and technology used. Prices are influenced less by the duration of the treatment or the extent of the issues requiring correction than by the specific details of the treatment plan, the patient's age at the time of treatment, the location, and the experience of the orthodontist. In countries such as Australia, Canada, the United States and the United Kingdom, prices are fairly similar across providers at circa $2,000/£1,500.
**Subtalar joint**
Subtalar joint:
In human anatomy, the subtalar joint, also known as the talocalcaneal joint, is a joint of the foot. It occurs at the meeting point of the talus and the calcaneus.
The joint is classed structurally as a synovial joint, and functionally as a plane joint.
Structure:
The talus is oriented slightly obliquely on the anterior surface of the calcaneus.
There are three points of articulation between the two bones: two anteriorly and one posteriorly. The three articulations are known as facets, and they are the posterior, middle and anterior facets.
At the anterior and middle talocalcaneal articulations, convex areas of the talus fit onto concave surfaces of the calcaneus.
Structure:
The posterior talocalcaneal articulation is formed by a concave surface of the talus and a convex surface of the calcaneus. The sustentaculum tali forms the floor of the middle facet, and the anterior facet articulates with the head of the talus, and sits lateral and congruent to the middle facet. In some people the middle and anterior facets are joined, giving just one articulation. The posterior facet is the largest of the three, and is separated from the others by the tarsal canal.
Structure:
Ligaments and membranes The main ligament of the joint is the interosseous talocalcaneal ligament, a thick, strong band of two partially joined fibers that bind the talus and calcaneus. It runs through the sinus tarsi, a canal between the articulations of the two bones.
There are four additional ligaments that form weaker connections between the talus and calcaneus.
The anterior talocalcaneal ligament (or anterior interosseous ligament) attaches at the neck of the talus on the front and lateral surfaces to the superior calcaneus.
The short band of the posterior talocalcaneal ligament extends from the lateral tubercle of the talus to the upper medial calcaneus.
The short, strong lateral talocalcaneal ligament connects from the lateral talus under the fibular facet to the lateral calcaneus, and runs parallel to the calcaneofibular ligament.
The medial talocalcaneal ligament extends from the medial tubercle of the talus to the sustentaculum tali on the medial surface of the calcaneus.A synovial membrane lines the capsule of the joint, and the joint is wrapped in a capsule of short fibers that are continuous with the talocalcaneonavicular and calcaneocuboid joints of the foot.
Function:
The joint allows inversion and eversion of the foot, but plays a minimal role in dorsiflexion or plantarflexion of the foot. The centre of rotation of the subtalar joint is thought to be in the region of the middle facet. It is considered a plane synovial joint, also commonly referred to as a gliding joint. It acts as a hinge connecting the talus and calcaneus. There is extensive variation in the inclination from horizontal. The subtalar joint can also be considered a combination of the anatomic subtalar joint discussed above and the talocalcaneal part of the talocalcaneonavicular joint. This is the more common view of the subtalar joint when discussing its movement. When both of these articulations are considered together, the joint allows for pronation and supination of the midfoot to occur.
Pathology:
The subtalar joint is particularly susceptible to arthritis, especially when it has previously been affected by sprains or fractures such as those of the calcaneum or talus. Symptoms of subtalar joint arthritis include pain when walking, loss of motion through the joint's range of motion, and difficulty walking on uneven surfaces. Physical therapy, orthotics, and surgery are the main treatment options.
Pathology:
In flat feet, the joint is typically more horizontal.
**Design sprint**
Design sprint:
A design sprint is a time-constrained, five-phase process that uses design thinking with the aim of reducing the risk when bringing a new product, service or a feature to the market. The process aims to help teams to clearly define goals, validate assumptions and decide on a product roadmap before starting development. It seeks to address strategic issues using interdisciplinary expertise, rapid prototyping, and usability testing. This design process is similar to Sprints in an Agile development cycle.
How it started:
There are multiple origins to the concept of mixing Agile and design thinking. The most popular was developed by a multi-disciplinary team working out of Google Ventures. The initial iterations of the approach were created by Jake Knapp, and popularised by a series of blog articles outlining the approach and reporting on its successes within Google. As it gained industry recognition, the approach was further refined and added to by other Google staff including Braden Kowitz, Michael Margolis, John Zeratsky and Daniel Burka. The approach was later described in a book published by Google Ventures, "Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days".
Possible uses:
Claimed uses of the approach include: Launching a new product or a service.
Extending an existing experience to a new platform.
Existing MVP needing revised User experience design and/or UI Design.
Adding new features and functionality to a digital product.
Opportunities for improvement of a product (e.g. a high rate of cart abandonment).
Opportunities for improvement of a service.
Supporting organizations in their transformation towards new technologies (e.g., AI).
Phases:
The creators of the design sprint approach recommend preparing by picking the proper team, environment, materials and tools, and working with six key 'ingredients'.
Understand: Discover the business opportunity, the audience, the competition, the value proposition, and define metrics of success.
Diverge: Explore, develop and iterate creative ways of solving the problem, regardless of feasibility.
Converge: Identify ideas that fit the next product cycle and explore them in further detail through storyboarding.
Prototype: Design and prepare prototype(s) that can be tested with people.
Test: Conduct 1:1 usability testing with 5-6 people from the product's primary target audience. Ask good questions.
Deliverables:
The main deliverables after the design sprint are:
Answers to a set of vital questions.
Findings from the sprint (notes, user journey maps, storyboards, information architecture diagrams, etc.).
Prototypes.
A report from the usability testing with the findings (backed by testing videos).
A plan for next steps.
Validated or invalidated hypotheses before committing resources to build the solution.
Team:
The suggested ideal number of people involved in the sprint is 4-7, and they include the facilitator, designer, a decision maker (often a CEO if the company is a startup), product manager, engineer and someone from the company's core business departments (Marketing, Content, Operations, etc.).
Variants:
The concept sprint is a fast five-day process for cross-functional teams to brainstorm, define, and model new approaches to business issues. Another common variant is the Service Design Sprint, an approach to design sprints created in 2014 that uses service design tools and mechanics to tackle service innovation.
**Bandage**
Bandage:
A bandage is a piece of material used either to support a medical device such as a dressing or splint, or on its own to provide support to, or restrict the movement of, a part of the body. When used with a dressing, the dressing is applied directly on a wound, and a bandage is used to hold the dressing in place. Other bandages are used without dressings, such as elastic bandages that are used to reduce swelling or provide support to a sprained ankle. Tight bandages can be used to slow blood flow to an extremity, such as when a leg or arm is bleeding heavily.
Bandage:
Bandages are available in a wide range of types, from generic cloth strips to specialized shaped bandages designed for a specific limb or part of the body. Bandages can often be improvised as the situation demands, using clothing, blankets or other material. In American English, the word bandage is often used to indicate a small gauze dressing attached to an adhesive bandage.
Types:
Gauze bandage (common gauze roller bandage) The most common type of bandage is the gauze bandage, a woven strip of material with a Telfa absorbent barrier to prevent adhering to wounds. A gauze bandage can come in any number of widths and lengths and can be used for almost any bandage application, including holding a dressing in place.
Adhesive bandage
Liquid bandage
Compression bandage The term 'compression bandage' describes a wide variety of bandages with many different applications.
Types:
Short stretch compression bandages are applied to a limb (usually for treatment of lymphedema or venous ulcers). This type of bandage is capable of shortening around the limb after application and is therefore not exerting ever-increasing pressure during inactivity. This dynamic is called resting pressure and is considered safe and comfortable for long-term treatment. Conversely, the stability of the bandage creates a very high resistance to stretch when pressure is applied through internal muscle contraction and joint movement. This force is called working pressure.
Types:
Long stretch compression bandages have long stretch properties, meaning their high compressive power can be easily adjusted. However, they also have a very high resting pressure and must be removed at night or if the patient is in a resting position.
Types:
Triangular bandage Also known as a cravat bandage, a triangular bandage is a piece of cloth put into a right-angled triangle, and often provided with safety pins to secure it in place. It can be used fully unrolled as a sling, folded as a normal bandage, or for specialized applications, as on the head. One advantage of this type of bandage is that it can be makeshift and made from a fabric scrap or a piece of clothing. The Boy Scouts popularized use of this bandage in many of their first aid lessons, as a part of the uniform is a "neckerchief" that can easily be folded to form a cravat.
Types:
Tube bandage A tube bandage is applied using an applicator, and is woven in a continuous circle. It is used to hold dressings or splints on to limbs, or to provide support to sprains and strains, so that it stops bleeding.
Kirigami bandage A new type of bandage was invented in 2016; inspired by the art of kirigami, it uses parallel slits to better fit areas of the body that bend. The bandages have been produced with 3D-printed molds.
**Trophy**
Trophy:
A trophy is a tangible, durable reminder of a specific achievement, and serves as a recognition or evidence of merit. Trophies are often awarded for sporting events, from youth sports to professional level athletics. In many sports medals (or, in North America, rings) are often given out either as the trophy or along with more traditional trophies.
Trophy:
Originally the word trophy, derived from the Greek tropaion, referred to arms, standards, other property, or human captives and body parts (e.g., headhunting) captured in battle. These war trophies commemorated the military victories of a state, army or individual combatant. In modern warfare trophy taking is discouraged, but this sense of the word is reflected in hunting trophies and human trophy collecting by serial killers.
Etymology:
Trophies have marked victories since ancient times. The word trophy, coined in English in 1550, was derived from the French trophée in 1513, "a prize of war", from Old French trophee, from Latin trophaeum, monument to victory, variant of tropaeum, which in turn is the latinisation of the Greek τρόπαιον (tropaion), the neuter of τροπαῖος (tropaios), "of defeat" or "for defeat", but generally "of a turning" or "of a change", from τροπή (tropē), "a turn, a change" and that from the verb τρέπω (trepo), "to turn, to alter". In ancient Greece, trophies were made on the battlefields of victorious battles, from captured arms and standards, and were hung upon a tree or a large stake made to resemble a warrior. Often, these ancient trophies were inscribed with a story of the battle and were dedicated to various gods. Trophies made about naval victories sometimes consisted of entire ships (or what remained of them) laid out on the beach. To destroy a trophy was considered a sacrilege. The ancient Romans kept their trophies closer to home. The Romans built magnificent trophies in Rome, including columns and arches atop a foundation. Most of the stone trophies that once adorned huge stone memorials in Rome have been long since stolen.
History:
In ancient Greece, the winners of the Olympic games initially received no trophies except laurel wreaths. Later the winner also received an amphora with sacred olive oil. In local games, the winners received different trophies, such as a tripod vase, a bronze shield or a silver cup.
In ancient Rome, money usually was given to winners instead of trophies.
History:
Chalices were given to winners of sporting events at least as early as the very late 1600s in the New World. For example, the Kyp Cup (made by silversmith Jesse Kyp), a small, two-handled, sterling cup in the Henry Ford Museum, was given to the winner of a horse race between two towns in New England in about 1699. Chalices, particularly, are associated with sporting events, and were traditionally made in silver. Winners of horse races, and later boating and early automobile races, were the typical recipients of these trophies. The Davis Cup, Stanley Cup, America's Cup and numerous World Cups are all now famous cup-shaped trophies given to sports winners. Today, the most common trophies are much less expensive, and thus much more pervasive, thanks to mass-produced plastic/resin trophies.
History:
The oldest sports trophies in the world are the Carlisle Bells, a horse racing trophy dating back to 1559 and 1599 and were first awarded by Elizabeth I. The race has been run for over 400 years in Carlisle, Cumbria, United Kingdom. The bells are on show at the local museum, Tullie House, which houses a variety of historic artifacts from the area from Roman legions to present day.
Types:
Contemporary trophies often depict an aspect of the event commemorated, for example in basketball tournaments, the trophy takes the shape of a basketball player, or a basketball. Trophies have been in the past objects of use such as two-handled cups, bowls, or mugs (all usually engraved); or representations such as statues of people, animals, and architecture while displaying words, numbers or images. While trophies traditionally have been made with metal figures, wood columns, and wood bases, in recent years they have been made with plastic figures and marble bases. This is to retain the weight traditionally associated with a quality award and make them more affordable to use as recognition items. Trophies increasingly have used resin depictions.
Types:
The Academy Awards Oscar is a trophy with a stylized human; the Hugo Award for science fiction is a space ship; and the Wimbledon awards for its singles champions are a large loving cup for men and a large silver plate for women.
Types:
A loving-cup trophy is a common variety of trophy; it is a cup shape, usually on a pedestal, with two or more handles, and is often made from silver or silver plate. Hunting trophies are reminders of successes from hunting animals, such as an animal's head mounted to be hung on a wall. Some people also have their animals preserved by taxidermy, either just the head or the full animal stuffed, and put out for show.
Types:
Perpetual trophies are held by the winner until the next event, when the winner must compete again in order to keep the trophy. In some competitions winners in a certain number of consecutive or non-consecutive events receive the trophy or its copy in permanent ownership.
Sporting:
Trophies have been awarded for team, or individual accomplishments in sports. Many combat sports, such as boxing, mixed martial arts, and professional wrestling use championship belts as trophies; however, unlike most of the trophies mentioned below, a new one is not created every time a new champion is crowned; rather, the new champion takes the belt from the old one.
Sporting:
Association football Trophies in the sport include: Copa Libertadores Trophy — Known simply as Libertadores or Copa, awarded to the winners of the Copa Libertadores since 1960. It is one of the most prestigious laurels in the Western Hemisphere.
Sporting:
The FA Cup — Awarded to winners of the primary English domestic football knockout tournament, officially The Football Association Challenge Cup, often referred to as just the FA Cup. The FA Cup was inaugurated in 1871 and is therefore the oldest tournament in club football. The original trophy, however, was stolen in 1895 and the current trophy design, which was first awarded in 1911, is actually the fifth incarnation in total.
Sporting:
The Women's FA Cup – Awarded to winners of the primary English women's domestic football knockout tournament. Inaugurated in 1971, it was one of the first prestigious trophies to be made by the Thomas Lyte silver workshop.
Sporting:
FIFA World Cup Trophy – Awarded to the winners of the FIFA World Cup from the 1974 FIFA World Cup onwards. Previous winners were awarded the Jules Rimet Trophy (known simply as Victory until 1949), which was awarded in perpetuity to Brazil after their 3rd win in the 1970 FIFA World Cup. Both are referred to colloquially as the World Cup.
FIFA Women's World Cup Winner's Trophy – Awarded to the winners of the FIFA Women's World Cup from the 1999 FIFA Women's World Cup onwards. Unlike the men's World Cup trophy, the women's trophy is constructed anew for each champion to keep.
Sporting:
European Champion Clubs' Cup – colloquially the European Cup, awarded to the winners of the European Cup (before 1992–93) and the UEFA Champions League (since 1992–93). It is the most prestigious trophy in the Eastern Hemisphere. It is affectionately known as "old big ears" due to its shape.
Philip F. Anschutz Trophy – Awarded to the winners of the MLS Cup, the MLS' championship game.
UEFA Super Cup – Awarded to the winners of a one-off match between the winners of the UEFA Champions League and the UEFA Europa League.
Sporting:
The Scottish Cup – Awarded to the winners of the primary domestic knockout cup tournament of Scotland (the Scottish Football Association Challenge Cup, or just Scottish Cup). The tournament was founded in 1873 and still presents the original trophy. The Scottish Cup is therefore the world's oldest national football trophy and second oldest national trophy, behind the Carlisle Bells race trophy dating back to 1599. Other notable trophies in the sport include the Jules Rimet Trophy. The original was stolen in Brazil in 1983 and has never been recovered. Replicas were awarded to winning nations up to the retirement of the genuine trophy. However, prior to the 1966 final, The Football Association made an (unauthorised) replica in secret in gilded bronze for use in post-match celebrations due to security concerns – the genuine trophy was made out of close to 2 kg of pure gold. This has led to several conspiracy theories regarding which trophy was stolen – the FA replica, or the real trophy. FIFA purchased the replica for £254,500 (ten times the reserve price) in 1997, with the inflated price attributed to such rumours. This trophy is held on behalf of FIFA by the National Football Museum in Preston. The current FIFA World Cup trophy has the names of the winning teams inscribed underneath its base.
Sporting:
A club that manages to win the Copa Libertadores trophy three consecutive times retains the trophy permanently. The current trophy has been used since 1975. Like the FIFA World Cup trophy, the winners of each edition of the tournament have their name inscribed on the trophy; unlike the FIFA World Cup trophy, a pedestal contains a list of winners in the form of badges. The current pedestal is the fourth in the trophy's history, having been used since 2009. The original trophy was awarded to Estudiantes de La Plata in 1970 (after their third win) – the present trophy is the third, identical edition. Until 2009, clubs that won the European Champion Clubs' Cup three times in successive seasons, or five times in total, were permitted to retain the trophy in perpetuity. The present trophy has been used since 2005–06, after Liverpool's fifth win in 2005. The original trophy was awarded to Real Madrid in 1966 (after their sixth win) – the present trophy is the sixth incarnation overall.
Sporting:
Four trophies have served as an award (out of five made) for the winner of the FA Cup. The first (1871–1895) was stolen in Birmingham and melted down; the second (1896–1910) was presented to Lord Kinnaird and is held by David Gold, the chairman of Birmingham City, after a private auction in 2005. The third (1910–1992) was retired after the 1992 final due to fragility and is held by The Football Association; two exact replicas of it were made, one of which has been awarded to the winners (1993–2013), while the other remains as a backup in case of damage to the primary trophy. The same design was recast and was unveiled in 2014 to be more durable.
Sporting:
Australian rules football
AFL Premiership Cup – Awarded to the Australian Football League's premier, the winner of the AFL Grand Final.
McClelland Trophy – Awarded to the Australian Football League's home-and-away season / minor premiership champion.
Dockland Trophy - Awarded to the Australian Football League's best dock-side team in games between Fremantle and Port Adelaide.
Sporting:
Baseball
Commissioner's Trophy – Awarded to Major League Baseball's World Series champion.
Basketball
Jun Bernardino Trophy – Awarded to the Philippine Basketball Association Philippine Cup champion.
Larry O'Brien Championship Trophy – Awarded to the National Basketball Association's champion.
Naismith Trophy – Awarded to the FIBA World Cup champions.
Cricket
Cricket World Cup Trophy – Awarded to the winners of the ICC Cricket World Cup.
Sporting:
The Ashes urn – Awarded to the winning team of the biennial cricket Test series between England and Australia. However, the urn itself has never been a trophy and remains in the MCC Cricket Museum at Lord's Cricket Ground. Only from 1998 to 1999 were the winners of the Ashes presented with a replica (not to scale) of the urn in Waterford Crystal.
Sporting:
Border-Gavaskar trophy - Awarded to the winning team of the biennial cricket Test series between India and Australia.
Sporting:
Gaelic football
Sam Maguire Cup – Awarded to the winners of the All-Ireland Senior Football Championship.
Gridiron football
American football
BCS Trophy – Awarded to College Football's National Champion.
College Football Playoff National Championship Trophy – Awarded to the winner of the College Football Playoff.
Heisman Trophy – Awarded to the NCAA's Most Valuable Player in College Football.
Vince Lombardi Trophy – Awarded to the National Football League's Super Bowl champion.
Canadian football
Grey Cup – Awarded to the Canadian Football League's champion.
Vanier Cup – Awarded to the U Sports Canadian football champion.
Golf
Claret Jug – Awarded to the winner of The Open Championship.
Ryder Cup – Awarded to the winner of a biennial competition between Europe and the USA.
Wanamaker Trophy – Awarded to the winner of the PGA Championship.
Horse racing
Arlington Million Trophy – Awarded to the winner of the Arlington Million.
August Belmont Trophy – Awarded to the winner of the Belmont Stakes.
Haskell Invitational Trophy – Awarded to the winner of the Haskell Invitational Stakes.
Kentucky Derby Trophy – Awarded to the winner of the Kentucky Derby.
Kentucky Oaks Trophy – Awarded to the winner of the Kentucky Oaks.
Man o' War Cup – Awarded to the winner of the Travers Stakes.
Triple Crown Trophy – Awarded to the winner of the United States Triple Crown of Thoroughbred Racing.
Woodlawn Vase – Awarded to the winner of the Preakness Stakes. Most valuable trophy in sports at $4,000,000+ US dollars. Designed by Tiffany & Co. in 1860.
Hurling
Liam MacCarthy Cup – Awarded to the winners of the All-Ireland Senior Hurling Championship.
Sporting:
Ice hockey
Aurora Borealis Cup – Awarded to the Naisten Liiga's playoff champion.
Clarkson Cup – Awarded to the top team in Canadian women's ice hockey.
Gagarin Cup – Awarded to the Kontinental Hockey League's playoff champion.
Kanada-malja – Awarded to the Liiga's playoff champion.
Le Mat Trophy – Awarded to the Swedish Hockey League playoff champion.
O'Brien Trophy – Awarded to the National Hockey Association playoff champion from 1910 to 1917, and to the National Hockey League (NHL) playoff champion from 1921 to 1927. After the 1927 NHL playoffs, the trophy was re-purposed and awarded for other accomplishments, before it was retired from use in 1950.
Sporting:
Memorial Cup – Awarded to the winner of the Canadian Hockey League.
Stanley Cup – Awarded to the NHL's playoff champion. Previously served as a challenge cup for Canadian clubs from 1893 to 1914, and as the trophy for interleague tournaments from 1914 to 1926. It became the de facto NHL playoff trophy in 1927, and the de jure NHL playoff trophy in 1947 through an agreement with the Stanley Cup trustees.
Sporting:
Spengler Cup – Awarded to the winner of an invitational tournament hosted by HC Davos.
Lacrosse
Champion's Cup – Awarded to the National Lacrosse League champion.
Mann Cup – An indoor lacrosse trophy awarded to the senior men's lacrosse champions of Canada.
Steinfeld Cup – Awarded to the Major League Lacrosse champion.
Motorsport
APBA Gold Cup – Awarded to H1 Unlimited's APBA Gold Cup champion. It is the oldest trophy in motorsports.
British Grand Prix Trophy — Awarded to the winner of the Formula One British Grand Prix.
Borg-Warner Trophy — Awarded to the Indianapolis 500 Champion.
Harley J. Earl Trophy — Awarded to the Daytona 500 Champion.
Sporting:
Sprint Cup Trophy – Awarded to NASCAR's Sprint Cup Series champion.
Tennis
Wimbledon tennis trophies – Although having no formal name, a cup is presented to the Wimbledon Men's (Gentlemen's) Singles Champion (The All England Lawn Tennis Club Single Handed Champion of the World, as stated on the cup itself). The women's (Ladies) Singles winner is presented with the Venus Rosewater Dish. Other trophies are presented to the winners of the Doubles and Mixed Doubles.
Sporting:
Rugby football
Rugby league
Challenge Cup – Rugby league's oldest knock-out competition, notable for the wide range of teams which start, some taken from amateur ranks, "developing nations" and university teams.
Paul Barrière Trophy – Awarded to the champions of the Rugby League World Cup.
Rugby union
Bledisloe Cup – Awarded annually to the winner between New Zealand and Australia.
Sporting:
Calcutta Cup – Awarded annually to the winner between England and Scotland in rugby union.
Webb Ellis Cup – Awarded to rugby union's World Champion.
Champions Cup – Awarded to the winner of the European championship.
Six Nations Cup – Awarded to the winner of the Six Nations competition.
Sailing
America's Cup – Awarded to the yacht racing champion.
Jules Verne Trophy – Awarded to any type of yacht that circumnavigates the world the fastest.
Military:
The United States military also issues a type of trophy known as a "non-portable decoration". This indicates that the trophy carries the status of a military award, but is not meant to be worn on a uniform; rather, it is presented for static display. Such military trophies include athletic excellence awards, unit excellence awards, and superior service awards presented annually to the top service member of a command.
Professional awards:
Many professional associations award trophies in recognition of outstanding work in their respective fields. Some examples of such awards include: Academy Award – Awarded by the American Academy of Motion Picture Arts and Sciences for excellence in the Film Industry.
Collier Trophy – Awarded by the US National Aeronautics Association for outstanding work in aviation engineering.
Harmon Trophy – Awarded by the Clifford B. Harmon Trust for outstanding achievement in aviation or ballooning.
Tony Award – Awarded by the American Theatre Wing and The Broadway League for excellence in live theater in New York City.
Emmy Award – Awarded by the Academy of Television Arts & Sciences and National Academy of Television Arts and Sciences for excellence in the Television industry.
Grammy Award – Awarded by the National Academy of Recording Arts and Sciences for excellence in the Music industry.
Golden Globes – Awarded by the Hollywood Foreign Press Association recognizing excellence in film and television. The statuettes are manufactured by the New York firm Society Awards.
MTV Video Music Award – Awarded by MTV to honor the best in the music video medium. The moonman is manufactured by the New York firm Society Awards.
Academy of Country Music Awards – Awarded for achievements in country music. The "hat" trophy is manufactured by the New York firm Society Awards.
Billboard Music Award – Awarded by Billboard to honor outstanding chart performance. The trophy is manufactured by the New York firm Society Awards.
NAACP Image Award – Awarded for excellence in film, television, music, and literature by outstanding people of color. The trophy is manufactured by the New York firm Society Awards.
D&AD Awards - Awarded for excellence within design and advertising.
Hunting:
In hunting, although competition trophies like those mentioned above can be awarded, the word trophy more typically refers to an item made from the body of a killed animal and kept as a keepsake. See taxidermy.
**Citrine (colour)**
Citrine (colour):
Citrine is a colour, the most common reference for which is certain coloured varieties of quartz which are a medium deep shade of golden yellow. Citrine has been summarized at various times as yellow, greenish-yellow, brownish yellow or orange. The original reference point for the citrine colour was the citron fruit. The first recorded use of citrine as a colour in English was in 1386. It was borrowed from a medieval Latin and classical Latin word with the same meaning. In late medieval and early modern English the citrine colour-name was applied in a wider variety of contexts than it is today and could be "reddish or brownish yellow; or orange; or amber (distinguished from yellow)". In today's English citrine as a colour is mostly confined to the contexts of (1) gemstones, including quartz, and (2) some animal and plant names. E.g., the citrine wagtail (Motacilla citreola), an Asian bird species with golden-yellow plumage, or the citrine warbler, citrine canary-flycatcher, citrine forktail, etc.
**Microprobe**
Microprobe:
A microprobe is an instrument that applies a stable and well-focused beam of charged particles (electrons or ions) to a sample.
Types:
When the primary beam consists of accelerated electrons, the probe is termed an electron microprobe; when the primary beam consists of accelerated ions, the term ion microprobe is used. The term microprobe may also be applied to optical analytical techniques, when the instrument is set up to analyse micro samples or micro areas of larger specimens. Such techniques include micro Raman spectroscopy, micro infrared spectroscopy and micro LIBS. All of these techniques involve modified optical microscopes to locate the area to be analysed, direct the probe beam and collect the analytical signal.
Types:
A laser microprobe is a mass spectrometer that uses ionization by a pulsed laser and subsequent mass analysis of the generated ions.
Uses:
Scientists use this beam of charged particles to determine the elemental composition of solid materials (minerals, glasses, metals). The chemical composition of the target can be found from the elemental data extracted through emitted X-rays (in the case where the primary beam consists of charged electrons) or measurement of an emitted secondary beam of material sputtered from the target (in the case where the primary beam consists of charged ions).
Uses:
When the ion energy is in the range of a few tens of keV (kilo-electronvolt) these microprobes are usually called FIB (Focused ion beam). An FIB makes a small portion of the material into a plasma; the analysis is done by the same basic techniques as the ones used in mass spectrometry.
Uses:
When the ion energy is higher, hundreds of keV to a few MeV (mega-electronvolt) they are called nuclear microprobes. Nuclear microprobes are extremely powerful tools that utilize ion beam analysis techniques as microscopies with spot sizes in the micro-/nanometre range. These instruments are applied to solve scientific problems in a diverse range of fields, from microelectronics to biomedicine. In addition to the development of new ways to exploit these probes as analytical tools (this application area of the nuclear microprobes is called nuclear microscopy), strong progress has been made in the area of materials modification recently (most of which can be described as PBW, proton beam writing).
Uses:
The nuclear microprobe's beam is usually composed of protons and alpha particles. Some of the most advanced nuclear microprobes have beam energies in excess of 2 MeV. This gives the device very high sensitivity to minute concentrations of elements, around 1 ppm at beam sizes smaller than 1 micrometer. This elemental sensitivity exists because when the beam interacts with a sample it gives off characteristic X-rays of each element present in the sample. This type of detection of radiation is called PIXE. Other analysis techniques are applied to nuclear microscopy, including Rutherford backscattering (RBS), STIM, etc.
Uses:
Another use for microprobes is the production of micro and nano sized devices, as in microelectromechanical systems and nanoelectromechanical systems. The advantage that microprobes have over other lithography processes is that a microprobe beam can be scanned or directed over any area of the sample. This scanning of the microprobe beam can be imagined as using a very fine-tipped pencil to draw a design on paper or in a drawing program. Traditional lithography processes use photons, which cannot be scanned, and therefore masks are needed to selectively expose the sample to radiation. It is the radiation that causes changes in the sample, which in turn allows scientists and engineers to develop tiny devices such as microprocessors, accelerometers (like in most car safety systems), etc.
**Lucas wedge**
Lucas wedge:
The Lucas wedge is an economic measure of how much higher gross domestic product would have been had it grown as fast as it should have. It shows the deadweight loss caused by poor or inefficient economic policy choices. The Lucas wedge is named after Robert E. Lucas Jr., an American economist who won the 1995 Nobel Memorial Prize in Economic Sciences for his research on rational expectations.
Lucas wedge:
The Lucas wedge is not the same as the gap from Okun's law. While they are similar and often confused, the gap from Okun's law measures the difference over a period of time between the actual GDP and the GDP that would have been realized at full employment. Over time the Lucas wedge compounds and increases, and so it is usually larger than the gap from Okun's law. This shows that the goal of economic policy should be more than just realizing full employment; it should also focus on optimizing investment to reduce the Lucas wedge.
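To make the compounding point concrete, the following is a minimal sketch, using made-up figures, of how a Lucas wedge can be computed: actual GDP is compared year by year against a counterfactual path growing at a higher potential rate, and the shortfalls are accumulated.

```python
# Minimal sketch with hypothetical figures: compare actual GDP against a
# counterfactual path growing at a higher potential rate, and sum the gap.

def lucas_wedge(gdp0, actual_growth, potential_growth, years):
    actual, potential = gdp0, gdp0
    total_gap = 0.0
    for _ in range(years):
        actual *= 1 + actual_growth
        potential *= 1 + potential_growth
        total_gap += potential - actual  # the gap compounds as the paths diverge
    return total_gap

# Hypothetical: a $20 trillion economy growing 2% a year instead of 3%
print(lucas_wedge(20e12, 0.02, 0.03, 20))  # cumulative lost output over 20 years
```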
Lucas wedge:
The Lucas wedge is sometimes expressed in per capita terms to reflect how much better a person's standard of living would be in the absence of this gap.
**BAT4**
BAT4:
Protein BAT4 is a protein that in humans is encoded by the BAT4 gene. A cluster of genes, BAT1-BAT5, has been localized in the vicinity of the genes for TNF alpha and TNF beta. These genes are all within the human major histocompatibility complex class III region. The protein encoded by this gene is thought to be involved in some aspects of immunity.
**Automated breathing metabolic simulator**
Automated breathing metabolic simulator:
An automated breathing metabolic simulator (ABMS) simulates human breathing and metabolism through mechanical means. ABMS technology is used as a platform for qualification and evaluation of respiratory protective equipment, such as a closed-circuit breathing apparatus or N95 mask, usually by government or commercial entities. In the US, this regulatory function is performed through the National Institute for Occupational Safety and Health (NIOSH) and National Personal Protective Technology Laboratory (NPPTL) (Kyriazi, 1986).
Requirements:
To simulate human respiration and metabolism, an automated breathing metabolic simulator needs to consume and produce the inputs and outputs of human respiration. These include:
Carbon dioxide production
Oxygen consumption
Inhalation/exhalation pressures
Carbon dioxide percentage
Temperatures
Respiratory frequencies
all while monitoring oxygen percentage (Wischhoefer, 1984).
History:
Respiratory researchers have long desired an accurate method of simulating the breathing and metabolic processes of a human in a laboratory setting. Simulators have become a vital resource for research institutes, regulatory centers, testing houses, and manufacturers due to their ability to generate reproducible data without the risk of direct human exposure to a tested device. Prior to the early 1980s, accurately simulating human breathing in a controlled and repeatable manner was not possible due to the lack of technological development and human knowledge of the respiratory system. With the first models of human respiration, basic breathing metabolic simulators (BMS) were devised. These relied upon a manual pump that a technician would use to simulate human breathing. These BMS were criticized for testing inaccuracies which led to unexpected respiratory protective device risks. The first BMS was developed by the U.S. Bureau of Mines in 1973. The U.S. Bureau of Mines continued funding the development of BMSs until 1985, when funding stalled. These efforts led to a design that is currently used in parts of NIOSH-NPPTL and the U.S. Navy (Sinkule, 2013).
History:
The first BMS was designed under the following guidelines: The simulations produced by the BMS were to be as physiologically appropriate as possible.
The construction was to employ low cost methods using standard, commercially available items wherever possible.
Operation was to be simple and easily learned. Complex computer programs were to be avoided.
The BMS was to be capable of manual as well as automatic operation. All data inputs into the computer were to be paralleled with analog outputs suitable for general purpose laboratory recorders (Development Of An Automated Breathing Metabolic Simulator, 1984).
Current manufacturers of BMSs include: CSE, Dräger (Kyriazi, 2011), and Ocenco (Kyriazi, 2011).
Automation Automated simulators use the same mechanical principles as the BMS, updated with electronic components, thus offering more accurate control. The first ABMS contained three modules designed to be able to function independently from each other or work as a unit: a Breathing Simulator Module, a Gas Analysis Module, and a Supervisory Controller (Wischhoefer, L.L., & Reimers, 1984).
The first automated breathing simulator concepts used a bellows design. After numerous prototypes, bellows were found to be inconsistent in generating precision volumes of human breathing. Subsequently, bellows were replaced with a rigid piston design. The rigid piston solved the problem of repeatability and proved that controlled breathing waveforms were possible, subject to the precision of motor drivers.
History:
Oxygen consumption has been simulated through various methods over the history of BMS development. Older methods of oxygen consumption include catalytic conversion, which produces hydrogen. This method saw limited use. ABMS manufacturers eventually began to prefer to remove a mixture of the breathing media, analyze the sample for the oxygen content, and then adjust the flow to contain the correct number of liters required for the correct oxygen consumption rate. At various points in the breathing cycle, CO2 represents a small percentage of the gas mix withdrawn to "consume" oxygen. This gas is generally analyzed separately and then algorithmically re-injected to compensate for artifacts of measurement.
History:
BMSs utilized rotameters in the 1970s and 1980s to simulate oxygen consumption. These tools were used to replicate the changing percentage of gases in the mixture being withdrawn to accomplish oxygen consumption. When the gases were withdrawn, nitrogen would be reintroduced into the mix. This method proved inaccurate because it required immediate calculations by human operators to manually alter gas flow rates. In newer ABMS designs, the job of rotameters has been taken over by mass flow controllers. Now, mass flow controllers, in conjunction with high speed gas analyzers, provide continuous updates and inputs into the algorithm to calculate oxygen consumption through a range of gas mixtures.
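The following is a highly simplified sketch of the kind of calculation described above. The function name, the sample figures, and the simple proportional relationship are illustrative assumptions only, not a description of any particular manufacturer's control algorithm.

```python
# Simplified sketch: given the analyzer's oxygen reading of the withdrawn gas,
# compute the withdrawal flow needed to "consume" oxygen at the target rate.
# Names and values are illustrative assumptions.

def withdrawal_flow_lpm(target_o2_consumption_lpm, o2_fraction_in_sample, co2_fraction_in_sample):
    """Flow of mixed gas (L/min) to withdraw so that the oxygen removed matches
    the target oxygen consumption rate; CO2 in the sample is measured separately
    so it can be re-injected rather than lost."""
    if o2_fraction_in_sample <= 0:
        raise ValueError("analyzer reports no oxygen in sample")
    flow = target_o2_consumption_lpm / o2_fraction_in_sample
    co2_reinjection_lpm = flow * co2_fraction_in_sample
    return flow, co2_reinjection_lpm

# Example: consume 1.0 L/min of O2 from gas measured at 16% O2 and 4% CO2
flow, co2_back = withdrawal_flow_lpm(1.0, 0.16, 0.04)
print(round(flow, 2), round(co2_back, 2))  # 6.25 L/min withdrawn, 0.25 L/min CO2 re-injected
```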
History:
Past BMS designs also utilized paper strip chart recorders to preserve a record of data. This meant that days of analysis were required to get the data into a usable format. Modern ABMSs work digitally, allowing the data to be manipulated through preinstalled software. This change gives users added functionality and better control over the operations of the ABMS.
History:
Prior to automation, operators of the BMS's would manually implement each stage of the testing and analysis processes, often utilizing multiple machines. Automation has helped to eliminate operator errors, allowing for more precise and repeatable data collection, and enabled faster design iteration on the part of respiratory protective device manufacturers.
Current manufacturers of ABMS's include: Ocenco (Kyriazi, 2011) and ATOR LABS.
History:
Relevant ISO Number The industry standard for Respiratory Protective Device (RPD) manufacturing is shifting towards the ISO 16900 series. This series provides scientific procedures and guidelines on how to standardize testing of RPD performance. The standard draws on decades of empirical experimentation and requires an ABMS that is capable of precise and repeatable measurement. As of 2018, U.S. testing houses have not widely conformed with the ISO 16900 standards; the ABMSs currently being used were produced in the 1980s. Research commissioned by NIOSH has shown that NIOSH-NPPTL conformity to the ISO 16900 standards would result in a one-time cost of $13.1 million. NIOSH-NPPTL compliance with the ISO 16900 standard would lead to a better quality end product for all companies that apply through the government entity (Miller).
History:
ISO 17420 is also being developed as a standard for testing various RPDs. The new standard looks at applications for fire services, escape, and special applications other than fire services and escape. The last section includes the guidelines for CBRN respiratory protection devices. Ultimately, ISO 17420 is expected to drive the prices of RPDs up due to the cost of new testing (Spasciani, 2012).
**Inertial fusion power plant**
Inertial fusion power plant:
Inertial Fusion Energy is a proposed approach to building a nuclear fusion power plant based on performing inertial confinement fusion (ICF) at industrial scale. This approach to fusion power is still in a research phase. ICF was first developed shortly after the invention of the laser in 1960, but was a classified US research program during its earliest years. In 1972, John Nuckolls wrote a paper predicting that compressing a target could create conditions where fusion reactions are chained together, a process known as fusion ignition or a burning plasma. On August 8, 2021, the NIF at Livermore National Laboratory became the first ICF facility in the world to demonstrate this. This breakthrough drove the US Department of Energy to create an Inertial Fusion Energy program in 2022 with a budget of 3 million dollars in its first year.
Design of an IFE power plant:
This kind of fusion reactor would consist of two parts. The first is the target: typically a small capsule (<7 mm in diameter) containing fusion fuel, although many kinds of targets have been tested, including cylinders, shells coated with nanotubes, solid blocks, hohlraums, glass shells filled with fusion fuel, cryogenically frozen targets, plastic shells, foam shells and materials suspended on spider silk.
Design of an IFE power plant:
The second is the driver, which is used to compress the target and launch a shock wave that squeezes it. This compression wave drives the material to the temperature and pressure at which fusion occurs. Drivers that have been explored include solid-state lasers, excimer lasers, high-velocity solid objects, X-rays, beams of ions (heavy ion fusion, HIF) and beams of electrons. Net energy in ICF comes from getting fusion reactions to chain together in a process known as ignition. Reaching ignition requires squeezing material to hot, dense conditions for long enough. A key problem is that once a plasma becomes hot, it becomes hard to compress, so the goal is to avoid heating the material until after it is compressed. In the literature this is known as the low-adiabat approach to compression. The steps are outlined below: While keeping the plasma very cold, squeeze it together.
Design of an IFE power plant:
Heat the plasma only after it is squeezed; ideally inside a “hot spot”.
Fusion happens, and the resulting products deposit their energy, creating more fusion. Several compression approaches attempt to do this, including Central Hot Spot Ignition, Fast Ignition, Shock Ignition and magneto-inertial fusion; a rough estimate of how compression translates into fuel burn-up is sketched below.
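As a rough, hedged illustration of why high compression matters, the fraction of DT fuel that burns is often estimated from the fuel's areal density ρR using the common scaling φ ≈ ρR / (ρR + H_B). The burn parameter H_B of roughly 6–7 g/cm² used below is an assumed textbook-style value, not a figure tied to any particular facility or design.

```python
def burn_fraction(rho_r_g_cm2, h_b=7.0):
    """Approximate DT burn-up fraction from fuel areal density (g/cm^2).

    Uses the common ICF scaling phi ~ rhoR / (rhoR + H_B); H_B ~ 6-7 g/cm^2
    is an assumed burn parameter, not a measured constant for any facility.
    """
    return rho_r_g_cm2 / (rho_r_g_cm2 + h_b)

# Compressing the fuel raises rhoR and hence the burned fraction:
for rho_r in (0.3, 1.0, 3.0):
    print(f"rhoR = {rho_r:>4} g/cm^2  ->  burn fraction ~ {burn_fraction(rho_r):.2f}")
# 0.3 g/cm^2 -> ~0.04, 1.0 -> ~0.12, 3.0 -> ~0.30
```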
ICF Research Institutions:
ICF research was originally established as a way to develop nuclear weapons, because ICF mimics the compression physics of a fission-fusion bomb. Facilities have been built around the world; below are some examples. Laser Mégajoule in France was developed in 2002 and upgraded in 2014.
Omega Laser was first built in 1992 at the University of Rochester.
Omega-EP was first built in 2008 at the University of Rochester as a second, more powerful laser.
The Gekko laser was first built at Osaka University in Japan in 1983 but has since been upgraded nearly a dozen times.
NIF was first operational in 2009 at the Livermore National Laboratory.
NIKE Laser was built at the Naval Research Laboratory to study excimer (gas-based) lasers.
Electra Laser was built at the Naval Research Laboratory to study excimer (gas-based) lasers.
PALS laser facility in the Czech Republic was established to research ICF laser implosions.
ICF Research Institutions:
Machine 3 was developed by First Light Fusion to accelerate blocks of material and create a shock wave in the target. There have also been multiple ICF facilities built, tested and decommissioned in the past. For example, Sandia National Laboratory pursued a series (<10 machines) of ion-beam- and electron-beam-driven ICF research programs through the 1970s and into the mid-1980s. Los Alamos built a large excimer laser facility called Aurora in the late 1980s. Livermore National Laboratory built a succession of laser facilities including Nova, Cyclops, 4-PI, Shiva and other devices. As part of the run-up to the NIF opening and achieving ignition, Livermore National Laboratory funded a body of research around the Laser Inertial Fusion Energy (LIFE) program; under this program a reactor design was developed, and costing, reactor chambers and energy-capture schemes were explored.
IFE Research Programs:
IFE development has come in waves within the United States. Below are some government programs that have been funded over the years to push this technology forward: HAPL: The High Average Power Laser program was administered by the Naval Research Laboratory from 1999 to 2008. This program awarded grants to target, laser and driver teams across the United States and organized 19 meetings between member organizations.
IFE Research Programs:
LIFE: The Laser Inertial Fusion Energy program was administered by Livermore National Laboratory from 2008 to 2016. This program was funded to develop an IFE power plant based around the National Ignition Facility.
SDI: The Strategic Defense Initiative inadvertently supported many of the IFE laser technologies seen today.
Driver Development:
It is still unclear which driver would work best for an IFE power plant, with supporters of different drivers pushing their favored approach. Lasers have thus far been the most thoroughly researched. Below is a summary of the laser drivers that have been studied. The challenge of implementing laser systems comes not just from the beam itself, but also from the optics, mirrors, amplifiers and gratings needed to put such a system in place.
Driver Development:
Related driver technologies: Depending on the driver being used, there are key related technologies that need to be matured; some are listed below, followed by a simple fluence sketch. Glass that can handle the laser fluence, that is, the energy (joules) crossing its cross-section (m²), without melting or being damaged; such glass is used to make mirrors, lenses, gratings or windows inside the power plant.
Driver Development:
Amplifiers that can be used to increase the power of the laser beam.
Compressors that can compress the laser beam or ion beam in space and time to increase the overall on-target power.
Pulsed power systems that can deliver the megajoules needed by a laser, ion-beam or solid-object driver. The workhorse of pulsed power (the Marx generator) has limitations for an ICF plant, and research has gone into the linear transformer driver as an alternative power source.
Laser Diodes are used as the first step in transferring electrical energy into light energy to initiate the laser beam. Such systems can be expensive and are not needed for excimer lasers.
Phase-Plate Smoothing is a technique to smooth out laser beams in solid-state laser systems.
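As a rough, hypothetical illustration of the optics constraint in the list above, fluence on a piece of glass is simply the pulse energy divided by the beam's cross-sectional area. The pulse energy, optic size and damage threshold below are placeholder values chosen for illustration only, not specifications of any real system.

```python
def fluence_j_per_cm2(pulse_energy_j, beam_area_cm2):
    """Fluence = pulse energy / beam cross-sectional area (J/cm^2)."""
    return pulse_energy_j / beam_area_cm2

# Hypothetical numbers: a 10 kJ pulse spread over a 40 cm x 40 cm optic.
energy_j = 10_000.0
area_cm2 = 40.0 * 40.0            # 1600 cm^2
damage_threshold = 8.0            # assumed J/cm^2 threshold, placeholder only

f = fluence_j_per_cm2(energy_j, area_cm2)
print(f"fluence = {f:.2f} J/cm^2, within threshold: {f < damage_threshold}")
# fluence = 6.25 J/cm^2 -> within an assumed 8 J/cm^2 threshold
```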
Target Development:
Many kinds of targets have been developed for ICF research, but a power plant would require thousands if not millions of identical targets to be fired repeatedly, which will be exceedingly challenging. At present, the Department of Energy contracts with General Atomics to produce ICF targets for the national laboratories. These targets are partially built at GA and then shipped across the country to the ICF facility for a shot day. The laboratories maintain hardware and staff on site to complete the last steps of preparing the targets for a shot.
Target Development:
Target examples: Glass shell targets were spheres of glass on stalks filled with DT gas; these were some of the earliest targets.
Overcoated targets involve growing chemical materials over a shell target. This can be done using directed chemical vapor deposition of plastics or layers of gold or silver.
Hohlraum targets are pellets of DT fusion fuel surrounded by tubes of gold foil. The laser strikes the foil and creates X-rays that compress the pellet; this arrangement also simulates conditions relevant to nuclear weapons.
Silk-mounted targets have been mounted on strands of spider silk; this material has exceptionally high strength for its cross-section and maintains good characteristics down to cryogenic temperatures.
Cryogenic targets are those that must be kept below ~34 K to condense the hydrogen gas into a liquid, or ~14 K to condense it to a solid.
Foam-wetted targets are made using a variety of carbon-hydrogen foams and filled with liquid DT cooled below ~34 K.
Ice targets are made using a variety of carbon-hydrogen foams and filled with DT that is cooled below ~14 K so that it freezes solid.
Target Development:
Cryogenic targets: There are several ways to get tritium and deuterium into an already-made capsule. High-pressure fills work by placing the shells in a chamber at 1 to 100 atm of gas pressure and letting the gas diffuse into the shell. Cryogenic foam shells can be filled by wicking liquid DT into the foam. This involves bringing the delicate shell down in temperature and pressure without damaging it, a stepwise process that can take hours to days and requires multiple containment chambers and various kinds of pumps. At cryogenic temperatures the DT gas condenses into a fluid, which can be wicked into the foam shell. Once the shell is filled, operators slowly lower the temperature further to form the ice crystal. Ice can start forming around the equator of the target and then grow into a complete crystal, embedded within the foam shell structure. Engineers have had problems with ice cracking during this formation process, which affects the performance of the shot. All of this is monitored using shadowgrams, 360-degree X-ray diagnostics, visual inspection and other tools; the information is run through software to build a complete picture of the target during filling. A simple first-order model of the high-pressure fill is sketched below.
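The high-pressure diffusion fill described above can be modeled, to first order, as exponential equilibration of the pressure inside the shell with the fill chamber. The permeation time constant below is a made-up placeholder, since real values depend on shell material, wall thickness and temperature; this is an illustrative sketch, not the procedure used at any facility.

```python
import math

def shell_pressure(t_hours, p_chamber_atm, tau_hours):
    """First-order model of gas permeation into a capsule:
    P_inside(t) = P_chamber * (1 - exp(-t / tau))."""
    return p_chamber_atm * (1.0 - math.exp(-t_hours / tau_hours))

# Placeholder values: 100 atm fill chamber, assumed 6-hour permeation time constant.
for t in (6, 12, 24, 48):
    print(f"after {t:>2} h: {shell_pressure(t, 100.0, 6.0):6.1f} atm inside the shell")
# ~63 atm after one time constant, ~98 atm after four
```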
Target Development:
Moving cryogenic targets: Keeping an ICF target frozen at cryogenic temperatures while delivering it to the chamber for a shot is challenging. For example, at the Laboratory for Laser Energetics the frozen target is held inside a custom-built, mobile cryogenic cart that can be moved into position under the target chamber. The cart has a coolant system and vacuum pump to keep the material cold. The cart holds the frozen target at the tip of a "cold finger", which is then raised on an elevator and positioned at the center of the chamber. When the metal shroud is removed, the cryogenic target is exposed to room temperature and immediately starts to sublimate into gas. This means that the laser pulse must be coordinated precisely with the exposure of the target, and everything has to happen quickly to keep the target from melting.
**KAF-10500**
KAF-10500:
The Leica M8 is the first digital camera in the rangefinder M series introduced by Leica Camera AG on 14 September 2006. It uses an APS-H 10.3-megapixel CCD image sensor designed and made by Kodak.
As of 15 November 2014, the most recent firmware version is 2.024.
Features:
The M8 body is slightly thicker than the classic MP and M7 (approximately 14% thicker). It is an all-metal body made of a high-strength magnesium alloy. The top and base plates are cut from brass billets before receiving a black or silver chrome finish.
Features:
The M8 supports all existing Leica M-mount lenses; however, some older lens models may not offer all functions due to mismatched cams. The sensor imposes a 1.33x crop factor, so a 28 mm lens gives a field of view roughly equivalent to a 37 mm lens on full frame when mounted on the M8. Because the infrared filter over the sensor is relatively weak, adding an IR-cut filter in front of the lens is recommended. In addition, Leica chose to omit an anti-aliasing filter, citing the higher resolving power this preserves; however, moiré artifacts can occur in scenes with closely spaced geometric patterns, such as fabric or mesh, distant buildings, balcony railings or corrugated roofing. The M8 uses a modern metal-blade focal-plane shutter with flash synchronization (X-sync) at 1/250 s and a top shutter speed of 1/8000 s. The flash system used in the M8 is M-TTL.
Features:
The camera uses a 6-bit coding system that identifies the lens in use to the electronics built into the M8 body; the code is included on all current Leica lenses. Because the lens mount sits closer to the sensor than in a DSLR, light strikes the sensor periphery at steeper angles; to prevent excessive vignetting, offset micro-lenses are used on the CCD. The 6-bit code also gives information about each lens's vignetting characteristics, permitting software correction. The M8 uses Adobe DNG as its raw data format and ships with the raw converter Capture One LE.
Features:
KAF-10500 sensor: The KAF-10500 is a CCD imaging sensor designed by the US photographic company Eastman Kodak. In September 2006 it was announced that the sensor was to be used in the M8 camera, having been specifically designed for this application. Its size is 18 x 27 mm (APS-H) and it has 10.3 million pixels of size 6.8 μm. Compared to 35mm film, it has a 1.33 crop factor. It is calibrated for an ISO sensitivity range of 160–2500.
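As a quick check of the quoted crop factor, it is simply the ratio of the 35mm-frame diagonal to the sensor diagonal. The sketch below uses the sensor dimensions given above; it is an illustrative calculation, not part of any camera firmware.

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm, ref_w_mm=36.0, ref_h_mm=24.0):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return math.hypot(ref_w_mm, ref_h_mm) / math.hypot(sensor_w_mm, sensor_h_mm)

cf = crop_factor(27.0, 18.0)          # KAF-10500 is 18 mm x 27 mm (APS-H)
print(f"crop factor ~ {cf:.2f}")      # ~1.33
print(f"28 mm lens behaves like ~ {28 * cf:.0f} mm")  # ~37 mm equivalent
```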
Features:
The sensor includes indium tin oxide as a constituent material, which Kodak claims leads to low noise, high sensitivity, and wide dynamic range. It is designed for use with lenses with short back focal lengths, such as those common to rangefinder cameras, by including a microlens array to reduce fall-off in intensity from the center to the corners of the image. Further details and aspects of the sensor were unveiled at photokina 2006.
Reception:
The Leica M8 attracted some controversy on its release due to image-quality problems reported by some users, especially an extremely high sensitivity to infrared light, which made black colors appear purple. Leica has since released a statement saying that it would send two free UV/IR screw-on filters to all future M8 purchasers and, upon request, to all current M8 users. Users experiencing other image-quality problems can apply to return their M8 for repair. However, this sensitivity to infrared light has inspired a niche of photographers who use filters that block the visible spectrum to do infrared photography.
Upgrade program:
Leica announced a perpetual upgrade program on 31 January 2008. To keep a user's M8 up to date with newer releases, owners can send their M8 to Leica for upgrades. The first upgrade offered under this program is an improved shutter designed for quieter operation, at the cost of a slower maximum shutter speed of 1/4000 s. Leica subsequently announced additional upgrades: a sapphire-glass LCD cover and more accurate bright-line frames.
Leica M8.2:
Leica announced the Leica M8.2 on 15 September 2008. The Leica M8.2 includes all the upgrades offered in the upgrade program; in addition, the black version is finished in black paint (as opposed to the black chrome finish of the standard Leica M8) and carries a black Leica branding dot. An auto "S" setting producing only JPEGs was added to the shutter-speed dial.
Leica M8.2:
Leica also introduced the M8.2 Safari edition package, limited to a production run of 500. The package includes an olive-green-painted Leica M8.2, a silver-finished Leica Elmarit-M 28mm f/2.8 ASPH lens and a matching Billingham camera case.
**Livestock crush**
Livestock crush:
A cattle crush (in UK, New Zealand, Ireland, Botswana and Australia), squeeze chute (North America), cattle chute (North America), standing stock, or simply stock (North America, Ireland) is a strongly built stall or cage for holding cattle, horses, or other livestock safely while they are examined, marked, or given veterinary treatment. Cows may be made to suckle calves in a crush. For the safety of the animal and the people attending it, a close-fitting crush may be used to ensure the animal stands "stock still". The overall purpose of a crush is to hold an animal still to minimise the risk of injury to both the animal and the operator while work on the animal is performed.
Construction:
Crushes were traditionally manufactured from wood; this, however, was prone to deterioration from the elements over time, as well as having the potential to splinter and cause injury to the animal. In recent years, most budget-quality crushes have been built using standard heavy steel pipe that is welded together, while superior-quality crushes are now manufactured using doubly symmetric oval tubing for increased bending strength, bruise minimisation and stiffness in stockyard applications. In Australia, the steel itself should ideally be manufactured to high-tensile grade 350LO–450LO and conform to Australian Standard AS 1163 for structural steel. Cattle crushes may be fully fixed or mobile; however, most crushes are best classified as semipermanent, being potentially movable but designed to primarily stay in one place. A cattle crush is typically linked to a cattle race (also known as an alley). The front end has a head bail (or neck yoke or head gate) to catch the animal and may have a baulk gate that swings aside to assist in catching the beast. The bail is often adjustable to accommodate animals of different sizes. This bail may incorporate a chin or neck bar to hold the animal's head still. A side lever operates the head bail to capture the animals, with the better types having a rear drop-away safety lever for easier movement of the cattle into the bail. Usually, smaller animals can walk through the head bails incorporated in crushes.
Construction:
Lower side panels and/or gates of sheet metal, timber or conveyor belting are used in some cases to ensure animals' legs do not get caught and reduce the likelihood of operator injury. At least one side gate is usually split to allow access to various parts of the animal being held, as well as providing access to feed a calf, amongst other things. A squeeze crush has a manual or hydraulic mechanism to squeeze the animal from the sides, immobilizing the animal while keeping bruising to a minimum. A sliding entrance gate, operated from the side of the crush, is set a few feet behind the captured animal to allow for clearance and prevent other animals entering. Crushes will, in many cases, have a single or split veterinary gate that swings behind the animal to improve operator safety, while preventing the animal from moving backwards by a horizontal rump bar inserted just behind its haunches into one of a series of slots. If this arrangement is absent, a palpation cage can be added to the crush for veterinary use when artificial insemination or pregnancy testing is being performed, or for other uses. Older crushes can also be found to have a guillotine gate that is also operated from the side via rope or chain where the gate is raised up for the animal to go under upon entering the crush, and then let down behind the animal.
Construction:
In slaughterhouses the crush is a permanent fixture in which the animal is carried on a conveyor restrainer under its belly, with its legs dangling in a slot on either side. Carried in this manner, the animal is unable to move either forward or backward of its own volition. Some mobile crushes are equipped with a set of wheels so they can be towed from yard to yard. A few of these portable crushes are built so the crush may also be used as a portable loading ramp. A mobile crush must incorporate a strong floor, to prevent the animal moving it by walking along the ground.
Construction:
Crushes vary in sophistication, according to requirements and cost. The simplest are just a part of a cattle race (alley) with a suitable head bail. More complex ones incorporate features such as automatic catching systems, hatches (to gain access to various parts of the animal), winches (to raise the feet or the whole animal), constricting sides to hold the animal firmly (normal in North American slaughterhouses), a rocking floor to prevent kicking or a weighing mechanism.
Specialist crushes:
Specialist crushes are made for various purposes. For example, those designed for cattle with very long horns (such as Highland cattle or Texas Longhorn cattle) are low-sided or very wide, to avoid damage to the horns. Other specialist crushes include those for tasks such as automatic scanning, foot-trimming or clipping the hair under the belly, and smaller crushes (calf cradles) for calves.
Specialist crushes:
Standing stocks for cattle and horses are more commonly stand-alone units, not connected to races (alleys) except for handling animals not accustomed to being handled. These stand-alone units may be permanent or portable. Some portable units disassemble for transport to shows and sales. These units are used during grooming and also with veterinary procedures performed with the animal standing, especially if it requires heavy sedation, or to permit surgery under sedation rather than general anesthesia. For some surgical procedures, this is reported to be efficient. These units are also used during some procedures that require a horse to stand still, but without sedation. There are two different types of specialised crushes used in rodeo arenas. Those for the "rough stock" events, such as bronc riding and bull riding, are known as bucking chutes or rough-riding chutes. For events such as steer roping, the crush is called a roping chute. The rough-riding chutes are notably higher in order to hold horses and adult bulls, and have platforms and rail spacing that allows riders and assistants to access the animal from above. These chutes release the animal and the rider through a side gate. A roping chute is large enough to contain a steer of the size used in steer wrestling and may also have a seat above the chute for an operator. The steer or calf is released through the front of the chute.
Hoof trimming crush:
A hoof trimming crush, also called a hoof trimming chute or hoof trimming stalls, is a crush specifically designed for the task of caring for cattle hooves, specifically trimming excess hoof material and cleaning. Such crushes range from simple standing frameworks to highly complex fixed or portable devices where much or all of the process is mechanised. Many standard crushes now come with optional fitting kits to add to a non-foot trimming crush.
Integrated weighing systems:
In recent years, crushes are often integrated with weighing systems. The crush provides the ideal opportunity to weigh and measure the animal while it is safely contained within the unit.
History:
Many cattle producers managed herds with nothing more than a race (alley) and a headgate (or a rope) until tagging requirements and disease control necessitated the installation of crushes. In the past the principal use of the crush, in England also known as a trevis, was for the shoeing of oxen. Crushes were, and in places still are, used for this purpose in North America and in many European countries. They were usually stand-alone constructions of heavy timbers or stone columns and beams. Some crushes were simple, without a head bail or yoke, while others had more sophisticated restraints and mechanisms; a common feature is a belly sling which allows the animal to be partly or wholly raised from the ground. In Spain, the crush was a village community resource and is called potro de herrar, or "shoeing frame". In France it is called travail à ferrer (plural travails, not travaux) or "shoeing trevis", and was associated with blacksmith shops. Although the word travail derives from Latin tripalium, "three beams", all surviving examples but that at Roissard have four columns. In central Italy it is called a travaglio, but in Sardinia it is referred to as sa macchina po ferrai is boisi, or "the machine for shoeing the oxen". In the United States it was called an ox sling, an ox press or shoeing stalls. In some countries, including the Netherlands and France, horses were commonly shod in the same structures. In the United States similar but smaller structures, usually called horse shoeing stocks, are still in use, primarily to assist farriers in supporting the weight of the horse's hoof and leg when shoeing draft horses.
**Ubiquitin-conjugating enzyme**
Ubiquitin-conjugating enzyme:
Ubiquitin-conjugating enzymes, also known as E2 enzymes and more rarely as ubiquitin-carrier enzymes, perform the second step in the ubiquitination reaction that targets a protein for degradation via the proteasome. The ubiquitination process covalently attaches ubiquitin, a short protein of 76 amino acids, to a lysine residue on the target protein. Once a protein has been tagged with one ubiquitin molecule, additional rounds of ubiquitination form a polyubiquitin chain that is recognized by the proteasome's 19S regulatory particle, triggering the ATP-dependent unfolding of the target protein that allows passage into the proteasome's 20S core particle, where proteases degrade the target into short peptide fragments for recycling by the cell.
Relationships:
A ubiquitin-activating enzyme, or E1, first activates the ubiquitin by covalently attaching the molecule to its active site cysteine residue. The activated ubiquitin is then transferred to an E2 cysteine. Once conjugated to ubiquitin, the E2 molecule binds one of several ubiquitin ligases or E3s via a structurally conserved binding region. The E3 molecule is responsible for binding the target protein substrate and transferring the ubiquitin from the E2 cysteine to a lysine residue on the target protein.
Relationships:
A particular cell usually contains only a few types of E1 molecule, a greater diversity of E2s, and a very large variety of E3s. In humans, there are about 30 E2s, which can bind with one of the 600+ E3s. The E3 molecules, responsible for substrate identification and binding, thus provide the substrate specificity of proteasomal degradation. Each type of E2 can associate with many E3s. E2s can also be used to study protein folding mechanisms: since the ubiquitylation system is shared across all organisms, studies can use modified E2 proteins to understand how organisms in general process proteins. There are also some proteins that can act as both an E2 and an E3, containing domains that provide both E2 and E3 functionality.
Isozymes:
The following human genes encode ubiquitin-conjugating enzymes:
**Corel Photo-Paint**
Corel Photo-Paint:
Corel Photo-Paint is a raster graphics editor developed and marketed by Corel since 1992. Corel markets the software for Windows and Mac OS operating systems, previously having marketed versions for Linux (Version 9, requiring Wine). Its primary market competitor is Adobe Photoshop.
In 2006, Corel released version 13 as Photo-Paint X3, employing this naming convention for subsequent releases as well as for CorelDraw, included with Photo-Paint in CorelDraw Graphics Suite. The current version is Photo-Paint 2020. Corel has marketed a limited edition of Photo-Paint called Corel Photo-Paint SELECT with HP scanning hardware, e.g., the HP ScanJet 5p scanners.
Features:
Photo-Paint's native format is .CPT (Corel Photo-Paint Image), which stores image data as well as information within an image, including objects (layers in some raster editors), colour profiles, text, transparency and effect filters.
Features:
The program can open and convert vector formats from CorelDraw and Adobe Illustrator and can open other formats, including PNG, JPG and GIF files, as well as competing photo-editor formats from Photoshop, GIMP and Paint Shop Pro (the latter also a Corel product). The program also supports plug-ins, including those developed for Adobe Photoshop and Paint Shop Pro, and other extensions such as brushes are also compatible with Photo-Paint.

Corel Photo-Paint X6–X7 supports OpenType font features. With X7 Update 4, the font list allows fonts to be filtered by weight, width, supported scripts, font technology, character range and style.

Like other raster graphics editors, Corel Photo-Paint allows an image to be edited in multiple layers, called objects here. A gradient going from opaque to transparent, for instance, can be used to have a darker foreground colour fade into a lighter background colour. The UI is highly customizable, and the user can freely move dialogs or adjust button sizes. Effects can be applied to a picture, including Smart Blur (a type of Gaussian blur that retains sharpness around edges), Mesh Warp, Camera Lens Flare, Trace Contour and others. There is limited support for integrating vector paths. Depending on personal preferences and work style, users may prefer Corel Photo-Paint over Adobe Photoshop or the other way round, though in terms of market share Photoshop is clearly more widely used.

As a component of the CorelDraw Graphics Suite, Photo-Paint can exchange data with other programs in the suite, including Corel Connect (versions X5–X7), which enables users to share files between different programs and drives on the user's computer. CorelDraw and Photo-Paint are also copy-paste compatible, with format and effect retention and without file conversion.

Just as in CorelDraw, Photo-Paint tasks can be automated with scripts and macros, using both Corel Script and Microsoft's VBA (Visual Basic for Applications) and VSTA (Visual Studio Tools for Applications). Corel calls the smaller macros created with Corel Script "scripts", and the scripts created with the Microsoft tools "macros".
**Richard William Byrne**
Richard William Byrne:
Richard William Byrne is an Emeritus Professor in the School of Psychology and Neuroscience of the University of St Andrews. With an h-index of 77, he is renowned in the area of the evolution of cognitive and social behavior, such as Machiavellian intelligence.
Selected research:
Townsend, S.W., Koski, S.E., Byrne, R.W., Slocombe, K.E., Bickel, B., Boeckle, M., Braga Goncalves, I., Burkart, J.M., Flower, T., Gaunet, F. and Glock, H.J., 2017. Exorcising Grice's ghost: An empirical approach to studying intentional communication in animals. Biological Reviews, 92(3), pp. 1427–1433.
Hobaiter, C. and Byrne, R.W., 2014. The meanings of chimpanzee gestures. Current Biology, 24(14), pp. 1596–1600.
Hobaiter, C. and Byrne, R.W., 2011. Serial gesturing by wild chimpanzees: its nature and function for communication. Animal cognition, 14(6), pp. 827–838.
Hobaiter, C. and Byrne, R.W., 2011. The gestural repertoire of the wild chimpanzee. Animal cognition, 14(5), pp. 745–767.
Whiten, A. and Byrne, R.W., 1988. Taking (Machiavellian) intelligence apart. Clarendon Press/Oxford University Press.
**Cardiac examination**
Cardiac examination:
In medicine, the cardiac examination, also precordial exam, is performed as part of a physical examination, or when a patient presents with chest pain suggestive of a cardiovascular pathology. It would typically be modified depending on the indication and integrated with other examinations especially the respiratory examination.
Like all medical examinations, the cardiac examination follows the standard structure of inspection, palpation and auscultation.
Positioning:
The patient is positioned supine, tilted up at 45 degrees if they can tolerate this. The head should rest on a pillow and the arms lie by the sides. The level of the jugular venous pressure (JVP) should only be commented on in this position, as flatter or steeper angles lead to an artificially elevated or reduced level, respectively. Also, left ventricular failure leads to pulmonary edema, which worsens and may impede breathing if the patient is laid flat.
Positioning:
Lighting should be adjusted so that it is not obscured by the examiner, who approaches from the right-hand side of the patient, as is medical custom.
The torso and neck should be fully exposed and access should be available to the legs.
Inspection:
General inspection: note whether the patient is comfortable at rest or obviously short of breath.
Inspect the neck for increased jugular venous pressure (JVP) or abnormal waves.
Any abnormal movements such as head bobbing.
Inspection:
There are specific signs associated with cardiac illness and abnormality; however, any cutaneous sign noticed during inspection should be noted. Inspect the hands for: temperature (described as warm or cool, clammy or dry); skin turgor, for hydration; Janeway lesions; and Osler's nodes. At the nails, look for splinter hemorrhages and Quincke's pulsation, as well as any nail deformity such as Beau's lines, clubbing or peripheral cyanosis. Inspect the head, starting with the cheeks for the malar flush of mitral stenosis.
Inspection:
The eyes for corneal arcus and the surrounding tissue for xanthelasma.
Conjunctival pallor, a sign of anemia.
The mouth for hygiene.
The mucosa for hydration and pallor or central cyanosis.
The ear lobes for Frank's sign. Then inspect the precordium for: visible pulsations, the apex beat, masses, scars, lesions, signs of trauma and previous surgery (e.g. median sternotomy), a permanent pacemaker, and praecordial bulge.
Palpation:
The pulses should be palpated: first the radial pulse, commenting on rate and rhythm; then the brachial pulse, commenting on character; and finally the carotid pulse, again for character.
The pulses may be: bounding, as in the large pulse pressure found in aortic regurgitation or CO2 retention.
The rhythm should be assessed as regular, regularly irregular or irregularly irregular.
The consistency of pulse strength, to assess for pulsus alternans.
Slow-rising, as found in aortic stenosis (known as pulsus parvus et tardus); or jerky, as found in HOCM. Pulses can also be auscultated for features such as Traube's "pistol shot" femoral pulse.
Palpation of the precordium: The valve areas are palpated for abnormal pulsations (palpable heart murmurs, known as thrills) and precordial movements (known as heaves). Heaves are best felt with the heel of the hand at the sternal border.
Palpation:
Palpation of the apex beat: The apex beat is found approximately in the fifth left intercostal space in the mid-clavicular line. It can be impalpable for a variety of reasons, including obesity, emphysema, effusion and, rarely, dextrocardia. The apex beat is assessed for size, amplitude, location, impulse and duration. Specific terms describe the sensation, such as tapping, heaving and thrusting.
Palpation:
Often the apex beat is felt diffusely over a large area; in this case, the most inferior and lateral position at which it can be felt should be described, as well as the location of the largest amplitude.
Finally the sacrum and ankles are checked for pitting edema which is caused by right ventricular failure in isolation or as part of congestive cardiac failure.
Auscultation:
One should comment on S1 and S2: whether the splitting is abnormal or the sounds are louder than usual.
S3 – the emphasis and timing of the syllables in the word Kentucky is similar to the pattern of sounds in a precordial S3.
S4 – the emphasis and timing of the syllables in the word Tennessee is similar to the pattern of sounds in a precordial S4.
If S4, S1, S2 and S3 are all present, this is known as a gallop rhythm.
Listen for diastolic murmurs (e.g. aortic regurgitation, mitral stenosis), systolic murmurs (e.g. aortic stenosis, mitral regurgitation) and a pericardial rub (suggestive of pericarditis). The bases of the lungs should be auscultated for signs of pulmonary edema due to a cardiac cause, such as bilateral basal crepitations.
Completion of examination:
To complete the exam, blood pressure should be checked, an ECG recorded, and funduscopy performed to assess for Roth spots or papilledema. A full peripheral circulation exam should also be performed.
**Nordion**
Nordion:
Nordion Inc., a Sotera Health company, is a health science company that provides Cobalt-60 used for sterilization and treatment of disease (radiotherapy). Nordion is headquartered in Ottawa, Ontario, Canada, with facilities in Vancouver, British Columbia and Laval, Quebec. Kevin Brooks is the company's CEO. It was acquired by Sotera Health in 2014 for US$805 million (equivalent to $919.32 million in 2022).
History:
Founded in 1946, originally the radium sales department of Eldorado Mining and Refining Ltd., the division developed one of the first teletherapy units that used the radioisotope cobalt-60 to destroy cancerous tumours.
Soon after, the division was given responsibility for selling radioisotopes produced by the newly established Chalk River Nuclear Laboratories, a nuclear research facility at Chalk River, Ontario. As a result, in 1951, Eldorado established a commercial products division (CPD) to manage the isotope business, especially cobalt-60 used in cancer treatment.
In 1952, the federal government created Atomic Energy of Canada Limited (AECL), a Crown corporation. Shortly thereafter, CPD was transferred to AECL, where it remained for the next 40 years and was renamed the radio-chemical division.
In 1988, ownership of the radio-chemical division was transferred from AECL to the Canadian Development Investment Corporation (CDIC). The company assumed a new name, Nordion International Inc. and was later sold to MDS Health Group in 1991.
History:
In 2010, MDS Inc. completed a strategic repositioning which saw the company divest its MDS Analytical Technologies and MDS Pharma Services businesses. Also in 2010, shareholders of MDS Inc. approved a change of name from MDS Inc. to Nordion Inc.; the company officially changed its name on November 1, 2010. In July 2013, Nordion completed the divestiture of its Targeted Therapies business to BTG plc. The company then focused on its sterilization technologies and medical isotopes businesses.
History:
The company generated US$244.8 million (equivalent to $279.57 million in 2022) in revenues in the 2012 fiscal year, with over 70% of its revenue coming from within North America. It was acquired by Sotera Health in 2014 for US$805 million (equivalent to $919.32 million in 2022). The company's press release stated that the acquisition created "the only vertically integrated sterilization company in the world." Nordion sold its Medical Isotopes business in 2018.
Products:
Gamma technologies: Customers use Nordion's gamma-sterilization technologies to sterilize medical and surgical supplies and devices, as well as certain consumer products, such as food and cosmetics.
Nordion supplies cobalt-60, the isotope that produces the gamma radiation required to destroy harmful micro-organisms. The company also designs and sells a family of production irradiators.
Locations:
The Nordion corporate headquarters are located in Ottawa, Ontario, Canada. The headquarters are the main manufacturing facilities for medical isotopes, used in medical imaging and radiopharmaceuticals, and for cobalt-60 sources and industrial food irradiators.
The Nordion Gamma Centre of Excellence (GCE) is a gamma irradiation research, training, and demonstration facility located in Laval, Quebec, Canada. The GCE is operated in partnership with the University of Quebec's Armand Frappier Institute.
Nordion has an Asia Pacific Sales Office in Hong Kong.