**Greenfish recirculation technology** Greenfish recirculation technology: Developed in Sweden, the Greenfish recirculation technology is a water purification technology for sustainable aquaculture production in closed indoor freshwater systems. It was developed at Gothenburg University by Björn Lindén in collaboration with Chalmers associate professor Torsten Wik, under the supervision of professor emeritus Gustaf Olsson at Lund University of Technology. Greenfish recirculation technology: Several articles on the system have been published, and the system has been verified in full-scale farming operations with wet and semi-moist fish feed. One of the most important articles describes an advanced simulator for full-scale recirculation in an aquaculture system, with algorithms for complete mass balance calculations covering growth of the fish, addition of fish feed, production of waste, bacterial growth, and the dynamics of the water purification system. Greenfish recirculation technology: No fewer than 28 different bacterial-substrate parameters are described to simulate the water purification dynamics of the system. The microbiology and water purification engineering underlying the system rest on an extensive body of published research, documented in further references.
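The source does not include the simulator's equations; the following is a minimal, hypothetical Java sketch of the kind of mass-balance bookkeeping such a simulator performs for a single tank. All constants are illustrative, and the Greenfish simulator itself tracks 28 bacterial-substrate parameters rather than this single ammonia balance.

```java
/**
 * Minimal illustrative mass balance for one recirculating tank:
 * waste produced by the fish enters the water, the biofilter removes
 * a fraction of it, and the concentration is integrated forward in time.
 * This is NOT the Greenfish simulator; it only sketches the principle.
 */
public class TankMassBalance {
    public static void main(String[] args) {
        double volume = 10_000.0;       // tank volume [L] (illustrative)
        double concentration = 0.0;     // ammonia concentration [mg/L]
        double wasteLoad = 500.0;       // waste production [mg/h] (illustrative)
        double removalRate = 0.08;      // first-order biofilter removal [1/h]
        double dt = 0.1;                // time step [h]

        for (double t = 0.0; t <= 48.0; t += dt) {
            // dC/dt = load/V - k*C  (simple first-order purification)
            double dCdt = wasteLoad / volume - removalRate * concentration;
            concentration += dCdt * dt; // explicit Euler step
        }
        System.out.printf("Concentration after 48 h: %.4f mg/L%n", concentration);
        // Analytic steady state for comparison: load / (k * V)
        System.out.printf("Steady state: %.4f mg/L%n", 500.0 / (0.08 * 10_000.0));
    }
}
```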
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jakarta Activation** Jakarta Activation: Within computing, Jakarta Activation (JAF; formerly JavaBeans Activation Framework) is a Jakarta EE API that enables developers to determine the type of an arbitrary piece of data, encapsulate access to it, discover the operations available on it, and instantiate the appropriate bean to perform the operation(s). It also enables developers to dynamically register types of arbitrary data and the actions associated with particular kinds of data, and it enables a program to dynamically provide or retrieve JavaBeans that implement actions associated with some kind of data. Originally an extension API, it was available as a standard API in Java SE (from Java SE 6 on) and Java EE, but was removed from Java SE in version 11.

DataSource interface: Provides access to an arbitrary collection of data: the name of the data, the data-type name (content type), and the data itself as an InputStream or OutputStream. Two implementation classes are provided: URLDataSource, which simplifies the handling of data described by URLs, and FileDataSource, a simple DataSource that encapsulates a file and delegates data-typing services to a FileTypeMap object. Other implementations include javax.mail.internet.MimePartDataSource and javax.mail.util.ByteArrayDataSource.

DataContentHandler interface: Converts an object to a byte stream and writes it to an output stream, and converts streams back into objects. Used to obtain an object or data that can be transferred. It uses java.awt.datatransfer.DataFlavor to indicate the data that can be accessed; a DataFlavor is a data format as it would appear on a clipboard, during drag and drop, or in a file system.

CommandMap class: An abstract class that provides an interface to a registry of command objects available in the system. Developers can write their own implementation or use the MailcapCommandMap class, which implements a CommandMap whose configuration is based on mailcap files (RFC 1524). The list of commands available for a MIME type is stored in CommandInfo objects.

CommandObject interface: The interface to be implemented by JavaBeans components that are activation-framework aware. It is a simple interface with one method: setCommandContext(String verb, DataHandler dh).
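A minimal sketch of the data-typing side of the API using the standard FileDataSource and DataHandler classes; the file name sample.html is a placeholder:

```java
import jakarta.activation.DataHandler;
import jakarta.activation.FileDataSource;

import java.io.IOException;
import java.io.InputStream;

public class ActivationDemo {
    public static void main(String[] args) throws IOException {
        // FileDataSource encapsulates a file; content-type detection is
        // delegated to a FileTypeMap (by default driven by file extension).
        FileDataSource ds = new FileDataSource("sample.html"); // placeholder file

        // DataHandler gives uniform access to the data and its operations.
        DataHandler handler = new DataHandler(ds);

        System.out.println("Name:         " + handler.getName());
        System.out.println("Content type: " + handler.getContentType());

        // The data itself is available as an InputStream.
        try (InputStream in = handler.getInputStream()) {
            System.out.println("First byte:   " + in.read());
        }
    }
}
```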
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rhombic icosahedron** Rhombic icosahedron: The rhombic icosahedron is a polyhedron shaped like an oblate sphere. Its 20 faces are congruent golden rhombi; 3, 4, or 5 faces meet at each vertex. It has 5 faces (green on the top figure) meeting at each of its 2 poles; these 2 vertices lie on its axis of 5-fold symmetry, which is perpendicular to 5 axes of 2-fold symmetry through the midpoints of opposite equatorial edges (example on the top figure: the leftmost and rightmost mid-edges). Its other 10 faces follow its equator, 5 above and 5 below it; each of these 10 rhombi has 2 of its 4 sides lying on this zig-zag skew decagon equator. The rhombic icosahedron has 22 vertices. It has D5d, [2+,10], (2*5) symmetry group, of order 20; it thus has a center of symmetry (since 5 is odd). Rhombic icosahedron: Even though all its faces are congruent, the rhombic icosahedron is not face-transitive, since one can distinguish whether a particular face is near the equator or near a pole by examining the types of vertices surrounding that face. Zonohedron: The rhombic icosahedron is a zonohedron, dual to a pentagonal gyrobicupola with regular triangular, regular pentagonal, but irregular quadrilateral faces. The rhombic icosahedron has 5 sets of 8 parallel edges, described as 8⁵ belts. The rhombic icosahedron forms the convex hull of the vertex-first projection of a 5-cube to 3 dimensions. The 32 vertices of a 5-cube map onto the 22 exterior vertices of the rhombic icosahedron, with the remaining 10 interior vertices forming a pentagonal antiprism. In the same way, one can obtain a Bilinski dodecahedron from a 4-cube, and a rhombic triacontahedron from a 6-cube. Related polyhedra: The rhombic icosahedron can be derived from the rhombic triacontahedron by removing a belt of 10 middle faces. For example (left-hand figure): the orthogonal projection of the (vertical) belt of 10 middle faces of the rhombic triacontahedron is just the (horizontal) exterior regular decagon of the common orthogonal projection.
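As a quick consistency check of the counts quoted above (20 rhombic faces, every edge shared by two faces), Euler's formula recovers the 22 vertices; the golden-rhombus condition on the face diagonals is also shown:

```latex
% 20 rhombic faces with 4 edges each, every edge shared by 2 faces:
\[
  E = \frac{20 \times 4}{2} = 40, \qquad
  V = 2 + E - F = 2 + 40 - 20 = 22.
\]
% The faces are golden rhombi: diagonals p > q are in the golden ratio.
\[
  \frac{p}{q} = \varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618.
\]
```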
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Google Neural Machine Translation** Google Neural Machine Translation: Google Neural Machine Translation (GNMT) is a neural machine translation (NMT) system developed by Google and introduced in November 2016 that uses an artificial neural network to increase fluency and accuracy in Google Translate. The neural network consists of two main blocks, an encoder and a decoder, both of LSTM architecture with 8 1024-wide layers each and a simple 1-layer 1024-wide feedforward attention mechanism connecting them. The total number of parameters has been variously described as over 160 million, approximately 210 million, 278 million, or 380 million. GNMT improves on the quality of translation by applying an example-based (EBMT) machine translation method in which the system learns from millions of examples of language translation. GNMT's proposed architecture of system learning was first tested on over a hundred languages supported by Google Translate. With the large end-to-end framework, the system learns over time to create better, more natural translations. GNMT attempts to translate whole sentences at a time, rather than just piece by piece. The GNMT network can undertake interlingual machine translation by encoding the semantics of the sentence, rather than by memorizing phrase-to-phrase translations. History: The Google Brain project was established in 2011 in the "secretive Google X research lab" by Google Fellow Jeff Dean, Google Researcher Greg Corrado, and Stanford University computer science professor Andrew Ng. Ng's work has led to some of the biggest breakthroughs at Google and Stanford. In November 2016, the Google Neural Machine Translation system (GNMT) was introduced, and Google Translate began using neural machine translation (NMT) in preference to the statistical methods (SMT) it had used since October 2007 with its proprietary, in-house SMT technology. Training GNMT was a big effort at the time and took, by a 2021 OpenAI estimate, on the order of 100 PFLOP/s-days (up to 10^22 FLOPs) of compute, which was 1.5 orders of magnitude more than the seq2seq model of 2014 (but about 2x smaller than GPT-J-6B in 2021); a worked unit conversion appears at the end of this entry. History: Google Translate's NMT system uses a large artificial neural network capable of deep learning. By using millions of examples, GNMT improves the quality of translation, using broader context to deduce the most relevant translation. The result is then rearranged and adapted to approach grammatically correct human language. GNMT did not create its own universal interlingua but rather aimed at finding the commonality between many languages, using insights from psychology and linguistics. The new translation engine was first enabled for eight languages, to and from English: French, German, Spanish, Portuguese, Chinese, Japanese, Korean and Turkish, in November 2016. In March 2017, three additional languages were enabled: Russian, Hindi and Vietnamese, along with Thai, for which support was added later. Support for Hebrew and Arabic was also added with help from the Google Translate Community in the same month. In mid-April 2017 Google Netherlands announced support for Dutch and other European languages related to English. Further support was added for nine Indian languages (Hindi, Bengali, Marathi, Gujarati, Punjabi, Tamil, Telugu, Malayalam and Kannada) at the end of April 2017.
Evaluation: The GNMT system is said to represent an improvement over the former Google Translate in that it can handle "zero-shot translation", that is, it directly translates one language into another (for example, Japanese to Korean). Google Translate previously first translated the source language into English and then translated the English into the target language, rather than translating directly from one language to another. A July 2019 study in Annals of Internal Medicine found that "Google Translate is a viable, accurate tool for translating non–English-language trials". Only one disagreement between reviewers reading machine-translated trials was due to a translation error. Since many medical studies are excluded from systematic reviews because the reviewers do not understand the language, GNMT has the potential to reduce bias and improve accuracy in such reviews. Languages supported by GNMT: As of December 2021, all of the languages of Google Translate support GNMT, with Latin being the most recent addition.
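The compute figure quoted in the History section can be sanity-checked by unit conversion; a PFLOP/s-day is one petaFLOP per second sustained for one day:

```latex
\[
  100\ \text{PFLOP/s-days}
  = 100 \times 10^{15}\,\tfrac{\text{FLOP}}{\text{s}} \times 86{,}400\,\text{s}
  \approx 8.64 \times 10^{21}\ \text{FLOPs}
  \approx 10^{22}\ \text{FLOPs}.
\]
```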
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stolen base** Stolen base: In baseball, a stolen base occurs when a runner advances to a base unaided by other actions and the official scorer rules that the advance should be credited to the action of the runner. The umpires determine whether the runner is safe or out at the next base, but the official scorer rules on the question of credit or blame for the advance under Rule 10 (Rules of Scoring) of MLB's Official Rules. A stolen base most often occurs when a base runner advances to the next base while the pitcher is pitching the ball to home plate. Stolen base: Successful base stealers must be fast and have good timing. Background: Ned Cuthbert, playing for the Philadelphia Keystones in either 1863 or 1865, was the first player to steal a base in a baseball game, although the term stolen base was not used until 1870. For a time in the 19th century, stolen bases were credited when a baserunner reached an extra base on a base hit from another player. For example, if a runner on first base reached third base on a single, it counted as a steal. In 1887, Hugh Nicol set a still-standing Major League record with 138 stolen bases, many of which would not have counted under modern rules. Modern steal rules were fully implemented in 1898. Background: Base stealing was popular in the game's early decades, with speedsters such as Ty Cobb and Clyde Milan stealing nearly 100 bases in a season. But the tactic fell into relative disuse after Babe Ruth introduced the era of the home run; in 1955, for example, no one in baseball stole more than 25 bases, and Dom DiMaggio won the AL stolen base title in 1950 with just 15. However, in the late 1950s and early 1960s, base-stealing was brought back to prominence primarily by Luis Aparicio and Maury Wills, who broke Cobb's modern single-season record by stealing 104 bases in 1962. Wills' record was broken in turn by Lou Brock in 1974 and Rickey Henderson in 1982. The stolen base remained a popular tactic through the 1980s, perhaps best exemplified by Vince Coleman and the St. Louis Cardinals, but began to decline again in the 1990s as the frequency of home runs reached record heights and the steal-friendly artificial turf ballparks began to disappear. Background: Base stealing is an important characteristic of the "small ball" managing style (or "manufacturing runs"). Such managers emphasize "doing the little things" (including risky running plays like base-stealing) to advance runners and score runs, often relying on pitching and defense to keep games close. The Los Angeles Dodgers of the 1960s, led by pitcher Sandy Koufax and speedy shortstop Maury Wills, were a successful example of this style. The antithesis of this is reliance on power hitting, exemplified by the Baltimore Orioles of the 1970s, who aspired to score most of their runs via home runs. Often the "small ball" model is associated with the National League, while power hitting is associated with the American League. However, some successful recent American League teams, including the 2002 Anaheim Angels, the 2001 Seattle Mariners, the 2005 Chicago White Sox, and the 2015 Kansas City Royals, have excelled at "small ball." The Royals in particular embodied this style within the last decade, leading the league in stolen bases but finishing last in home runs in 2013 and 2014, leading to a berth in two consecutive World Series, one of which they won.
Successful teams often combine both styles, with speedy runners complementing power hitters—such as the 2005 White Sox, who hit 200 home runs, which was fifth most in the majors, and had 137 stolen bases, which was fourth. Base-stealing technique: Baseball's Rule 8 (The Pitcher) specifies the pitching procedure in detail. For example, in the Set Position, the pitcher must "com[e] to a complete stop"; thereafter, "any natural motion associated with his delivery of the ball to the batter commits him to the pitch without alteration or interruption." A runner intending to "steal on the pitcher" breaks for the next base the moment the pitcher commits to pitch to home plate. The pitcher cannot abort the pitch and try to put the runner out; this is a balk under Rule 8. Base-stealing technique: If the runner breaks too soon (before the pitcher is obliged to complete a pitch), the pitcher may throw to a base rather than pitch, and the runner is usually picked off by being tagged out between the bases. Past this moment, any delay in the runner's break makes it more likely that the catcher, after receiving the pitch, will be able to throw the runner out at the destination base. Base-stealing technique: Before the pitch, the runner takes a lead, walking several steps away from the base as a head start toward the next base. Even a runner who does not intend to steal takes a secondary lead of a few more steps, once the pitcher has legally committed to complete the pitch. Base-stealing technique: The pitcher may throw to the runner's base. The runner must return to that base or risk being tagged out. As well as putting the runner out, an underlying goal is to dissuade the runner from too big a lead; that is, to hold the runner on the original base. (Historically, this gambit could be used without limit. An MLB rules change in 2023 limited the pitcher to two throws; the pitcher must then pitch to the batter.) The more adept base stealers are proficient at reading the pickoff, meaning that they can detect certain tells (tell-tale signs) in a pitcher's pre-pitch movements or mannerisms that indicate the pickoff attempt is or is not imminent. For example, one experienced base stealer noted that careless pitchers dig the toes on their back foot into the ground when they are about to pitch in order to get a better push off, but when they intend to turn and throw a pickoff, they do not.If a batted ball is caught on the fly, the runner must return to his original base. In this case, a runner trying to steal is more likely to be caught off his original base, resulting in a double play. This is a minor risk of a steal attempt. It is offset by the fact that a ground ball double play is less likely. Plays involving baserunning: In the hit-and-run play, coaches coordinate the actions of runner and batter. The runner tries to steal and the batter swings at almost any pitch, if only to distract the catcher. If the batter makes contact, the runner has a greater chance of reaching the next base; if the batter gets a base hit, the runner will likely be able to take an extra base. If the batter fails to hit the ball, the hit-and-run becomes a pure steal attempt. Plays involving baserunning: The less common cousin to the hit and run is the "run and hit" play. In the run and hit, the base runner attempts to advance when the pitcher commits the pitch to home plate, but the batter is instead directed to exercise his judgement as to whether or not to swing at the pitch. 
If the batter feels it is not advantageous to swing, and he believes the base runner is very likely to succeed in the steal attempt, he does not swing. This play is typically utilized with elite base stealers and skilled batters only, wherein a highly experienced batter is trusted to decide whether or not to "protect" the base runner. If the batter chooses not to swing, it becomes a pure steal attempt. Plays involving baserunning: In the delayed steal, the runner does not take advantage of the pitcher's duty to complete a pitch, but relies on surprise and takes advantage of any complacency by the fielders. The runner gives the impression he is not trying to steal, and does not break for the next base until the ball crosses the plate. It is rare for Major League defenses to be fooled, but the play is used effectively at the college level. The first delayed steal on record was performed by Miller Huggins in 1903. The delayed steal was famously practiced by Eddie Stanky of the Brooklyn Dodgers. Plays involving baserunning: Second base is the base most often stolen, because once a runner is on second base he is considered to be in scoring position, meaning that he is expected to be able to run home and score on most routine singles hit into the outfield. Second base is also the easiest to steal, as it is farthest from home plate and thus a longer throw from the catcher is required to prevent it. Third base is a shorter throw for the catcher, but the runner is able to take a longer lead off second base and can leave for third base earlier against a left-handed pitcher. A steal of home plate is the riskiest, as the catcher only needs to tag out the runner after receiving the ball from the pitcher. It is difficult for the runner to cover the distance between the bases before the ball arrives home. Ty Cobb holds the records for most steals of home in a single season (8) as well as for a career (54). Steals of home are not officially recorded statistics, and must be researched through individual game accounts. Thus Cobb's totals may be even greater than is recorded. Jackie Robinson famously stole home in Game 1 of the 1955 World Series. Thirty-five games have ended with a runner stealing home, but only two have occurred since 1980. In a variation on the steal of home, the batter is signaled to simultaneously execute a sacrifice bunt, which results in the squeeze play. The suicide squeeze is a squeeze in which the runner on third begins to steal home without seeing the outcome of the bunt; it is so named because if the batter fails to bunt, the runner will surely be out. In contrast, when the runner on third does not commit until seeing that the ball is bunted advantageously, it is called a safety squeeze. Plays involving baserunning: In more recent years, most steals of home involve a delayed double steal, in which a runner on first attempts to steal second, while the runner on third breaks for home as soon as the catcher throws to second base. If it is important to prevent the run from scoring, the catcher may hold on to the ball (conceding the steal of second) or may throw to the pitcher; this may deceive the runner at third, and the pitcher may throw back to the catcher for the out. Statistics: In baseball statistics, stolen bases are denoted by "SB". Attempts to steal that result in the baserunner being out are caught stealing ("CS"). The sum of these statistics is steal attempts. Successful steals as a percentage of total steal attempts is called the success rate.
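From the definitions just given, the success rate is a simple ratio. As a worked example, the career figures for Carlos Beltrán cited below (286 steals at an 88.3% success rate) imply roughly 324 attempts, i.e. about 38 caught stealing; the CS figure is back-calculated from the quoted numbers, not an official count:

```latex
\[
  \text{success rate} = \frac{SB}{SB + CS}, \qquad
  \frac{286}{286 + 38} = \frac{286}{324} \approx 0.883 = 88.3\%.
\]
```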
The rule on stolen bases states that advances credited to some other play are not steal attempts. For example, on a wild pitch or a passed ball, the official scorer must notice whether the runner broke for the next base before the pitch got away. Statistics: As usual, statistics in the case of a defensive error are based on error-free play. If a runner would have been out but for the error, it is scored as "caught stealing, safe on the error." A catcher does not commit an error by throwing poorly to the destination base, but if any runner takes an extra base on the bad throw, it is "stolen base plus error." There is no steal attempt on a dead ball, whether the runner is sent back to the original base (as on a foul ball) or is awarded the next base (as on a hit batsman). On a base award when the ball is live (such as a walk), the runner could make a steal attempt beyond the base awarded. Statistics: Cases where the defense intentionally allows the runner to advance without attempting to put him out are scored as defensive indifference, also called fielder's indifference, and do not count as stolen bases. This is usually only scored late in games when it is clear that the defense's priority is getting the batter out. The lack of a putout attempt does not by itself indicate defensive indifference; the official scorer must also factor in the game situation and the defensive players' actions. Relative skill at stealing bases can be judged by evaluating either a player's total number of steals or the success rate. Noted statistician Bill James has argued that unless a player has a high success rate (67–70% or better), attempting to steal a base is detrimental to a team. Comparing skill against players from other eras is problematic, because the definition has not been constant. Caught stealing was not recorded regularly until the middle of the 20th century. Ty Cobb, for example, was known as a great base-stealer, with 892 steals and a success rate of over 83%. However, the data on Cobb's caught stealing is missing from 12 seasons, strongly suggesting he was unsuccessful many more times than his stats indicate. Carlos Beltrán, with 286 steals, has the highest career success rate of all players with over 300 stolen base attempts, at 88.3%. Evolution of rules and scoring: The first mention of the stolen base as a statistic was in the 1877 scoring rules adopted by the National League, which noted credit toward a player's total bases when a base is stolen. It was not until 1886 that the stolen base appeared as something to be tracked, but it was only to "appear in the summary of the game". In 1887, the stolen base was given its own individual statistical column in the box score, and was defined for purposes of scoring: "...every base made after first base has been reached by a base runner, except for those made by reason of or with the aid of a battery error (wild pitch or passed ball), or by batting, balks or by being forced off. In short, shall include all bases made by a clean steal, or through a wild throw or muff of the ball by a fielder who is directly trying to put the base runner out while attempting to steal." The next year, it was clarified that any attempt to steal must be credited to the runner, and that fielders committing errors during this play must also be charged with an error.
This rule also clarified that advancement of another base (or bases) beyond the one being stolen is not credited as a stolen base on the same play, and that an error is charged to the fielder who permitted the extra advancement. There was clarification that a runner is credited with a steal if the attempt began before a battery error. Finally, batters were credited with a stolen base if they were tagged out after overrunning the base. In 1892, a rule credited runners with stolen bases if a base runner advanced on a fly out, or if they advanced more than one base on any safe hit or attempted out, provided an attempt was made by the defense to put the runner out. The rule was rescinded in 1897. In 1898, stolen base scoring was narrowed to no longer include advancement in the event of a fielding error, or advancement caused by a hit batsman. 1904 saw an attempt to reduce the already wordy slew of rules governing stolen bases, with the stolen base now credited when "the baserunner [sic] advances a base unaided by a base hit, a put out, (or) a fielding or batter error." 1910 saw the first addressing of double and triple steal attempts. Under the new rule, when any runner is thrown out and the other(s) are successful, the successful runners will not be credited with a stolen base. Without using the term, 1920 saw the first rule that would be referred to today as defensive indifference, as stolen bases would not be credited unless an effort was made by the defense to stop the runner. This is usually called if such is attempted in the ninth inning while that player's team is trailing, unless the runner represents the potential tying run. 1931 saw a further narrowing of the criteria for awarding a stolen base. Power was given to the official scorer, in the event of a muff by the catcher in throwing, to credit the catcher with an error and not credit the runner with a stolen base if, in the judgment of the scorer, the runner would have been out. Further, any successful steal on a play resulting in a wild pitch, passed ball, or balk would no longer be credited as a steal, even if the runner had started to steal before the play. One of the largest rewrites of the rules in history came in 1950. The stolen base was specifically to be credited "to a runner whenever he advances one base unaided by a base hit, a putout, a forceout, a fielder's choice, a passed ball, a wild pitch, or a balk." There were noted exceptions, such as denying a stolen base to an otherwise successful steal as part of a double or triple steal if one other runner was thrown out in the process. A stolen base would be awarded to runners who successfully stole second base as part of a double steal with a man on third, if the other runner failed to steal home but instead was able to return safely to third base. Runners who are tagged out oversliding the base after an otherwise successful steal would not be credited with a stolen base. Indifference was also credited as an exception. Runners would now be credited with stolen bases if they had begun the act of stealing and the resulting pitch was wild, or a passed ball. Finally, for 1950 only, runners would be credited with a stolen base if they were "well advanced" toward the base they were attempting to steal and the pitcher is charged with a balk, with the further exception of a player attempting to steal who would otherwise have been forced to advance on the balk by a runner behind them.
This rule was removed in 1951. A clarification came in 1955 that awarded a stolen base to a runner even if he became involved in a rundown, provided he evaded the rundown and advanced to the base he intended to steal. The criteria for "caught stealing" were fine-tuned in 1979, with a runner being charged with being caught if he is put out while trying to steal, overslides a base (otherwise successfully stolen), or is picked off a base and tries to advance to the next base. It is explicitly not caught stealing to be put out after a wild pitch or passed ball. "Stealing first": While not recorded as a stolen base, the same dynamic between batter/runner and defense is on display in the case of an uncaught third strike. The batter/runner can avoid an out and become a baserunner by reaching first base ahead of the throw. This case is a strikeout that is not an out; the batter/runner's acquisition of first base is scored as a passed ball, a wild pitch, or an error. In baseball's earlier decades, a runner on second base could "steal" first base, perhaps with the intention of drawing a throw that might allow a runner on third to score (a tactic famously employed by Germany Schaefer). However, such a tactic was not recorded as a stolen base. MLB rules now forbid running clockwise on the basepaths to "confuse the defense or make a travesty of the game". Further, after the pitcher assumes the pitching position, runners cannot return to any previous base. In a game on August 16, 1987, Toronto Blue Jays center fielder Lloyd Moseby successfully stole second base on a throwing error by Chicago White Sox catcher Carlton Fisk that went well into center field. However, shortstop Ozzie Guillen faked as if the batter had hit a popfly, which would have required Moseby to return to first base to avoid getting doubled off. Moseby made it back to first base, but another throwing error sent the ball to the infield wall, giving Moseby another chance to steal second, which he did. This chaos led the announcer to say, "He doesn't know where the throw is; he's going back to first base! Is he going to steal first? He steals first! Now he's going to steal second again! I've never seen it before!" This bizarre play was officially scored as a baserunner advancing on a throwing error by the center fielder, ironically resulting in neither a stolen base awarded nor an error charged to the catcher. In a game on April 19, 2013, Milwaukee Brewers shortstop Jean Segura stole second base in the bottom of the eighth inning. After the next batter, Ryan Braun, walked, Segura broke early for third base and the pitcher, Shawn Camp of the Chicago Cubs, threw ahead of him. As Segura was chased back to second base, Braun advanced to second as well and was tagged out. Segura, thinking he was out, began to return to the home dugout behind first base, but first base coach Garth Iorg directed him to stand at first. Segura had not intentionally run the bases backwards as a deception or mockery, and no fielder tried to tag him out. Later in the inning, he attempted to steal second for the second time, but was thrown out by catcher Welington Castillo. The expression "You can't steal first base" is sometimes used in reference to a player who is fast but not very good at getting on base in the first place.
Former Pittsburgh Pirates and Seattle Mariners manager Lloyd McClendon is jokingly referred to as having "stolen first" in a June 26, 2001 game as the manager of the Pirates: after being ejected for disputing a call at first base, he yanked the base out of the ground and left the field with it, delaying the game. Of the incident, McClendon said "I told him he wasn't using it, so I thought I'd take it." When a groundskeeper came out to replace the bag, the crowd booed him. The independent Atlantic League instituted a new rule for the second half of the 2019 season, allowing batters to become runners on any pitch not "caught in flight" by the catcher, as they can throughout baseball after most uncaught third strikes. On July 13, 2019, outfielder Tony Thomas of the Southern Maryland Blue Crabs became the first player to reach first base under this rule. The press described this as "stealing first base", though it is scored as described above.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Block (programming)** Block (programming): In computer programming, a block or code block or block of code is a lexical structure of source code which is grouped together. Blocks consist of one or more declarations and statements. A programming language that permits the creation of blocks, including blocks nested within other blocks, is called a block-structured programming language. Blocks are fundamental to structured programming, where control structures are formed from blocks. Block (programming): Blocks have two functions: to group statements so that they can be treated as one statement, and to define scopes for names to distinguish them from the same name used elsewhere. In a block-structured programming language, the objects named in outer blocks are visible inside inner blocks, unless they are masked by an object declared with the same name. History: Ideas of block structure were developed in the 1950s during the development of the first autocodes, and were formalized in the Algol 58 and Algol 60 reports. Algol 58 introduced the notion of the "compound statement", which was related solely to control flow. The subsequent Revised Report, which described the syntax and semantics of Algol 60, introduced the notion of a block and block scope, with a block consisting of "a sequence of declarations followed by a sequence of statements and enclosed between begin and end..." in which "[e]very declaration appears in a block in this way and is valid only for that block." Syntax: Blocks use different syntax in different languages. Common conventions include:
- the keywords "begin" and "end" or equivalent, as in the ALGOL family (ALGOL 68 also uses parentheses);
- curly braces "{" and "}", as in C;
- parentheses "(" and ")", as in the MS-DOS batch language;
- indentation, as in Python;
- s-expressions with a syntactic keyword such as prog or let, as in the Lisp family.
In 1968 (with ALGOL 68), and then in Edsger W. Dijkstra's 1974 Guarded Command Language, conditional and iterative code blocks are alternatively terminated with the opening reserved word reversed: e.g. if ~ then ~ elif ~ else ~ fi, case ~ in ~ out ~ esac and for ~ while ~ do ~ od. Limitations: Some languages which support blocks with declarations do not fully support all declarations; for instance many C-derived languages do not permit a function definition within a block (nested functions). And unlike its ancestor Algol, Pascal does not support the use of blocks with their own declarations inside the begin and end of an existing block, only compound statements enabling sequences of statements to be grouped together in if, while, repeat and other control statements. Basic semantics: The semantic meaning of a block is twofold. Firstly, it provides the programmer with a way of creating arbitrarily large and complex structures that can be treated as units. Secondly, it enables the programmer to limit the scope of variables and sometimes other objects that have been declared. In early languages such as Fortran IV and BASIC, there were no statement blocks or control structures; conditionals were implemented using conditional goto statements. The logical structure of the program is not reflected in the language, and analyzing when a given statement is executed can be difficult.
Basic semantics: Blocks allow the programmer to treat a group of statements as a unit, and the default values which had to appear in initialization in this style of programming can, with a block structure, be placed closer to the decision. Use of blocks in such a fragment of Pascal clarifies the programmer's intent, and enables combining the resulting blocks into a nested hierarchy of conditional statements. The structure of the code reflects the programmer's thinking more closely, making it easier to understand and modify. Basic semantics: Such source code can be made even clearer by taking the inner if statement out of the outer one altogether, placing the two blocks one after the other to be executed consecutively. Semantically there is little difference in this case, and the use of block structure, supported by indentation for readability, makes it easy for the programmer to refactor the code. Basic semantics: In primitive languages, variables had broad scope. For instance, an integer variable called IEMPNO might be used in one part of a Fortran subroutine to denote an employee social security number (ssn), but during maintenance work on the same subroutine, a programmer might accidentally use the same variable, IEMPNO, for a different purpose, and this could result in a bug that was difficult to trace. Block structure makes it easier for programmers to control scope at a minute level. Basic semantics: In a corresponding Scheme fragment, empno can be used to identify both the manager and their underlings, each by their respective ssn; because the underling's ssn is declared within an inner block, it does not interact with the variable of the same name that contains the manager's ssn. In practice, considerations of clarity would probably lead the programmer to choose distinct variable names, but they have the choice, and it is more difficult to introduce a bug inadvertently. Hoisting: In some languages, a variable can be declared at function scope even within enclosed blocks. For example, in JavaScript, variables declared with var have function scope.
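The Pascal and Scheme fragments referenced above are not reproduced in this extract. As a substitute, here is a minimal Java sketch of the same scoping idea; the ssn-style names and values are illustrative. Note that Java, unlike Scheme, forbids an inner block from redeclaring a name already in scope, so distinct names are used:

```java
public class BlockScopeDemo {
    public static void main(String[] args) {
        int managerSsn = 111_22_3333;  // visible throughout the outer block

        {
            // Inner block: underlingSsn exists only inside these braces.
            // Java rejects a second declaration of managerSsn here, so the
            // compiler enforces the distinct naming the article recommends.
            int underlingSsn = 444_55_6666;
            System.out.println("inner sees underling: " + underlingSsn);
            System.out.println("inner also sees manager: " + managerSsn);
        }

        // underlingSsn is out of scope here; using it would not compile:
        // System.out.println(underlingSsn);  // error: cannot find symbol
        System.out.println("outer sees manager: " + managerSsn);
    }
}
```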
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trimethylsilyl cyanide** Trimethylsilyl cyanide: Trimethylsilyl cyanide is the chemical compound with the formula (CH3)3SiCN. This volatile liquid consists of a cyanide group (CN) attached to a trimethylsilyl group. The molecule is used in organic synthesis as the equivalent of hydrogen cyanide. It is prepared by the reaction of lithium cyanide and trimethylsilyl chloride: LiCN + (CH3)3SiCl → (CH3)3SiCN + LiCl. Structure: The molecule exhibits the structure of a nitrile-like compound. The compound exists in a rapid equilibrium with a small amount of the isomeric isocyanide (CH3)3SiNC. By contrast, the nearly isostructural tert-butyl nitrile does not readily isomerize to tert-butyl isocyanide. The isocyanide isomer can be stabilized by complexation to metals. Reactions: Trimethylsilyl cyanide hydrolyzes to give hydrogen cyanide and trimethylsilanol: (CH3)3SiCN + H2O → (CH3)3SiOH + HCN. In its principal application, it adds across carbon-oxygen double bonds, for example in an aldehyde, to form a new carbon-carbon bond: RCH=O + (CH3)3SiC≡N → N≡C–CHR–OSi(CH3)3. The product is an O-silylated cyanohydrin. One use of this reagent is to convert pyridine N-oxides into 2-cyanopyridine. This transformation is best done in dichloromethane solution using dimethylcarbamoyl chloride as the activating electrophile. It is possible to use benzoyl chloride, but the yields and regioselectivity of the addition of the cyano group are lower. Acetone cyanohydrin can be used to reversibly generate the cyanide anion. Safety: Trimethylsilyl cyanide behaves equivalently to hydrogen cyanide, a potent poison. The compound can be disposed of by using a mixture of alkali hydroxide and bleach.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rigorous Approach to Industrial Software Engineering** Rigorous Approach to Industrial Software Engineering: RAISE (Rigorous Approach to Industrial Software Engineering) was developed as part of the European ESPRIT II LaCoS project in the 1990s, led by Dines Bjørner. It consists of a set of tools built around a specification language, the RAISE Specification Language (RSL), for software development. It is especially espoused by UNU-IIST in Macau, which runs training courses on site and around the world, especially in developing countries.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anthony Freda** Anthony Freda: Anthony Freda is an American illustrator and painter of commercial art. Freda's paintings are an amalgamation of vintage found objects, including scraps taken from antique rulers, aging books, bits of metal, old barn wood, and forgotten souvenirs, combined with detailed drawings and paintings that may be a mix of handwork with some computer manipulation. His work regularly appears in Communication Arts, American Illustration, and most recently in a book titled "The 200 Best Illustrators Worldwide," published by Luerzer's Archive. Anthony Freda: Freda's work also featured prominently throughout well-known animal rights activist Karen Dawn's 2008 book, "Thanking the Monkey: Rethinking the Way We Treat Animals," published by HarperCollins. Freda's work has been featured in national ad campaigns for companies such as Converse, Mini Cooper and the Rockport Shoe Company. In 2006, Freda served as a judge for the Society of Illustrators' annual competition held in New York City. Additionally, his work has been published in several volumes of the society's annual publication, which showcases the best of American illustration. Anthony Freda: In 2006, The Village Voice commissioned Freda to illustrate a story about people who challenge the official 9/11 narrative; the artwork has since become part of the permanent collection of the National September 11 Memorial & Museum in New York, NY. An interview was conducted by the museum's curators as part of the acceptance process, and the meeting was documented by filmmaker John Massaria. Freda is a freelance contributor to The Nation and Adbusters magazines. Anthony Freda: In 2017, Freda's piece "Don't Tase Me, Bro" was selected to be a part of the international juried competition "Delusional" at Jonathan LeVine Projects in New Jersey. Freda currently teaches illustration as an adjunct professor at the Fashion Institute of Technology in New York, NY, and is a curator and owner of Star Gallery NYC. Though known primarily for his widely published political artwork, Freda is moving away from this genre to focus on teaching, curating and exploring more personal artistic endeavors.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ORC3** ORC3: Origin recognition complex subunit 3 is a protein that in humans is encoded by the ORC3 (ORC3L) gene. Function: The origin recognition complex (ORC) is a highly conserved six-subunit protein complex essential for the initiation of DNA replication in eukaryotic cells. Studies in yeast demonstrated that ORC binds specifically to origins of replication and serves as a platform for the assembly of additional initiation factors such as Cdc6 and Mcm proteins. The protein encoded by this gene is a subunit of the ORC complex. Studies of a similar gene in Drosophila suggested a possible role of this protein in neuronal proliferation and olfactory memory. Alternatively spliced transcript variants encoding distinct isoforms have been reported for this gene. Interactions: ORC3 has been shown to interact with:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Add-on (Mozilla)** Add-on (Mozilla): Add-on is the Mozilla term for software modules that can be added to the Firefox web browser and related applications. Mozilla hosts them on its official add-on website. Browser extensions are the primary type of add-on. In 2017, Mozilla enacted major changes to the application programming interface (API) for extensions in Firefox, replacing the long-standing XUL and XPCOM APIs with the WebExtensions API, which is modeled after Google Chrome's API. Thus add-ons that remain compatible with Firefox are now largely compatible with Chrome as well. As of December 2022, there are close to 30,000 add-ons and over 480,000 themes available for Firefox. Current add-ons: Extensions: Starting with Firefox 57, only the new WebExtensions API is supported. Themes: Early versions of Firefox supported themes that could greatly change the appearance of the browser, but this was scaled back over time. Current themes are limited to changing the background and text color of toolbars. (These lightweight themes were formerly called personas.) Historical add-ons: Legacy extensions: Prior to 2017, Firefox supported extensions developed with different APIs: XUL, XPCOM, and Jetpack. Mozilla now refers to these as legacy extensions. Plug-ins: Plug-ins are no longer supported in Firefox. In the past, they were used to handle media types for which the application did not have built-in capability. They were deprecated due to security concerns and improvements in Web APIs. The last one that was officially supported was Adobe Flash Player, which Adobe discontinued in 2020. Restrictions: Mozilla had no mechanism to restrict the privileges of legacy Firefox extensions. This meant that a legacy extension could read or modify the data used by another extension or any file accessible to the user running Mozilla applications. The current WebExtensions API, however, imposes many restrictions. Starting with Firefox 40, Mozilla began to roll out a requirement for extension signing, which is now required in all official Firefox releases. Website: The Mozilla add-ons website is the official repository for Firefox add-ons. In contrast to mozdev.org, which provides free hosting for Mozilla-related projects, the add-ons site is tailored for users. By default, Firefox automatically checks the site for updates to installed add-ons. In January 2008, Mozilla announced that the site had accumulated a total of 600 million add-on downloads and that over 100 million installed add-ons automatically check the site for updates every day. By July 2012, the total had increased to 3 billion downloads from the site.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ISO 6438** ISO 6438: ISO 6438:1983, Documentation — African coded character set for bibliographic information interchange, is an ISO standard for an 8-bit character encoding for African languages. Developed separately from the African reference alphabet but apparently based on the same data sets, it has had little use; its letterforms are retained in Unicode. Character set: Prior to Unicode 7.0, code point F5 mapped to U+03C7 χ GREEK SMALL LETTER CHI; prior to Unicode 8.0, E5 mapped to U+03A7 Χ GREEK CAPITAL LETTER CHI.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Folliculostellate cell** Folliculostellate cell: A folliculostellate (FS) cell is a type of non-endocrine cell found in the anterior lobe of the pituitary gland. Histology and ultrastructure: Rinehart and Farquhar first discovered FS cells through electron microscopy of the anterior pituitary gland. Vila-Porcile named these non-endocrine cells "folliculo-stellate" cells in 1972 due to their stellate (star) shape and their location lining the lumen of small follicles in the anterior pituitary. Unlike the majority of cells in the anterior pituitary, they are non-endocrine and agranular. They have long cytoplasmic processes which interlock to form a mesh, within which the endocrine cells reside. They typically have a large number of microvilli on their apical side, and contain lysosomes, suggesting phagocytotic activity. Gap junctions can be seen between the FS cells and the adjacent endocrine cells when viewed under an electron microscope. Cell properties: Studies using pituitary slices have shown that FS cells are arranged into 3D networks which can communicate intercellularly through gap junction-mediated calcium wave propagation. Experiments using the two main FS cell lines (TtT/GF and Tpit/F1) have greatly improved our knowledge of the functional relevance of these cells; FS cells have been shown to play a part in three areas of pituitary function: autocrine/paracrine control of anterior pituitary cell function through the use of cytokines and growth factors, intrapituitary communication among various cell types, and modulation of inflammatory response feedback. FS cells have similar properties to dendritic cells and macrophages, implying a phagocytic role. A central role for FS cells in moderating the neuro-immune/endocrine regulation of inflammation is supported by data showing that they express C3a and C5a receptors (key factors of the innate immune system), secrete IL-6 and MIF (inflammatory cytokines), and control the release of these cytokines via anti-inflammatory molecules. Experiments have been carried out to assess the protein markers they express, in order to determine their cell type and thus their exact function in the pituitary. The first marker protein discovered in FS cells was S-100b, a calcium-binding protein expressed by glial cells. Some populations of FS cells have also been found to express other cell markers, including GFAP (glial fibrillary acidic protein), cytokeratin, vimentin and fibronectin. S-100 protein and GFAP expression seem to be strongest in early, newly formed FS cells, and thus could be important in early FS cell development. GFAP expression implies these cells could be of a neuroectodermal origin, whereas keratin-positive FS cells express epithelial-like characteristics. The study of fibronectin expression in these cells suggests that FS cells may help regulate pituitary function by interacting with hormone-secreting cells through fibronectin. Furthermore, as FS cells express vimentin, an intermediate filament protein marker, this supports the theory that FS cells may be derived from glial neuroectodermal cells. Due to the different array of markers expressed in these cells, it is difficult to specify their exact cell type and function.
Newer findings propose that pituitary FS cells comprise groups of cells with disparate immunophenotypes rather than a homogeneous population; however, it is still not clear whether these groups of cells are actually different or are simply cells at varying stages in their development. Multiple FS cell lines have been developed to try to observe the location and function of these cells. mRNA levels in FS cells have been investigated via laser capture microdissection and RT-PCR, so progress is being made in understanding the expression and function of these non-endocrine cells of the pituitary. As they have multiple markers, it is plausible that these cells are a hybrid of several different cell types. Gap junctions between endocrine cells and FS cells: Although FS cells do not secrete hormones, they influence the functionality of hormone-secreting endocrine cells via gap junctions. FS cells form homologous gap junctions with their adjacent counterparts, but also heterologous gap junctions with hormone-secreting endocrine cells. The gap junctions that exist between adjacent FS cells are used to propagate calcium-mediated signals throughout the pituitary to coordinate the function of excitable endocrine cells distributed throughout the gland. The endocrine-FS cell gap junctions, alongside the FS-FS gap junctions, form a cell network that allows information about the physiological environment to be transferred around the pituitary to coordinate its secretory function. Studies in various small mammals have demonstrated that the number of gap junctions is influenced by several factors, such as puberty, the menstrual cycle and lactation. In the mink, the presence of connexin-43, a protein that is functional in gap junctions, correlates with prolactin secretory demand depending on the breeding season. When prolactin secretion is highest in the spring, there is the highest abundance of connexin-43 gap junctions; prolactin secretion and gap junctions are lowest in the winter. This demonstrates that the FS-cell network has a role in influencing prolactin secretion, and is consistent with studies in rats which found that gap junctions increased during lactation to facilitate prolactin demand. Additional studies in rats found that the number of gap junctions increases with anterior pituitary maturation; this increase was prevented by castration in male rats, which would prevent sexual maturation, and was restored to normal levels by hormone treatment. Similarly, gap junctions increase during the pro-oestrus and oestrus phases of the oestrous cycle, and are decreased by fifty percent during di-oestrus. Evidently, the number of gap junctions is influenced by steroid hormone secretion from the gonads, and FS cells contribute to the pituitary-gonadal feedback loop. Function as sustentacular cells: Folliculostellate (FS) cells are asserted to have a sustentacular (support) function due to their positioning alongside the endocrine (hormone-secreting) cells of the pituitary gland, implying either mechanical or chemical support: forming structural support around the endocrine cells, or releasing growth factors and cytokines (cell-signalling molecules). Structural support is exemplified by the fact that FS cells are known to produce a metalloprotease inhibitor, which may protect the basement membrane and maintain three-dimensional structural support, as well as by their surrounding endocrine cells, forming close contacts to provide growth factors and cytokines within the pituitary gland.
Role as signalling mediators for pituitary endocrine cells: Nitric oxide: FS cells are thought to have a role in relaying signals to the hormone-secreting endocrine cells of the pituitary gland. Nitric oxide (NO) is reported to be a key modulator of endocrine cell function, and it has been shown that FS cells (and some endocrine cells) contain neuronal NO synthase, a key enzyme responsible for the production of NO from L-arginine. It is thought that FS cells modulate NO production in adjacent endocrine cells via paracrine mechanisms. Role as signalling mediators for pituitary endocrine cells: Interferon-gamma: Interferon-gamma is a cytokine that acts to inhibit the release of various hormones from the anterior pituitary, and FS cells are thought to be vital in mediating this process. This facilitating role of FS cells was identified when studying the anterior pituitary glands of rats, as anterior pituitary samples with few FS cells failed to exhibit the usual inhibitory effects of interferon-gamma. Role as signalling mediators for pituitary endocrine cells: Glucocorticoids: Glucocorticoid-induced suppression of the hypothalamic-pituitary-adrenal (HPA) axis has 2 components. Firstly, within 15 minutes of increased glucocorticoid exposure in the anterior pituitary, there is a reduction in the release of preformed adrenocorticotrophic hormone (ACTH). Secondly, glucocorticoids act at a genomic level by suppressing the translation of ACTH and CRH; this process takes 2 hours after exposure to increased glucocorticoids. The protein annexin A1 (ANXA1), found in high quantities in the anterior pituitary gland, is located specifically in the folliculostellate cell. In addition to the anterior pituitary gland, it can also be found in the non-endocrine cells of the hypothalamus. Glucocorticoids act on the folliculostellate cells to increase synthesis of ANXA1 and then stimulate its translocation to the cell surface of the FS cell. This translocation is dependent on protein kinase C. ANXA1 subsequently acts on the corticotrophs of the anterior pituitary, which express ANXA1 G protein-coupled receptors, via a paracrine mechanism. The downstream signalling pathway which culminates in reduced ACTH synthesis and/or release remains largely unexplored and as a consequence remains poorly understood. The glucocorticoid/folliculostellate cell relationship also has a role in the production of glutamine, the precursor of the excitatory neurotransmitter glutamate. Cells in the rat anterior pituitary gland which contain large quantities of the enzyme glutamine synthetase also express the S100 protein, the marker for folliculostellate cells. After exogenous glucocorticoid administration, the number of these cells increases and the activity of glutamine synthetase also increases. This enzyme is necessary because it allows the CNS to produce glutamine internally, which is essential since the quantity of glutamine transported from the peripheral blood to the CNS cannot satisfy the CNS's demand for glutamine. Role as signalling mediators for pituitary endocrine cells: Interleukin-6: The production of the cytokine interleukin-6 (IL-6) could also be considered a supportive function, as IL-6 is a mediator in communication between the endocrine and immune systems. IL-6 production by FS cells induces hormone production from endocrine cells, which can then activate the immune system. Potential function as stem cells: Evidence from numerous studies suggests that FS cells may act as pituitary stem cells (SC).
Indirect evidence from goat as well as rat cells has led to suggestions that the S100β+ cells may act as intermediate cells during the formation of adult pituitary cells. Nonetheless, more research needs to be done to clarify the potential stem cell properties of FS cells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EdtFTPj** EdtFTPj: edtFTPj is an open-source FTP client library for use in Java applications, licensed under the LGPL. It was first released in 2000, and was originally known as the Java FTP Client Library. It is supplied as a JAR file and can be used in any Java application that requires FTP functionality. edtFTPj provides FTP capabilities for popular software packages such as Jalbum and Cyberduck. edtFTPj is also known as edtFTPj/Free. There is also a commercial version known as edtFTPj/PRO, which includes the following additional features: FTPS (explicit and implicit modes), SFTP and SCP (secure copy), multiple protocols supported in one component, simultaneous transfers via FTP connection pools, directory transfers, and directory synchronization.
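A minimal upload sketch, assuming the FileTransferClient class of recent edtFTPj releases; the host, credentials, and file names are placeholders, and error handling is omitted for brevity:

```java
import com.enterprisedt.net.ftp.FileTransferClient;

public class UploadDemo {
    public static void main(String[] args) throws Exception {
        // All connection details below are placeholders.
        FileTransferClient ftp = new FileTransferClient();
        ftp.setRemoteHost("ftp.example.com");
        ftp.setUserName("user");
        ftp.setPassword("secret");

        ftp.connect();                               // plain FTP; FTPS/SFTP need edtFTPj/PRO
        ftp.uploadFile("report.pdf", "report.pdf");  // local path, remote file name
        ftp.disconnect();
    }
}
```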
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Transesophageal echocardiogram** Transesophageal echocardiogram: A transesophageal echocardiogram, or TEE (TOE in the United Kingdom and other countries such as Australia and New Zealand, reflecting the British English spelling transoesophageal), is an alternative way to perform an echocardiogram. A specialized probe containing an ultrasound transducer at its tip is passed into the patient's esophagus. This allows image and Doppler evaluation, which can be recorded. It is commonly used during cardiac surgery and is an excellent modality for assessing the aorta, although there are some limitations. It has several advantages and some disadvantages compared with a transthoracic echocardiogram (TTE). Details: TEE is a semi-invasive procedure in that the probe must enter the body, but no surgical (i.e., invasive) cutting is required. Before inserting the probe, mild to moderate sedation is induced in the patient to ease the discomfort and to decrease the gag reflex. Usually a local anesthetic spray (e.g., lidocaine, benzocaine, xylocaine) is used for the back of the throat, or a jelly/lubricant anesthetic is used for the esophagus. Sedation and anesthesia are required to make the procedure tolerable and safer, as biting the probe, coughing, vomiting, and patient movement would drastically reduce the value of the procedure. Mild or moderate sedation can be induced with medications such as midazolam (a benzodiazepine with sedating, amnesiac qualities), fentanyl (an opioid), or propofol (a sedative/general anesthetic, depending on dosage). Children are anesthetized. Adults are sometimes anesthetized as well if moderate sedation is unsuccessful. Because the procedure is invasive, it is not performed by sonographers, unlike a transthoracic echo. Once adequate sedation and anesthesia are achieved, the probe is passed through the mouth and into the esophagus. From here, the protocol used for the procedure is highly variable. Details: As the study could be terminated at any moment (e.g., respiratory compromise, hypotension, intolerance to the probe), the structures of particular interest are typically visualized first. For example, if the TEE is ordered to look for mitral regurgitation, then the mitral valve may be fully inspected first. At the completion of the study, the probe is removed and the patient is monitored for recovery from sedation. Details: Advantages The advantage of TEE over TTE is usually clearer images, especially of structures that are difficult to view transthoracically (through the chest wall). This difficulty with TTE is exemplified by obesity and COPD, as both can drastically limit the windows available and the quality of the images obtained through them. Because the esophagus lies directly behind the heart, the ultrasound beam in TEE travels only a short distance through tissue. This reduces the attenuation (weakening) of the ultrasound signal, generating a stronger return signal and ultimately enhancing image and Doppler quality. Comparatively, transthoracic ultrasound must first traverse skin, fat, ribs, and lungs before reflecting off the heart and back to the probe before an image can be created. All these structures, along with the increased distance the beam must travel, weaken the ultrasound signal, thus degrading the image and Doppler quality. In adults, several structures can be evaluated and imaged better with TEE, including the aorta, pulmonary artery, valves of the heart, both atria, atrial septum, left atrial appendage, and coronary arteries. 
Details: TEE has a very high sensitivity for locating a blood clot inside the left atrium. TEE is also frequently used concurrently with cardiac surgery to provide immediate visualization, inspection, and monitoring of the patient throughout the procedure. Its intraoperative utility includes real-time hemodynamic monitoring by the cardiac anesthesiologist, evaluation of relevant cardiac pathologies before and after surgical repair, and immediate assessment of the success of surgical interventions after cardiopulmonary bypass. TEE can also evaluate for unintended complications from surgery, for example unintended injury to cardiac valves, the aorta, or other structures during the procedure. (Ref: https://www.asecho.org/wp-content/uploads/2014/05/2013_Performing-Comprehensive-TEE.pdf) Disadvantages TEE has several disadvantages, although they should be weighed against its significant benefits. The patient must follow the ASA NPO guidelines (usually nothing to eat for eight hours and nothing to drink for two hours prior to the procedure). Rather than one sonographer, a TEE requires a team of medical personnel: at least one nurse to monitor the patient and administer sedation, and a physician to perform the procedure (a third physician or sonographer may operate the ultrasound machine). It takes longer to perform a TEE than a TTE. It may be uncomfortable for the patient, who may in extreme cases require general anesthesia for a TEE to be performed safely. Because it is an invasive procedure requiring sedation, it is more technically difficult to perform and requires experience to do well while maintaining safety. Details: TEE is limited by the available anatomy. For example, if the patient has esophageal varices, an esophageal stricture, Barrett's esophagus, or other esophageal or stomach problems, this can increase the risk of a TEE significantly. Performing an esophagogastroduodenoscopy (EGD) beforehand may be necessary to visualize the anatomy for safety, which exposes the patient to a second procedure. The anatomy may result in prohibitive risk. With transthoracic echo, numerous measurements are taken to aid in the diagnosis and grading of diseases. These normal ranges are not as well defined for TEE, and so there are fewer accepted standards (e.g., for left atrial enlargement). Some risks are associated with the procedure, such as esophageal perforation (around 1 in 10,000) and adverse reactions to the medication. Details: Specialty medicine professional organizations recommend against using transesophageal echocardiography to detect cardiac sources of embolization once a patient's health care provider has identified a source of embolization and when more information would not change the patient's management. Such organizations further recommend that doctors and patients should avoid seeking transesophageal echocardiography only for the sake of protocol-driven testing and should agree to the test only if it is right for the individual patient. Clinical uses: In addition to use by cardiologists in outpatient and inpatient settings, TEE can be performed by a cardiac anesthesiologist to evaluate, diagnose, and treat patients in the perioperative period. Although most commonly used during open heart procedures, TEE can be used in the setting of any operation if the patient's status warrants it. Clinical uses: TEE is very useful during many cardiac surgical procedures (e.g., mitral valve repair); indeed, it is an essential monitoring tool during such procedures. 
It helps to detect and quantify the disease preoperatively as well as to assess the results of surgery immediately after the procedure. If the repair is found to be inadequate, showing significant residual regurgitation, the surgeon can decide whether to go back on cardiopulmonary bypass to try to correct the defect. Clinical uses: Aortic dissections are another important condition where TEE is very helpful. TEE can also help the surgeon during the insertion of a catheter for retrograde cardioplegia. Probes: TEE probes are similar in style to those used for esophagogastroduodenoscopy, except that the probe contains an ultrasound crystal rather than a visual camera. The ultrasound crystal images radially to the probe rather than axially (along the probe's length), as the heart is not in line with the esophagus but rather adjacent (anterior) to it. Angle Most TEE probes contain a two-dimensional ultrasound crystal. This permits rotation of the 2-D echo plane without physical movement of the probe. This is often referred to as the "angle" and varies between 0° and 180° (a mirrored image of the 0° view). For any given position of the probe in the body, different angles permit viewing structures more optimally. The angle can be adjusted with buttons or a dial, and this varies with the specific probe and ultrasound machine. Movement The probes often have one or two degrees of freedom: flexion or retroflexion points the crystal superiorly or inferiorly, respectively, while left and right flexion tilts the probe left and right. These two degrees are typically adjusted using dials on the handle of the probe. A third degree is axial rotation of the probe (clockwise or counter-clockwise) and is present regardless of the other two degrees of freedom. A fourth degree is the translation of the probe along its axis, permitting passage through the mouth, into the esophagus, and into the stomach. The combination of these four degrees of freedom permits 2-D, color, and Doppler echo of practically every structure in the heart. Positions: Transthoracic echo is far more commonly used than TEE, but it is limited by the windows available through the chest wall to visualize the heart. TEE has no such discrete locations and can visualize the heart anywhere along the esophagus down to the stomach. With that said, there are commonly accepted positions along this path that are used when performing a standard TEE. Midesophageal The midesophageal view is positioned posterior to the left atrium, and at 0° this provides a long-axis four-chamber view. At 0°, the long-axis four-chamber view can be obtained with slight retroflexion of the probe. However, slight rotation and insertion may be needed to better visualize the right heart and tricuspid valve. At 45°, the short-axis view of the aortic valve can be obtained. At this angle, the right atrium, tricuspid valve, right ventricle, and pulmonary valve can also be seen in a single short-axis view. At 90°, the probe can be rotated clockwise to obtain the "bicaval view", in which the right atrium and both the inferior and superior venae cavae can be viewed. At 135°, the long-axis view of the aortic valve can be obtained. The left atrial appendage, with proper probe positioning, can be visualized at all angles and is often examined at 0°, 45°, 90°, and 135° to adequately rule out a thrombus. 
Transgastric Pushing the TEE probe past the gastroesophageal junction into the stomach and flexing the probe (pointing it superiorly) yields a short-axis view of the heart. At 0°, the short axis of the left ventricle can be obtained to see wall motion in the basal, mid, and distal sections. If the probe is rotated clockwise, the right heart and tricuspid valve can be visualized. The transgastric position is the best one for quantifying the aortic valve with pulsed- and continuous-wave Doppler, as the beam is most nearly coaxial with the valve in this view. Positions: Upper esophagus Pulling the TEE probe back higher into the esophagus reveals the aortic arch. Typically, in the midesophageal view the probe is rotated until the descending aorta is visualized. Pulling back the probe then permits visualization of the aorta and any atheromatous plaques within it. Short-axis visualization at 0° allows for measurements of the descending aorta's size. Pulling back further will eventually reach the aortic arch, and clockwise rotation will bring the arch into view. Continuous visualization of the aorta up to the arch level can reveal coarctation of the aorta. History: The transesophageal echocardiogram was invented by Dr. Leon Frazin in 1974 while working at the Loyola University Stritch School of Medicine, Maywood, and the Veterans Administration Hospital, Hines, Illinois. His early findings were published in 1976 in Circulation. Diseases: While TEE can be used to answer many questions that a transthoracic echo can answer, TEE is used for some diseases in particular. Diseases: Infective endocarditis, to get better-quality images of the affected valve and better plan surgery, or the need for surgery Aortic root abscess, which generally is not visible on transthoracic echo Eccentric mitral regurgitation, which can be better appreciated on TEE due to the Coandă effect Left atrial appendage thrombus: evaluation, follow-up, and insertion of a left atrial appendage occlusion device Evaluation for patent foramen ovale and atrial septal defect after a stroke, and insertion of a PFO/ASD plug Monitoring during a procedure to cross the interatrial septum safely without passing the needle through an undesired structure During cardiothoracic surgery for numerous procedures, including immediately before and after replacement of a valve
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neubot** Neubot: Neubot (the network neutrality bot) is a free software Internet bot, developed and maintained by the Nexa Center for Internet and Society, that gathers network performance data useful for investigating network neutrality. Description: Once installed on the user's computer, it runs in the background and periodically performs active transmission tests with servers hosted by the distributed Measurement Lab server platform (and, in the future, with other instances of the software itself). These transmission tests measure end-to-end network performance emulating different protocols (currently HTTP and BitTorrent) as well as transmitting and receiving "raw" data over TCP. Performance is measured at the application level as well as at the TCP level (using Web100). Description: Measurement results are saved both locally (where a localhost-only web user interface allows users to browse them) and on Measurement Lab servers. They are collected for research purposes and automatically published on the web under Creative Commons Zero (public domain), allowing anyone to re-use them freely for the same purpose.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Master printmaker** Master printmaker: Master printmakers or master printers are specialized technicians who hand-print editions of works of an artist in printmaking. Master printmakers often own and/or operate their own printmaking studio or print shop. Business activities of a master print shop may include publishing and printing services, educational workshops or classes, mentorship of artists, and artist residencies. The role of the specialist printer mostly emerged from the 18th century onwards. Previously, artists in printmaking mostly printed their own prints, as for example Rembrandt did; he had a printing press for etchings and engravings in his house. For woodcuts, the blockcutter had long been a specialist artisan, sometimes famous. Printing of lithographs from the 19th century on has normally been a specialist process. Training for master printmakers varies by technique, geography, and culture. Master printmakers are almost always trained by other master printmakers. The Tamarind Institute, located in New Mexico, is one formal institution mandated to train master lithographers. In 20th-century Britain there was a federation of master printers, known from the 1930s as the British Federation of Master Printers (BFMP) and renamed the British Printing Industries Federation in the 1970s. Notable people: Contemporary, mostly Americans. Historical master printmakers, mostly American.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eimer's organ** Eimer's organ: Eimer's organs are sensory organs in which the epidermis is modified to form bulbous papillae. First described by Theodor Eimer in the European mole in 1871, these organs are present in many moles, and are particularly common in the star-nosed mole, which bears 25,000 of them on its unique tentacled snout. Each organ is formed from a stack of epidermal cells, which is innervated by nerve processes from myelinated fibres in the dermis; these processes form terminal swellings just below the outer keratinized layer of the epidermis. The organs contain a Merkel cell-neurite complex in the epidermis and a lamellated corpuscle in the dermal connective tissue. Discovery: Theodor Eimer described the discrete microscopic organ of touch that densely populates the tip of the nose of the European mole Talpa europaea. The organ is named in his honour. In his original publication in 1871, he examined the structure of the nose, the distribution of the touch organs on the nasal skin, and the relationship of their density to the nose's use for palpation (examining or exploring by touch). Eimer thereby established a connection between structure and function. Discovery: Eimer recognized the importance of the mole's nose to its behaviour. He stated in 1871: "The mole's snout must be the seat of an extraordinarily well developed sense of touch because it replaces almost entirely the animal's sense of face, constituting its only guide on its paths underground." He estimated that the nose of the European mole was covered with more than 5,000 Eimer's organs, which were invested with 105,000 nerve fibres. He took the abundance of sensory innervation (the supply of nerve fibres) to affirm his contention that touch must represent the mole's dominant facial sense. Eimer asserted that his interpretation was consistent with the common knowledge of his time. In his publication he noted that the extreme density of highly sensitive nerve fibres is the reason a light blow to the snout can kill the mole instantly. Roughly 130 years after Eimer's discovery, Catania and colleagues recorded striking behavioural evidence in favour of his conclusions in 2004, using a high-speed camera. With the help of their Eimer's organs, moles may be perfectly poised to detect seismic wave vibrations. Structure: The organ consists of a minute skin papilla 0.1–0.2 mm in diameter. At the papilla's core, a geometric constellation of nerve fibres with free endings is embedded symmetrically in a column of epithelial cells. Eimer saw two to three single nerve fibres rising straight in the middle of the column and ending in the fifth layer under the stratum corneum that forms the hard top of the epidermis. The fibres extend short protrusions perpendicularly into each epithelial layer they traverse, where the protrusions end in 'buttons'. They are ringed by a circle of roughly 19 evenly spaced nerve fibres, known as satellite fibres, whose protrusions point inwards. In addition, Eimer distinguished a separate set of nerve fibres with free nerve endings. In contrast to the fibres in the papilla's core, these travel obliquely toward the surface at the papilla's perimeter. Structure: With improved histological techniques, a second touch receptor type, the Merkel cell-neurite complex, was found in the stratum germinativum at the bottom of the epidermis, and a third, the lamellated corpuscle of Vater and Pacini, was discovered in the stratum papillare of the dermis underneath the Merkel cells, as published by Halata in 1975. 
Function: Today it is still not understood precisely how these receptors convert touch into the electrical signals that the nerve fibres transmit to the brain. Of particular interest are the properties of touch (e.g., frequency and force) to which the receptors respond, and how their responsiveness changes with prolonged stimulation. The receptors can be functionally distinguished on these grounds: the nerve fibres with free nerve endings; the nerve fibres which end on Merkel cells, which adapt slowly to sustained touch; and the nerve fibres which end in the lamellated corpuscles, which are considered rapidly adapting. Marasco et al. attribute different functions to Eimer's two sets of free-ending nerve fibres in the star-nosed mole and the coast mole Scapanus orarius. The authors published micrographs of the organ and its innervation, depicting Eimer's free-ending fibres as well as the Merkel cell-neurite complexes and the Vater-Pacini corpuscles. Using a histochemical marker for a protein known to be involved in the processing of pain, they were able to label the nerve fibres at the perimeter of the papilla, suggesting that these are nociceptive, i.e. they respond to pain. By contrast, the fibres in the papilla's core did not stain for the protein, suggesting that they are mechanoreceptive. These nerve fibres as well as the Merkel cell-neurite complexes are known to respond to local touches with great sensitivity, whereas the Vater-Pacini corpuscles are highly tuned to the frequencies of dispersed vibrations. Eimer's organ therefore forms a receptor complex, integrating pain receptors as well as three fundamentally different types of touch receptors which preferentially respond to either skin indentations or vibrations. The follicles of whiskers, also known as vibrissae or sinus hairs, and the push rods in monotremes, as published by Proske et al., represent the only other known discrete structures in the skin that combine three mechanoreceptor types. Function: The Eimer's organs on the nose may be the mole's main tool for capturing a refined picture of its underground habitat. Catania and Kaas have shown that the nose of the star-nosed mole is mapped in multiple topographic representations on an extraordinarily large swath of the cerebral cortex that processes touch. Discrete morphological modules of nerve cells, clearly discernible in histologically stained sections, represent each ray in the same order as the rays surround the nose. This topographic morphological representation of the sensory periphery is similar to the representation of the facial whiskers by cytoarchitectonic modules called barrels in the rodent cerebral cortex. Function: To date, two complete cortical maps of the nose with its rays have been found in the brain of the star-nosed mole. There may be more. The nose's disproportionate representation in the cerebral cortex is suggestive of a fovea for nose touch in the mole's somatic sensory system, as published by Catania. Sources: Peter Melzer: 'The Beautiful Eimer's Organ,' 1 May 2010. K C Catania: 'Magnified cortex in star-nosed moles.' Nature 375, (1995): 453–454. K C Catania, F E Remple: 'Tactile foveation in the star-nosed mole.' Brain Behav Evol 63 (2004): 1–12. K C Catania, J H Kaas: 'Organization of the somatosensory cortex of the star-nosed mole.' J Comp Neurol 351 (1995): 549–567. T Eimer: 'Die Schnauze des Maulwurfs als Tastwerkzeug.' Arch Micr Anat 7, (1873): 181–201. Z Halata: 'The mechanoreceptors of the mammalian skin ultrastructure and morphological classification.' 
Adv Anat Embryol Cell Biol 50, (1975): 3–77. P D Marasco, P R Tsuruda, D M Bautista, D Julius, K C Catania: 'Neuroanatomical evidence for segregation of nerve fibers conveying light touch and pain sensation in Eimer's organ of the mole.' Proc Natl Acad Sci USA 103, (2006): 9339–9344. U Proske, J E Gregory, A Iggo: 'Sensory receptors in monotremes.' Philos Trans R Soc Lond B Biol Sci 353, (1998): 1187–1198.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**McPlant** McPlant: The McPlant is a vegetarian (and in some regions vegan) burger sold by the fast-food chain McDonald's in several European countries. In 2021, McDonald's partnered with Beyond Meat, a Los Angeles–based producer of plant-based meat substitutes, to create the McPlant platform. It features a plant-based meat-alternative burger patty made from plant ingredients such as potatoes, peas, and rice. The McPlant was launched in the United Kingdom in January 2022, after tests in October 2021. It is also available in Ireland. In both the United Kingdom and Ireland, the burger is vegan due to the use of vegan sandwich sauce and a vegan cheese alternative. The McPlant is also sold in a non-vegan variant (with cheese and egg-based mayonnaise) in Austria, Germany, and Portugal, as well as in the Netherlands with cheese and a vegan sandwich sauce. When the McPlant was launched in Germany in February 2023, it replaced the Fresh Vegan TS burger, leading to some criticism from customers, since the German McPlant is not vegan and, being prepared on the same grill as meat products, is not vegetarian either. McDonald's Germany targets flexitarians. McPlant: In January 2023, McDonald's launched the Double McPlant, with two patties, in the United Kingdom and Ireland. In Austria, McDonald's also sells the McPlant Steakhouse, a variant of the burger with steakhouse sauce. In Germany, it also sells McPlant Nuggets made from wheat and pea protein, which likewise are neither vegan nor vegetarian, since they are prepared in the same fryer as chicken nuggets. In several other countries, the McPlant was tested but not added to the permanent menu. The first tests occurred in Sweden and Denmark between January and April 2021. In the United States, the product was initially tested in November 2021, with expanded tests in California and Texas from February 2022. The trial run of the McPlant in the United States was cancelled in August 2022, reportedly due to low sales. From July until November 2022, the McPlant was served in Victoria, Australia, as a limited-run item.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Native American weaponry** Native American weaponry: Native American weaponry was used by Native American warriors to hunt and to do battle with other Native American tribes and European colonizers. Weaponry in what is now the United States and Canada: Weaponry for Native American groups residing in what is now the United States and Canada can be grouped into five categories: striking weapons, cutting weapons, piercing weapons, defensive weapons, and symbolic weapons. Striking weapons Native Americans used many variations of striking weapons. These weapons were mainly used for melee combat with other tribes, though in some cases they were thrown for long-range attacks. Weaponry in what is now the United States and Canada: Stone clubs were made from a stone attached to a wooden handle. There were also variations in which tribes carved the entire club out of a solid piece of stone. The stone types most commonly used were chert and flint. There are indications that most of these solid stone clubs were used for ceremonial purposes rather than in actual battle. Weaponry in what is now the United States and Canada: Wooden clubs were commonly used by the woodland tribes. The clubs were carved from a solid piece of hardwood, such as maple or oak. The earlier forms of wooden clubs were carved in the form of a ball at the end of a handle, but later forms were sometimes sharpened, resembling a wooden sword. Some forms had a sharp stone shard driven into the end of the club, almost like an axe. Weaponry in what is now the United States and Canada: The gunstock war club was mostly made from wood but had a metal blade attached to the end of the club, like a spear point. The club was shaped like the stock of an 18th-century musket, a design directly influenced by the firearms that the European settlers used. Two popular theories for the shape are that the Native Americans were impressed with how well the settlers used the ends of their firearms as striking weapons, or that they wanted to intimidate other tribes by giving the impression that they had firearms of their own. The war hatchet is very similar in design to a battle axe and was influenced by the axes that the European settlers used. The hatchet consisted of a sharpened blade, made from iron or stone, attached to the end of a handle. Weaponry in what is now the United States and Canada: The pipe tomahawk was a type of war hatchet that was also a smoking pipe. Tomahawks were used for close combat like most striking weapons but were also popular throwing weapons. The sharp edge was also used for skinning animals. With time, the pipe tomahawk became more ceremonial and was used more as a pipe than as a weapon. Cutting weapons Cutting weapons were used by the Native Americans for combat as well as hunting. Tribes in the present-day United States and Canada preferred shorter blades and did not use long cutting weapons like the swords that the Europeans used at the time. Weaponry in what is now the United States and Canada: Knives were used as tools for hunting and other chores, like skinning animals. Knives consisted of a blade made of stone, bone, or deer antler, fastened to a wooden handle. Later, Native American knives were also made from steel or iron, following the European settlers' weapon-making influence. Some tribes had already figured out the use of locally sourced copper and iron from meteorites and could fashion weapons out of these. 
Weaponry in what is now the United States and Canada: Piercing weapons Piercing weapons included both short- and long-range weapons. They were used for hunting and combat. Weaponry in what is now the United States and Canada: Spears were used by the Native Americans to thrust at and strike their enemies or the animals they were hunting. A spear was made of a short stone blade or tip attached to the end of a long wooden handle or shaft. Some variations did not even have a stone tip; instead, the shaft was simply sharpened at one end. Spears could also be thrown as ranged weapons. Weaponry in what is now the United States and Canada: Lances were very similar to spears but were designed specifically for use on horseback. Lances had longer shafts and tips than spears. This gave the user further reach, allowing them to stab an enemy from the top of a horse. Weaponry in what is now the United States and Canada: Atlatls, or spear-throwers, are long-range weapons that were used by Native Americans to throw spears, called darts, with power and accuracy. The atlatl is made from a hollowed-out shaft with a cup at the end that holds a dart in place and propels it forward. The atlatl effectively extends the thrower's arm, allowing for more leverage than throwing by hand, so the dart can be thrown with more velocity. Weaponry in what is now the United States and Canada: Bows and arrows were used by most cultures around the world at some point or another and are at least 8,000 years old. The arrow is created, similar to a spear, from a small blade (the arrow tip) attached to one end of a wooden shaft. Attached to the other end are feathers that help stabilize the arrow's flight. Overall, an arrow is much smaller and lighter than a spear. The bow is made of wood (attempts have been made with bone, but bone has low tensile strength and snaps easily when pressure is applied to the ends; "authentic bows" made of bone are a fairly common scam). The string is made from dried animal intestines that are twisted, strung out, and twisted again; bundled horse hair; nettle fibers; or certain types of sinew, and is attached to each end of the wood. Weaponry in what is now the United States and Canada: Defensive weapons Some Native American tribes carried shields into battle for extra protection. These shields were mostly made from leather stretched across a round wooden frame. War shields had the main purpose of stopping smaller projectiles, such as arrows, and redirecting larger projectiles such as spears. They were mostly carried by men on horseback and were made from buffalo neck leather, often with more than one layer of leather laid over another. Symbolic weapons Many of the weapons that the Native Americans used served a more symbolic purpose. Weaponry in what is now the United States and Canada: Medicine shields look similar to war shields. However, the medicine shield's purpose is to protect its carrier spiritually, rather than to ward off physical attacks. Because these shields do not have to fend off physical attacks, they are built much thinner and lighter than war shields. Medicine shields are often decorated with many symbols that represent the spiritual strength within the carrier. Weaponry in Mesoamerica and South America: Indigenous peoples in Mesoamerica and South America used many weapons similar to those in North America, including spears, bows and arrows, atlatls, clubs, daggers, and shields. 
However, several additional types of weapons were also used in combat. Weaponry in Mesoamerica and South America: Aztec Weaponry Mācuahuitl: A flat wooden staff or club with obsidian blades embedded in the edges. These weapons could be used either to inflict cutting wounds (with the obsidian blades) or to club an opponent unconscious (with the flat side). It has been loosely compared to a European broadsword, although others have argued that it is a weapon distinct from both swords and clubs. Both single-handed and two-handed versions were used. According to Spanish conquistadors, the mācuahuitl was deadly enough to decapitate a man, or even a horse. Weaponry in Mesoamerica and South America: Quauholōlli: A weapon similar to a mace, consisting of a hard ball attached to the end of a wooden stick. Ichcahuīpīlli: Thick, quilted armor consisting of cotton stitched between layers of cloth. The armor was designed to protect the wearer from blows from mācuahuitl or other clubs, as well as arrows and atlatl darts. Inca Weaponry Bolas (Quechua: Liwi): Weights mounted on the ends of interconnected cords, used by the Inca army in battle. Slings (Quechua: Waraka): Slings were a fundamental long-distance weapon in the Inca army. They were typically constructed out of wool. Maces (Quechua: Champi): Weapons consisting of a heavy object mounted on the end of a wooden shaft. The head of the mace was often star-shaped and made of copper or bronze.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Power Nine** Power Nine: In Magic: The Gathering, the Power Nine is a set of nine cards that were printed in the game's early core sets, consisting of Black Lotus, Ancestral Recall, Time Walk, Mox Pearl, Mox Sapphire, Mox Jet, Mox Ruby, Mox Emerald, and Timetwister. The Power Nine are considered to be among the most powerful cards in the game. All nine cards were printed only in the Alpha, Beta, and Unlimited sets in late 1993 and early 1994. They were of the highest rarity in each set they appeared in. A total of 22,800 copies of each card were printed (not counting promotional releases). Currently, all of the Power Nine cards are restricted in the Vintage tournament format and banned in Legacy, the only tournament formats where they would otherwise be legal, and all except Timetwister are banned in the Commander format. Cards: Black Lotus The "Black Lotus" card can be played at zero cost and grants three mana (the game's primary resource) when sacrificed (discarded from play). Thus, the card gives the player an enormous jump in the early stages of a Magic game. Former pro player and Magic writer Zvi Mowshowitz has declared Black Lotus the best card of its type of all time, claiming every deck in the history of the game is better with a Black Lotus in it. It has since been banned from all official tournament formats save for Vintage, but even there it is limited to one copy per deck, compared to the normal allowance of four. The reason this powerful card is a flower is attributed to Richard Garfield liking the idea of a lot of power being contained in a flower, a transient object, in contrast to the more permanent objects like rings or amulets that are often depicted as sources of power in other fantasy settings. Black Lotus is usually considered by collectors to be the most valuable non-promotional Magic card ever printed. Its Alpha and Beta versions in particular are considered extremely valuable, due to the more limited print runs and black borders of those sets. The Alpha version of Black Lotus is the rarest and most sought-after, with an estimated 1,100 ever printed, followed by the Beta version, with 3,300 ever printed. Although Black Lotus was highly sought after early on, it took a while for it to become the consensus most valuable card in the game. The first Scrye price guide, from June 1994, listed the Alpha Shivan Dragon as the most valuable card in the game at a median $22 to Black Lotus's $15. In January 2021 a "gem mint" Alpha version of the Black Lotus in a case signed by the artist was sold for US$511,100 (equivalent to $551,960 in 2022) in an eBay auction. In March 2023, a graded 10 Alpha copy sold for $540,000. Cards: Moxes The five original Mox cards are Mox Emerald, Mox Jet, Mox Pearl, Mox Ruby, and Mox Sapphire. They are colloquially known as "Moxen" or "Moxes". They are similar to the five basic lands (the cards that provide the primary resource to play most cards) in that they cost nothing to play and can add one mana of a specific color to their owner's resource pool. Unlike lands, however, more than one can be played per turn. As with Black Lotus, this can lead to extremely powerful plays much earlier than normal. All five Mox cards were illustrated by Dan Frazier. In each artwork, a different piece of jewelry is depicted. The word Mox is derived from moxie, slang for courage, or as Richard Garfield interpreted it, energy. 
However, not all of the people involved with the creation of Magic may have known that fact: when Frazier asked art director Jesper Myrfors what a Mox was, he replied "Oh, we don’t know!" Ancestral Recall Ancestral Recall allows the player to either draw three cards or force the opponent to draw three cards, at an extremely low cost. It originated as part of a set of five cards known as "Boons", one of each color, which gave three of something (e.g. mana, life, damage) for the cost of one mana. Ancestral Recall is the only rare Boon and the only one not to have been reprinted since the Unlimited set. Cards: Time Walk Time Walk allows a player to take an extra turn for a low cost. In a game that involves a constant build-up of resources over time, a full turn's additional development turned out to be far more powerful than Magic's early designers had imagined. Several cards that grant additional turns have been printed since Time Walk, but always at a much greater cost. Cards: In Time Walk's early development version, it originally had the text "Target player loses next turn." Richard Garfield tells an anecdote about a playtester telling him that he had a card in his deck that would guarantee he would win the game on the next turn. Garfield could not figure out which card this could be, until the playtester showed him a Time Walk, and pointed out that the phrasing on this card was ambiguous, and it could also be interpreted as saying that another player was forced to lose the game. The wording was changed prior to the release of the game. Cards: Timetwister While the other Power Nine cards are simple in concept, Timetwister is more complex. It forces each player to shuffle their hand, graveyard, and library together and then draw a new hand of seven cards. Because it affects all players, it may not be apparent at first why Timetwister is a powerful card. Its power lies mostly in situations where the player playing it has fewer cards in his or her hand than the opponent, and has established a powerful board position—Timetwister does not affect cards in play. The player casting Timetwister can essentially catch up on cards in hand, and potentially get back powerful cards that were discarded, without giving up a dominant board position. Unlike the other cards in the Power Nine, Timetwister therefore requires a deck to be more carefully built in order to exploit its power. Magic Online: The Power Nine were not available for the first twelve years of Magic Online. They first appeared as a part of Cube Drafts, where players do not keep the cards for their collections after the conclusion of the event. In June 2014, Wizards of the Coast officially supported Vintage as a Magic Online sanctioned format, and Vintage Masters, a booster specifically providing essential parts of the Vintage format, including all Power Nine cards, was released for a limited period. The Power Nine cards appeared only in the premium foil slots of Vintage Masters boosters where they could be either foil or non-foil as a special rarity. On average it took 53 packs of Vintage Masters to open one piece of the Power Nine.The implementation of the Power Nine cards online is functionally identical to the original cards, but the cards are displayed with updated rules text. The versions originally released online feature different artwork and are displayed with a modern card frame. 
With the exception of the Black Lotus, the illustrations are those that were originally given to the winners of the Vintage Championships as alternate Power Nine artworks. The Black Lotus received new artwork by Chris Rahn. Magic Online: In December 2017 Vintage Masters drafts were reintroduced to Magic Online for a week (beginning 12 December and ending 19 December). In this case players could choose between two types of boosters: the classic Vintage Masters boosters, and otherwise identical boosters that included the Power Nine with the cards' original art and borders. MTG Arena: Though some of the cards had appeared in earlier one-off events, the full Power Nine were introduced into Arena with the October 2022 Alchemy update via the card Oracle of the Alpha. When Oracle of the Alpha, a creature, enters the battlefield, you "conjure the Power Nine into your library, then shuffle." Alternate versions: Parodies The Blacker Lotus was a satirical card in the parody Unglued set which produced four mana, although it required the user to physically tear the card up before use. Jack-in-the-Mox from the same set works like a regular Mox but produces either a random color of mana, or destroys itself, depending on a die roll. Mox Lotus, from the later Unhinged parody set, provides infinite mana of any color and immunity to mana burn (now redundant due to rules changes), but costs fifteen mana to play. Alternate versions: Cards in homage to the Power Nine The beloved nature of the Power Nine within the game has occasionally motivated Wizards to create cards that are similar in name and effect to these cards. For example, homages to Black Lotus usually have "Lotus" in their name and in most cases produce three mana of a single color as a one-shot effect. Despite Wizards' attempts to better balance the power level of cards evoking the original Power Nine, in many cases these cards have proven to be of an extremely high power level themselves. For example, Mox Opal, an artifact which can be tapped for any color of mana, similar to the original Moxes, but only if the player has at least three artifacts in play, was long considered one of the most powerful cards in the Modern format, and in January 2020 it was banned from the format for being too powerful. Alternate versions: Alternate art The Power Nine are among the very few widely recognized cards never to have received updated artwork from their original printing. As a way to rectify this, since 2003 the winner of the annual Vintage Championship has received a unique, oversized Power Nine card featuring brand-new art. These prize cards are considerably larger than actual cards and therefore cannot be used in play. The five Mox cards feature artwork that represents the settings of the Magic expansions released in their corresponding years. Their artist, Volkan Baga, has also illustrated two other Mox cards—Mox Opal and the reissued Mox Diamond—in the same style. 
The following cards have been given to the winners: 2003: Black Lotus to Carl Winter (Artwork by Christopher Rush) 2004: Timetwister to Mark Biller (Artwork by Mark Tedin) 2005: Ancestral Recall to Roland Chang (Artwork by Mark Poole) 2006: Mox Pearl to Travis Spero (Artwork by Volkan Baga) 2007: Mox Jet to Stephen Menendian (Artwork by Volkan Baga) 2008: Mox Ruby to Paul Mastriano (Artwork by Volkan Baga) 2009: Mox Emerald to Itou Hiromichi (Artwork by Volkan Baga) 2010: Mox Sapphire to Owen Turtenwald (Artwork by Volkan Baga) 2011: Time Walk to Mark Hornung (Artwork by Chris Rahn) 2012: Timetwister to Marc Lanigra (Artwork by Matt Stewart) 2013: Ancestral Recall to Joel Lim (Artwork by Ryan Pancoast) 2014: Mox Pearl to Mark Tocco (Artwork by Raoul Vitale) 2015: Mox Emerald to Brian Kelly (Artwork by Raoul Vitale) 2016: Mox Sapphire to Joseph Bogaard (Artwork by Raoul Vitale) 2016 EU: Mox Jet to Joan Anton Mateo (Artwork by Raoul Vitale) 2017 EU: Mox Ruby to Joaquín Solís (Artwork by Raoul Vitale)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Community-controlled game** Community-controlled game: A community-controlled game (CCG) is any video game featuring a single avatar that is controlled by more than one person. As the definition of a CCG refers to the way a game is played, rather than aspects of the game itself, a CCG can be played in any genre. A CCG takes in commands from multiple players and issues them to the avatar. However, as players cannot predict what other players will input, the movements and actions of the avatar can be incredibly erratic, depending on the game's input mode. Styles of play: Anarchy Anarchy-mode issues all commands by players to the avatar in sequential order. This method of gameplay gives all individuals an equal ability to influence the movement or actions of the avatar, but none the ability to control it with strategy or rationale. Anarchy-mode is very susceptible to trolls, which makes narrative progress difficult with this style of gameplay. However, players of anarchy-based CCGs often play for the communal experience rather than the game's story. These players will often collectively create a new narrative based on their unpredictable gameplay. Styles of play: Democracy Democracy-mode collects and counts all commands within a set period of time and subjects them to majority rule. This method of gameplay issues only the command that received the most votes within each block of time, allowing strategy and methodology to be used (the code sketch following this article illustrates the idea). While democracy-mode is effective in allowing narrative progress, it often requires planning, voting, and agreement among the majority. Unlike anarchy-mode, democracy-mode is well protected against trolls, unless the majority is made up of them. Current examples: Twitch Plays Pokémon On 12 February 2014, an anonymous programmer from Australia streamed a game of Pokémon Red onto the video game live-streaming website Twitch. Rather than passively watching the streamer play, viewers of Twitch Plays Pokémon were able to actively participate by typing commands into the stream's chatbox. While the original Pokémon Red was a single-player role-playing game, the use of the streamer's code allowed the game to be re-mediated into a CCG. Community-controlled games may exist either as a remediated single-player game or as a game designed to be played by a community. Categories of CCGs: Remediations Currently, most CCGs exist as remediations of single-player games. To convert a single-player game into a CCG, the only requirement is for the controls to be opened up to multiple players. A remediation features changes solely to the gameplay mechanics, leaving everything else unchanged. Categories of CCGs: Dedicated CCGs A dedicated CCG is a game made specifically to be a community-controlled game. These games will be developed with an awareness of their audience, and the narration will address the community as a whole during tutorials or throughout the game. These games will either feature a third-person plural narrative, or what game theorist Chris Milando calls a fourth-person narrative.
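To make the two input modes concrete, here is a minimal sketch of the vote-counting logic behind democracy-mode, written in Java with hypothetical class and method names (it does not reproduce any particular stream's implementation). Commands submitted during a voting window are tallied, and when the window closes only the majority command is issued to the avatar; anarchy-mode would simply skip the tally and forward every command as it arrives.

```java
import java.util.HashMap;
import java.util.Map;

/** Tallies player commands for one voting window and emits the majority choice. */
public class DemocracyVote {
    private final Map<String, Integer> tally = new HashMap<>();

    /** Called once per chat message while the voting window is open. */
    public void submit(String command) {
        tally.merge(command.toLowerCase(), 1, Integer::sum);
    }

    /** Called when the window closes: returns the most-voted command, then resets. */
    public String closeWindow() {
        String winner = tally.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null); // null if no votes were cast in this window
        tally.clear();
        return winner;
    }
}
```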
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TSC2** TSC2: Tuberous Sclerosis Complex 2 (TSC2), also known as Tuberin, is a protein that in humans is encoded by the TSC2 gene. Function: Mutations in this gene lead to tuberous sclerosis. Its gene product is believed to be a tumor suppressor and is able to stimulate specific GTPases. Hamartin, coded by the gene TSC1, functions as a facilitator of Hsp90 in the chaperoning of Tuberin, thereby preventing its ubiquitination and degradation in the proteasome. Alternative splicing results in multiple transcript variants encoding different isoforms of the protein. Mutations in TSC2 can cause lymphangioleiomyomatosis, a disease caused by the enlargement of tissue in the lungs, creating cysts and tumours and causing difficulty breathing. Because Tuberin, along with the protein Hamartin, regulates cell size, mutations in the TSC1 and TSC2 genes may impair the control of cell growth in the lungs of affected individuals. Cell Pathology: Cells from individuals with pathogenic mutations in the TSC2 gene display depletion of lysosomes, impairment of autophagy, and abnormal accumulation of glycogen. Defects in the autophagy-lysosome pathway are associated with excessive ubiquitination and degradation of LC3 and LAMP1/2 proteins. Signaling Pathways: Pharmacological inhibition of ERK1/2 restores GSK3β activity and protein synthesis levels in a model of tuberous sclerosis. The defective degradation of glycogen by the autophagy-lysosome pathway is, at least in part, independent of impaired regulation of mTORC1 and is restored by the combined use of PKB/Akt and mTORC1 pharmacological inhibitors. Interactions: TSC2 functions within a multi-protein complex known as the TSC complex, which consists of the core proteins TSC2, TSC1, and TBC1D7. TSC2 has also been reported to interact with several other proteins that are not part of the TSC complex.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Belly fetish** Belly fetish: A belly fetish (also known as a stomach fetish, or alvinolagnia) is a partialism in which an individual is sexually attracted to the midriff or belly. Description: The belly is widely considered an erogenous region, meaning it holds multiple nerve endings that make it sensitive to various sensations. Having a belly fetish therefore usually coincides with belly-related sexual acts, including but not limited to touching or rubbing the belly region, using sex toys and other objects (e.g., food, candles, ice, feathers, massage oils) to stimulate the belly region, rubbing one's belly against a partner's belly, or licking or sucking the navel. For this reason, alvinolagnia often co-exists with navel fetishism (a.k.a. alvinophilia). Overall, the belly fetish is a form of partialism. Description: Belly-to-belly contact Individuals with alvinolagnia tend to enjoy having sexual intercourse in the missionary position, given the position's heightened belly-to-belly contact between partners. It is theorized that this sexual desire for belly-to-belly contact is linked to the evolutionary need for ventral-ventral contact when being nursed as an infant, or to enticing feelings of being nurtured and loved. One participant in a social experiment involving belly-to-belly contact described the act as "a very intimate thing, even when it's not meant to be." The evolutionary need for ventral-ventral contact may also contribute to sexual arousal during objectively non-sexual belly-to-belly contact, which may happen when hugging or cuddling while wearing skin-revealing clothing (e.g., a crop top or bikini); taking part in some forms of partnered dance (e.g., bachata); or participating in sports involving belly-to-belly contact, either due to the sport's nature (e.g., wrestling, mixed martial arts) or as a strategy for obtaining rest, breaking up an opponent's rhythm, heightening camaraderie during play, and/or eliciting post-play celebration (e.g., boxing, beach volleyball). Cultural background: Western culture Some assume that alvinolagnia is a cause of the prevalent Western fashion of female midriff exposure. In the Victorian era, a small waist was considered the main trait of a beautiful woman. The advent of bikinis in 1946, the cheerleading fashion of the 1970s, and the low-rise fashion that started in the early 1990s have contributed to a widespread fascination with the belly region. Specific breakthroughs of the belly region being featured in American media include Cher in the 1970s The Sonny & Cher Comedy Hour, as well as the character Ariel in Disney's The Little Mermaid (1989). Midriff exposure also became common in the culture of 20th-century music, with many famous female pop stars appearing onstage, offstage, and in music videos with their midriffs exposed. Cultural background: Some are attracted to women wearing a crop top or bikini. Despite the prevalence of alvinolagnia, midriff exposure, and sexual belly-to-belly contact throughout Western pop culture, it is rare for belly-to-belly contact to be featured in Western media in a non-sexual tone. Nonetheless, non-sexual belly-to-belly contact in Western media generally represents either the establishment of a non-sexual friendship or the strengthening of an existing bond between two people. 
For example, the North American sitcom Will & Grace features two characters, Jack and Karen, who initiate and periodically bolster their long-lasting friendship via non-sexual belly-to-belly touch, a quirk so well known that it appeared on the show's holiday special. More recently, non-sexual belly-to-belly touch became a key characteristic of Bayley and Sasha Banks' The Boss n' Hug Connection, a former women's professional wrestling tag team known for engaging in a post-match celebration involving belly-to-belly hugs. Cultural background: Middle Eastern culture The eastern art of belly dancing places the female midriff on center stage. The dance movements of the torso are considered seductive. Cultural background: Indian culture The bare female midriff is considered attractive and erotic in India. Baring the midriff has always been a fashion in Indian women's attire. Indian women, especially South Indian women, have traditionally worn saris that bare the midriff, and the exposure of the midriff in a sari is considered erotic. The midriff is also revealed in other traditional female attire such as the ghagra choli. Belly chains, known as kamarband in India, are considered sensuous when worn with low-rise saris and lehengas. Most Indian women wear belly chains during weddings and other ceremonies as a show of culture and tradition. Nowadays, women have been pairing these chains with Western outfits, mostly to draw attention to their figures. Men are intrigued by the demure floor-length attire and the tantalising display of a bare midriff at the back. Indian actress Ileana D'Cruz commented that during the shoot of her debut film there were shots where a big porcelain seashell was thrown on her belly and flowers were arranged around her waist, and stated that the belly and navel are supposed to be a mark of a woman's beauty in South Indian films, whose makers believe that the waistline is the most attractive part. Indian singer Chinmayi once tweeted against a fan's request for sarees during performances, saying, "Groups of men...take photographs of my waist + side of my chest, circle it and upload it on soft porn websites." and "I get messages on how they're masturbating to it." Some Indian men are aroused by pinching a woman on her midriff bared by the sari. This scenario was depicted in an advertising campaign for a leading construction company group in India. With the tagline "Everything you love, is in arm's reach", it featured a man at the office extending his arm out to pinch his wife's midriff at home, with her expressing joy by smiling and biting her lower lip. It was featured as a full-page advertisement in the December 6, 2013 Chennai issue of the Times of India. Cultural background: Accessories and tattoos Some people wear accessories like belly chains, navel piercings, tattoos, etc., to enhance the appearance of the belly. A belly chain can be a delicate thin one or a heavy thick one. Rebecca Perrin, managing editor of digital at the Canadian magazine Flare, stated in an article, "a woman's waist and hips are two of the most physically attractive body parts there are – emphasizing them shouldn't immediately be considered a faux pas and should instead be encouraged." Celebrities like Beyoncé, Rihanna, and Miley Cyrus are known for flaunting their belly chains. Navel piercings and navel tattoos have become more common among young women. The trend of piercing or tattooing the navel became popular in the 1990s, and it is also popular among middle-aged women. Some belly chains attach to a navel piercing; these are called "pierced belly chains". 
Cultural background: Similar to navel piercings, hip piercings are also popular among women as an expression of a bold personality. Some get stomach tattoos to attract the attention of onlookers, but these tattoos are more commonly preferred by women. There are many variations in design, from tribal patterns to flowers. Some women even get these tattoos drawn on their lower backs and flaunt them in low-rise jeans, shorts, or skirts. Sometimes, looser clothing, such as scarves or skirts around the female waist and curves, can be sexually appealing. Scarves wrapped around the waist are common among belly dancers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Blue Chairs** Blue Chairs: Blue Chairs is an interactive fiction game by American author Chris Klimas. Plot: The piece opens at a party, where a man offers the player a bottle of a mysterious green fluid. After drinking it, the PC passes out, but is shortly awoken by a man bringing a phone message from a long-lost love. The game then explores the player's experiences of what may be a hallucination and may be reality. Notable segments include a fantasy about being elected President in the desert, some "wine" which enhances dancing skills, and a network of tunnels hidden in the back of a freezer. History: Blue Chairs is considered a work of modern-themed interactive fiction. It was designed by Chris Klimas, who developed the game using the Inform programming language created by Graham Nelson. The game was released as freeware in 2004. Reception: Blue Chairs took second place at the 2004 Interactive Fiction Competition, praised for its inventive style and rich storytelling. Subsequently, it received the awards for Best Game, Best Writing, and Best Story at the annual XYZZY Awards. It was also nominated for Best Individual Puzzle, Best NPCs, and Best Individual PC. It was ranked #34 in the 2011 edition of the Interactive Fiction Top 50 of all time. In 2016, author Adam Cadre analyzed Blue Chairs in his Radio K podcast.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Yizhi capsule** Yizhi capsule: Yizhi capsule (YZC) is a type of traditional Chinese medicine (TCM) developed for treating vascular dementia. Studies: A study in rats concluded that Yizhi capsule (YZC) improves learning and memory impairment and has a good protective effect against Aβ25–35-induced neurotoxicity in SD rats. Another study assessed the efficacy of Yizhi capsule in treating senile dementia. The results showed that Yizhi capsule could remarkably increase the mini-mental state examination (MMSE) and Hamilton Depression Scale (HDS) scores of patients with vascular dementia. Moreover, Yizhi capsule improved cerebral blood flow, brain electrical activity monitoring (BEAM) and hemorheological indexes, especially in abnormal cases. Another study mentions that the raised score on the revised Hasegawa dementia scale (HDS) demonstrated that the effect of Yizhi capsules in treating loss of intellectual function after cerebrovascular diseases was significantly better than that of the drug piracetam; the study concluded that the data indicate that Yizhi capsule is a relatively good preparation for the prevention of vascular dementia. A Cochrane review in 2007, however, concluded that none of the studies conducted up to that time used methods that could provide strong evidence that the capsules are an effective treatment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GLOP** GLOP: GLOP (the Google Linear Optimization Package) is Google's open source linear programming solver, created by Google's Operations Research Team. It is written in C++ and was released to the public as part of Google's OR-Tools software suite in 2014. GLOP uses a revised primal-dual simplex algorithm optimized for sparse matrices. It uses Markowitz pivoting to reduce matrix fill-in, steepest-edge pricing to avoid degenerate pivots, and an LU decomposition tailored for sparse matrices. GLOP: Inside Google, GLOP is used to stabilize YouTube videos; outside Google, it has been used to perform fast linear relaxations for reinforcement learning.
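A minimal sketch of how GLOP is commonly invoked through OR-Tools' Python wrapper follows; the tiny LP being solved (its variables, bounds, constraints and objective) is made up purely for illustration and is not from the article.

```python
# Minimal sketch: solving a toy LP with GLOP via OR-Tools' Python wrapper.
# The model below is illustrative only; install with: pip install ortools
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")  # request the GLOP backend

x = solver.NumVar(0, 10, "x")   # 0 <= x <= 10
y = solver.NumVar(0, 10, "y")   # 0 <= y <= 10

solver.Add(x + 2 * y <= 14)     # linear constraints
solver.Add(3 * x - y >= 0)

solver.Maximize(3 * x + 4 * y)  # linear objective

status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
    print("objective =", solver.Objective().Value())
    print("x =", x.solution_value(), "y =", y.solution_value())
```

GLOP is selected simply by naming it as the backend in CreateSolver; the same model-building code can be pointed at OR-Tools' other solver backends unchanged.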
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metasfresh** Metasfresh: metasfresh is free and open source ERP software designed and developed for SMEs. metasfresh is an actively maintained fork of ADempiere and can be used and distributed freely. It does not require a contributor license agreement from partners or contributors. There is no closed source code, and planning and development happen openly in the community. metasfresh was included in opensource.com's list of the top 9 open source ERPs to consider. History: In September 2006, the founders of metasfresh started with open source ERP development as early contributors to the ADempiere ERP project. They were founding members of the ADempiere Foundation and longtime members of the functional and technical teams at ADempiere. In industry-specific ERP projects in the SME sector they developed several new features based on ADempiere 3.5.4 and rewrote the majority of the ADempiere code to produce more maintainable, flexible and scalable software for midsize companies. The user base they built up demanded shorter and more reliable release cycles to allow more flexibility in providing solutions for their requirements. This, together with the gap that had already opened up relative to the latest ADempiere codebase, led the team to decide in 2015 to officially fork from ADempiere and continue development in a new project called metasfresh. History: Since the code was released to the public on October 6, 2015, community and development activity has risen quickly. Despite the fork's young age, metasfresh is currently one of the most active open source ERP projects worldwide according to OpenHub statistics. Technology: Software & Architecture metasfresh is written in Java and JavaScript and works with the PostgreSQL database management system. The development repository is publicly available on GitHub. The software is composed of client and server components. The main client is a Java Swing user interface, available for production environments. A new web interface is currently under development. Technology: Used Technologies: Web frontend: HTML5, PostCSS, JavaScript, React, Redux. Java frontend: Java 8, Swing. Java application server: Tomcat, Spring Framework, OpenJDK, JasperReports. Database: PostgreSQL 9.5. Integration: ServiceMix, RabbitMQ, ActiveMQ, Camel. API: REST, JSON, Swagger, Spring Framework, Hazelcast, Elasticsearch, Kibana. Mobile application: Vaadin. Business functionalities/features: The feature list of metasfresh covers the majority of the requirements of medium-sized enterprises for ERP software and is comparable with proprietary ERP systems. Differences to the ADempiere Project: After the fork from Compiere, the ADempiere community followed the open-source model of the bazaar described in Eric Raymond's article The Cathedral and the Bazaar. The community and codebase grew fast. Development mainly relied on the architecture inherited from Compiere, which was tightly coupled to the database. This architecture, combined with fast-growing complexity, led to ever longer release cycles. Additionally, ADempiere is licensed under GPL 2. The number of open source projects with GPL 2-compatible licenses is decreasing, so further development would increasingly have to rely on in-house work, which is a threat to the competitive development of open source enterprise software. Differences to the ADempiere Project: With the fork, metasfresh chose a different approach.
The main aims of the project are: Quality Assurance: Building a modern architecture and decoupling the application from the data layer. The aim is to extend the automated testing possibilities, which are a prerequisite for shorter release cycles with growing functionality. Legal: Completely rewriting the ADempiere code so that the license can be switched from GPL 2 to GPL 3, allowing a larger number of modern open source projects to be chosen for further incorporation and development. Efficiency: Consistent use of tools to enable efficient work from requirements analysis through development and testing to build and deployment. Flexibility: Providing a highly flexible framework for business processes based on a new disposition framework, with functional extension points that allow external systems to bind to metasfresh ERP. According to the project's release notes, the time between stable releases, including bug fixes and new features, is currently one week.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sex differences in autism** Sex differences in autism: Sex and gender differences in autism exist regarding prevalence, presentation, and diagnosis. Sex differences in autism: Men and boys are more frequently diagnosed with autism than women and girls. It is debated whether this is due to a sex difference in rates of autism spectrum disorders (ASD) or whether females are underdiagnosed. The prevalence ratio is often cited as about 4 males for every 1 female diagnosed, though other research indicates that it is closer to 3:1 or 2:1. One in every 42 males and one in every 189 females in the United States is diagnosed with autism spectrum disorder. There is some evidence that females may also receive diagnoses somewhat later than males; however, thus far results have been contradictory. Several theories exist to explain the sex-based discrepancy, such as a genetic protective effect, the extreme male brain theory and phenotypic differences in presentation between the sexes, which may all be intertwined. Researchers have also debated whether a diagnostic gender bias has played a role in females being underdiagnosed with autism spectrum disorder, and have speculated about a gender bias in parental reporting due to the expectations and socialization of gender roles in society. Since autism is a largely genetic and hereditary condition, genetic factors that lead to differences depending on sex come into play, such as the role of androgen signaling in male development or X-linked mutations, whose associated genetic conditions are typically more common and severe in males. The extreme male brain theory suggests that autistic brains show an exaggeration of the features associated with male brains, such as increased size and decreased relative connectivity as well as systematic thinking over empathetic thinking. The imprinted brain hypothesis suggests genomic imprinting is at least partly responsible for the sex differences in autism and points to the evidence for a common genetic cause with schizophrenia. Compared to men, women are generally required to be more impaired by their autism or have more cognitive or behavioral conditions than their male counterparts to meet autism spectrum criteria. There is evidence of increased incidence of social anxiety, anorexia nervosa and self-harm in autistic females, though the increased rates of anorexia nervosa and other eating disorders may be due to confusion or conflation with avoidant/restrictive food intake disorder (ARFID), which is particularly common in autism. Sex differences in autism: Autistic girls and women show higher social motivation and a greater capacity for typical friendships than autistic boys and men; are less likely to be hyperactive, impulsive, or to have issues with conduct or stereotyped behavioral traits; and have been shown to mask their autistic behaviors and social difficulties more frequently than autistic men. Autistic males often exhibit more easily observed behaviors at a younger age, resulting in parental observance and subsequent evaluation of the child. In contrast, the behavior of young females is more often overlooked, regardless of any associated at-risk factors for ASD or other developmental delays. Ultimately, this may contribute to females more frequently receiving their ASD diagnosis later in life than their male counterparts. There is a growing consensus among neuroscientists that the number of autistic women has been vastly underrepresented due to the assumption that autism is primarily a male condition.
Background: Hans Asperger was one of the first people to study autism, with all four of his study subjects being male. Another early researcher, Leo Kanner, described "autistic disturbances of affective contact" in a group consisting of eight boys and three girls. Today, Autism Spectrum Disorder is commonly defined as a neurological developmental disorder with symptoms of poor social communication, repetitive behaviors, sensory sensitivities, executive dysfunction, and hyper-fixations. In the modern day, women are less likely to be diagnosed as autistic than men; they are often misdiagnosed or not noticed to be neurodivergent by doctors. Women are also more likely to be diagnosed as autistic at a later age than men. This discrepancy in diagnoses is believed to be caused at least partially by camouflaging, a common autistic phenotype presented by females, which hides autistic traits. Theories explaining gender diagnosis disparity: Extreme male brain theory Extreme Male Brain Theory is an extension of the Empathizing-Systemizing Theory, which categorizes people into five different groups based on their empathizing and systemizing expressions. In the general neurotypical population, females have a greater ability to empathize, and males have a greater ability to systemize. Simon Baron-Cohen's extreme male brain theory states that autistic males have higher doses of prenatal testosterone and on average have a more systemizing brain, as opposed to the more empathizing female brain. He suggests that autistic brains show an exaggeration of the features associated with male brains. These are mainly size and connectivity, with males generally having a larger brain, which is seen in an exaggerated form in those with ASD. Individuals with ASD were found to have widespread abnormalities in interconnectivity and general functioning in specific brain regions. This could explain the different results on empathy tests between men and women as well as the deficiencies in empathy seen in ASD, as empathy requires several brain regions to be activated which need information from many different areas of the brain. Baron-Cohen therefore argues that genetic factors play a role in autism prevalence and that children with technically minded parents are more likely to be diagnosed with autism. Although autistic females have been documented to have higher testosterone levels, which could support the Extreme Male Brain theory, not all autistic females show male-specific symptoms, leaving the Extreme Male Brain theory of Autism Spectrum Disorder controversial. Theories explaining gender diagnosis disparity: Imprinted brain hypothesis The imprinted brain theory suggests genomic imprinting is at least partly responsible for the sex differences in autism and implicates schizophrenia as well, claiming that genetic and physiological evidence suggests the two conditions are on a spectrum in which some mutations in certain genes cause lower social cognition but higher practical cognition (autism) while other mutations in the same genes cause lower practical cognition with higher social cognition (schizophrenia). Theories explaining gender diagnosis disparity: Female protective effect hypothesis According to the female protective effect hypothesis, more extreme genetic mutations are required for a girl to develop autism than for a boy. In 2012, Harvard researchers published findings suggesting that, on average, more genetic and environmental risk factors are required for girls to develop autism, compared to boys.
The researchers analyzed DNA samples of nearly 800 families affected by autism and nearly 16,000 individuals with a variety of neurodevelopmental disorders. They looked for various types of gene mutations. Overall, they found that females diagnosed with autism or another neurodevelopmental disorder had a greater number of harmful mutations throughout the genome than did males with the same disorders. Women with an extra X chromosome, 47,XXX or triple X syndrome, have autism-like social impairments in 32% of cases. Theories explaining gender diagnosis disparity: Hypothesis of female under-diagnosis The prevalence ratio is often cited as about 4 males for every 1 female diagnosed, though other research indicates that it is closer to 3:1 or 2:1. Some authors, clinicians and experts like Judith Gould, Tony Attwood, Lorna Wing and Christopher Gillberg have proposed that autism in females may be underdiagnosed due to better natural superficial social mimicry skills in females, a partially different set of symptoms, and less knowledge about autism in females among experts. In his foreword to the book Asperger's and Girls, Attwood writes: "These tentative explanations for the apparent underrepresentation of girls with Asperger's Syndrome have yet to be examined by objective research studies." Specifically, Gould has discussed the idea that a pervasive developmental disorder called pathological demand avoidance, which is not officially included in diagnostic manuals, may offer a glimpse into how autism in females may present in some cases. Another clinician, William Mandy, hypothesized that referrals for ASD assessment are often started by teachers. Girls with ASD may sometimes lack the skills of social communication, and this is not noticed until they are in a school setting. Therefore, girls suggested to have ASD may receive delayed or no clinical assessment. Compared with males, females with autism are more likely to mask their restricted interests (strong or intense interests in specific topics or objects), which could decrease the chances of diagnosis. Theories explaining gender diagnosis disparity: Female phenotype Some have suggested a differential phenotype for autistic women; "a female-specific manifestation of autistic strengths and difficulties, which fits imperfectly with current, male-based conceptualisations" of autism. Autistic women have been shown to score higher in self-reports of autistic masking, which may factor into the different phenotype. One study found evidence for a diagnostic bias against girls who meet criteria for ASD. In some cases where females showed severe autistic traits, they failed to meet the criteria for a diagnosis because of the lack of sensitivity to the female phenotype. Theories explaining gender diagnosis disparity: Camouflaging The DSM-5 mainly looks at two categories of autism spectrum symptoms when diagnosing someone: social deficits and restricted/repetitive behaviors and interests. Both of these categories of symptoms can be hidden by an aspect of the autistic female phenotype known as camouflaging. Autistic girls tend to camouflage more than boys, which leads to many of their symptoms being hidden and going unnoticed by professionals. When it comes to social camouflaging, there are three sub-categories according to the Camouflaging Autistic Traits Questionnaire (CAT-Q): Masking, Assimilation, and Compensation. Masking is the act of constantly monitoring one's behavior in order to hide one's autistic traits and/or putting on a fake persona.
Assimilation is known as "hiding in plain sight", or trying to blend in with non-autistic peers. Finally, compensation is trying to over-compensate for a lack of social abilities. Examples of this can include mimicking real or fictional people, over-exaggerating non-verbal expressions, and creating scripts or rules for having a conversation with someone. Camouflaging can also be used to hide repetitive/restricted behaviors and interests. In fact, researchers have found that autistic girls are ten times more likely not to originally meet the DSM-5 criteria for restricted/repetitive behaviors. Sensory overstimulation is another autistic trait that can be hidden by masking. Participants in the Hull et al. study would internalize their overwhelming feelings and try to channel them through small and unnoticeable everyday objects. If those objects were not enough to calm them down, they would try to leave the environment and recuperate, making "regular excuses" as to why they needed to leave. Theories explaining gender diagnosis disparity: Downfalls of camouflaging Studies have shown that high levels of camouflaging can lead to higher levels of anxiety and depression and can increase the risk of suicidal ideation. Studies have also found that camouflaging can lead to a skewed sense of self. This is especially the case for people who have been masking and mimicking other people for long periods of time. Another factor of masking is mental and physical exhaustion after a camouflaging session. According to the participants of the Hull et al. (2017) study, the longer autistic individuals camouflage, the worse the exhaustion becomes and the longer these individuals need to rest and recharge. This study also found increased amounts of anxiety and stress revolving around camouflaging, because the participants were often worried that they did not mask enough, did not mask correctly, or did not reach the desired effects of masking in that camouflaging session. Another factor that increased anxiety and exhaustion while camouflaging is that it "involved a constant monitoring of the situation, as if training oneself in self-monitoring, self-awareness, and monitoring others' reactions, both during and after the interaction occurred." Differences in gender and sexuality identification: Sexuality is often discussed within the autistic community, with many observations that identities other than cis-hetero seem to be more common than is observed in the neurotypical population. There have not been many formal studies on this to date; however, members of the community speculate that autistic individuals generally have different ideals, perceptions and desires than neurotypicals, or simply do not comprehend or agree with society's expectations, making them more apt to diverge from the norm. Differences in gender and sexuality identification: A study looking at the co-occurrence of ASD in patients with gender dysphoria found 7.8% of patients to be on the autism spectrum. Another study consisting of online surveys that included those who identified as nonbinary and those identifying as transgender without diagnoses of gender dysphoria found the number to be as high as 24% of gender diverse people having autism, versus around 5% of the surveyed cisgender people.
A possible hypothesis for the correlation may be that autistic people are less willing or able to conform to societal norms, which may explain the high number of autistic individuals who identify outside the stereotypical gender binary. As yet, there have been no studies specifically addressing the occurrence of autism in intersex individuals. Differences in gender and sexuality identification: A study conducted by Byers and Nichols (2014) explored the level of sexual satisfaction of high-functioning autistic individuals, with researchers testing the sexual and relationship satisfaction of neurotypical versus high-functioning autistic individuals. The results suggest that men with ASD are generally less satisfied with their relationship or marriage than neurotypical men, neurotypical women, and women with ASD.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Herbrand's theorem** Herbrand's theorem: Herbrand's theorem is a fundamental result of mathematical logic obtained by Jacques Herbrand (1930). It essentially allows a certain kind of reduction of first-order logic to propositional logic. Although Herbrand originally proved his theorem for arbitrary formulas of first-order logic, the simpler version shown here, restricted to formulas in prenex form containing only existential quantifiers, became more popular. Statement: Let (∃y1,…,yn)F(y1,…,yn) be a formula of first-order logic with F(y1,…,yn) quantifier-free, though it may contain additional free variables. This version of Herbrand's theorem states that the above formula is valid if and only if there exists a finite sequence of terms tij, possibly in an expansion of the language, with 1 ≤ i ≤ r and 1 ≤ j ≤ n, such that F(t11,…,t1n) ∨ … ∨ F(tr1,…,trn) is valid. If it is valid, it is called a Herbrand disjunction for (∃y1,…,yn)F(y1,…,yn). Informally: a formula A in prenex form containing only existential quantifiers is provable (valid) in first-order logic if and only if a disjunction composed of substitution instances of the quantifier-free subformula of A is a tautology (propositionally derivable). Statement: The restriction to formulas in prenex form containing only existential quantifiers does not limit the generality of the theorem, because formulas can be converted to prenex form and their universal quantifiers can be removed by Herbrandization. Conversion to prenex form can be avoided if structural Herbrandization is performed. Herbrandization can be avoided by imposing additional restrictions on the variable dependencies allowed in the Herbrand disjunction. Proof sketch: A proof of the non-trivial direction of the theorem can be constructed according to the following steps: If the formula (∃y1,…,yn)F(y1,…,yn) is valid, then by completeness of cut-free sequent calculus, which follows from Gentzen's cut-elimination theorem, there is a cut-free proof of ⊢ (∃y1,…,yn)F(y1,…,yn). Starting from above downwards, remove the inferences that introduce existential quantifiers. Remove contraction-inferences on previously existentially quantified formulas, since the formulas (now with terms substituted for previously quantified variables) might not be identical anymore after the removal of the quantifier inferences. Proof sketch: The removal of contractions accumulates all the relevant substitution instances of F(y1,…,yn) in the right side of the sequent, thus resulting in a proof of ⊢ F(t11,…,t1n),…,F(tr1,…,trn), from which the Herbrand disjunction can be obtained. However, sequent calculus and cut-elimination were not known at the time of Herbrand's theorem, and Herbrand had to prove his theorem in a more complicated way. Generalizations of Herbrand's theorem: Herbrand's theorem has been extended to higher-order logic by using expansion-tree proofs. The deep representation of expansion-tree proofs corresponds to a Herbrand disjunction when restricted to first-order logic. Herbrand disjunctions and expansion-tree proofs have been extended with a notion of cut. Due to the complexity of cut-elimination, Herbrand disjunctions with cuts can be non-elementarily smaller than a standard Herbrand disjunction. Herbrand disjunctions have been generalized to Herbrand sequents, allowing Herbrand's theorem to be stated for sequents: "a Skolemized sequent is derivable iff it has a Herbrand sequent".
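As a concrete illustration of the statement above (a standard textbook-style example, not drawn from this article's references), consider the valid prenex-existential formula below; taking the terms t11 = c and t21 = f(c) gives a two-disjunct Herbrand disjunction:

```latex
% Valid prenex-existential formula:
\[ \exists y\,\bigl(P(y)\rightarrow P(f(y))\bigr) \]
% Herbrand disjunction for it, with terms t_{11} = c and t_{21} = f(c):
\[ \bigl(P(c)\rightarrow P(f(c))\bigr) \;\lor\; \bigl(P(f(c))\rightarrow P(f(f(c)))\bigr) \]
```

Abbreviating A = P(c), B = P(f(c)) and C = P(f(f(c))), the disjunction (A → B) ∨ (B → C) is a propositional tautology: if A → B fails, then B must be false, so B → C holds. The single instance P(c) → P(f(c)) is not a tautology on its own, so r = 2 disjuncts are genuinely needed here.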
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Inferior ligament of epididymis** Inferior ligament of epididymis: The inferior ligament of the epididymis is a strand of fibrous tissue which is covered by a reflection of the tunica vaginalis and connects the lower aspect of the epididymis with the testis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CD109** CD109: CD109 (Cluster of Differentiation 109) is a human gene. CD109 is a GPI-linked cell surface antigen expressed by CD34+ acute myeloid leukemia cell lines, T-cell lines, activated T lymphoblasts, endothelial cells, and activated platelets (Lin et al., 2002). In addition, the platelet-specific Gov antigen system (HPA15), implicated in refractoriness to platelet transfusion, neonatal alloimmune thrombocytopenia, and posttransfusion purpura, is carried by CD109 (Kelton et al., 1990; Lin et al., 2002). [supplied by OMIM]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Partial monosomy 13q** Partial monosomy 13q: Partial monosomy of chromosome 13q is a monosomy that results from the loss of all or part of the long arm of chromosome 13 in human beings. It is a rare genetic disorder which results in severe congenital abnormalities which are frequently fatal at an early age. Up until 2003, more than 125 cases had been documented in medical literature. Symptoms and signs: Symptoms vary from case to case, and may correlate to how much of the chromosome is missing. Symptoms that are frequently observed with the condition include: low birth weight; malformations of the head; eye abnormalities; defects of the hands and feet, including polydactyly; reproductive abnormalities (males); and psychological and motor retardation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Generative second-language acquisition** Generative second-language acquisition: The generative approach to second language (L2) acquisition (SLA) is a cognitively based theory of SLA that applies theoretical insights developed from within generative linguistics to investigate how second languages and dialects are acquired and lost by individuals learning naturalistically or with formal instruction in foreign, second language and lingua franca settings. Central to generative linguistics is the concept of Universal Grammar (UG), a part of an innate, biologically endowed language faculty which refers to knowledge alleged to be common to all human languages. UG includes both invariant principles and parameters that allow for variation, which place limitations on the form and operations of grammar. Subsequently, research within the Generative Second-Language Acquisition (GenSLA) tradition describes and explains SLA by probing the interplay between Universal Grammar, knowledge of one's native language and input from the target language. Research is conducted in syntax, phonology, morphology, phonetics and semantics, and has some relevant applications to pragmatics. Generative second-language acquisition: Some of the main questions in GenSLA include: whether UG is available to the adult L2 learner to guide acquisition and to what extent; whether L2 learners can reset linguistic parameters from their L1; whether second-language learners experience difficulties interfacing between different modules of the grammar; and whether child second language acquisition differs from that of adults. As generative second language research endeavours to explain the totality of L2 acquisition phenomena, it is also concerned with investigating the extent of linguistic transfer, maturational effects on acquisition, and why some learners fail to acquire a target-like L2 grammar even with abundant input. Furthermore, studying L2 acquisition through a generative lens gives linguists a better idea of the natural constraints on human languages and the inner workings of Universal Grammar. Research in generative second-language acquisition is presented at a range of conferences, including: GASLA (Generative Approaches to Second Language Acquisition), GALANA (Generative Approaches to Language Acquisition - North America), and BUCLD (Boston University Conference on Language Development). Generative second-language acquisition: Prominent researchers of the topic include Suzanne Flynn of MIT, Bonnie Schwartz of the University of Hawaii, Antonella Sorace of the University of Edinburgh, and Lydia White of McGill University. History: Pre-GenSLA: 1960s-1970s In the late 1960s and early 1970s, researchers observed that the language and errors of L2 learners were not random but systematic and evidence of rule-governed behaviour. From this observation researchers proposed the concept of interlanguage, which refers to the language system used by L2 learners that contains interacting linguistic aspects of both the L1 and L2. This theory regarding the interlanguage suggests that L2 learners have mental grammars that can be described with rules and principles.
History: The Beginnings of GenSLA: 1980s-1990s The history of GenSLA research begins in the 1980s, prompted by two interconnected questions: the logical problem of language acquisition, and how the logical problem of language acquisition applies to L2 acquisition in adulthood. The logical problem of language acquisition refers to the observable mismatch between the primary linguistic data (PLD), or language-specific input a child is exposed to, and the state of their eventual language system; that is, children appear to acquire their native language quickly and with little negative feedback even when the input is uneven, inconsistent and unrepresentative of their ultimate linguistic competence. Some suggest, in an argument commonly known as Poverty of the Stimulus (POS), that there are, in fact, certain properties of language that are too abstract, subtle and complex to be acquired by language input and the operation of domain-general cognitive mechanisms alone. Similarly, children are not exposed to a rich enough wealth of linguistic data to be able to acquire all the rules and principles of their distinct language. Therefore, an extra component, such as UG, which consists of innate domain-specific linguistic knowledge, is needed to account for these POS properties. Subsequently, starting from the assumption of UG, GenSLA researchers asked how the problem of language acquisition applies to L2 acquisition in adulthood. This encompassed questions about what similarities and differences exist between child L1 acquisition and adult L2 acquisition and, in particular, whether or not adults also have access to UG. Indeed, most theories and research in the first two decades of GenSLA actually revolved around this singular question, to which there are four proposed answers: L2 learners have direct or full access to UG; L2 learners have partial access to UG; L2 learners have indirect access to UG; L2 learners have no access to UG. GenSLA researchers assumed during these early decades that if they could show that a particular POS property operated or did not operate in L2 grammar, they could generalize to other POS properties and to UG accessibility or non-accessibility in general. Because an L2 learner's L1 contains UG information available for transfer to their L2, it was thought that the strongest case for L2 access to UG would be evidence of knowledge in L2 learners that constituted instances of POS properties that are non-transferable; in other words, linguistic knowledge that could not be learned from L2 input, explicit learning, transfer from L1 knowledge, or the operation of domain-general cognitive mechanisms. History: Feature Focused: Late 1990s-early 2000s The field of GenSLA research experienced significant theoretical developments in the late 1990s and early 2000s, following changes in generative linguistic theory inspired by Chomsky's minimalist program. These changes shifted the debate from questions solely about access to UG to the consideration of specific features in L2 grammars and how they are represented. The features under consideration here are linguistic units that reflect grammatical meanings such as tense, case, number, person and gender, or conceptual meanings such as evidentiality, habitual aspect and definiteness. One key characteristic of these features is that they reflect variation across languages in their overtness, which became particularly important to GenSLA research. A feature of a word or phrase is said to be overt if there is surface evidence of its existence within that word or phrase.
By contrast, a feature of a word or phrase is said to be covert if there is no surface evidence of its existence within that word or phrase. This made interesting predictions about adult L2 learning behaviour, for example, that L2 overt morphology should be easier to acquire if the learner has similar overt features in their L1. In one relevant study it was shown that Russian but not Japanese L2 learners of English were, in line with these predictions, reliably sensitive to English plural errors (Russian has overt plural morphology while Japanese does not). Another important element of these features for GenSLA research is interpretability. A feature is said to be interpretable if it contributes to sentence meaning and uninterpretable if it has grammatical significance only. This predicted that only meaningful features should be accessible to adult L2 learners and that purely grammatical features should not be accessible for L1 transfer. No access and partial access theories sometimes adopted this distinction, arguing that it explains much of the variation attested in adult L2 grammars. For example, the fact that Chinese speakers learning English as an L2 often omit third-person singular agreement morphology in obligatory contexts could easily be explained because these features are uninterpretable in Chinese. History: New Populations: 2000s Onwards By the 2000s it was generally accepted that adult SLA differed from child L1 acquisition in process and typical outcomes, and there was evidence for adult accessibility of at least some properties of UG. This motivated GenSLA theory to shift focus from questions just about UG accessibility and specific features to describing and explaining variation at group and individual levels. The last decade has also seen a significant increase in GenSLA studies that examine SLA in populations complementary to L2 acquisition, including heritage bilingualism, child L2 acquisition, and multilingual acquisition, to gain new insights into the latter. For example, it was found that heritage bilinguals diverge from monolinguals in the ultimate state of their eventual language system in ways similar to adult L2 learners, even though they are native speakers and even when the learning process takes place in a naturalistic setting in early childhood. This casts doubt on the critical period (CP) hypothesis that age is the determining factor in convergent language acquisition, another rich area of debate in GenSLA research. With respect to child L2 acquisition, it was hypothesized that if child and adult L2 learners follow the same developmental path, this would call into question the claims made by some GenSLA researchers that differences between L1 and L2 learners are due to the inaccessibility of UG. This is because in GenSLA child L2 learners under the age of 7 to 8 are hypothesized to have access to UG. Thus, if the developmental paths of child and adult L2 learners overlap significantly, it is likely that the basis of difference is the shared experience they have with their L1. If, however, they follow different developmental paths, this would seem to support the claim that adult L2 learners do not have access to UG; their learning must instead be due to other factors. Finally, in multilingual acquisition, if it were shown that adult L2 learners can transfer POS properties only available from their L2 to their L3 or L4, etc., this could also be used to cast doubt on the CP hypothesis. In addition, there has been a movement towards examining children's L2 acquisition.
The study of child SLA is argued to be an important way of examining both child L1 acquisition and adult L2 acquisition. Unlike adults, children acquiring an L2 are considered to have full and direct access to Universal Grammar, and are typically more successful at retaining the L2 and reaching a state of fluency. Some scholars have argued that examining child L2 acquisition is an essential tool in resolving the debate over adult access to UG. Most recent work on child L2 acquisition within the generative framework has focused on the following three major issues: L1 influence in child L2 acquisition; the availability of functional categories (with emphasis on the acquisition of tense-agreement and tense-aspect); and morphological variability. Access Theories: No Access Theories of no access argue that adult second language learners do not have access to UG. One source of evidence for this position stems from research observations made in the 1970s and 80s that children experience a critical period, a reduced ability over time to acquire a functional L1 morphosyntactic system, which ends around puberty. L2 acquisition, however, does not share this similarity with late L1 acquisition, L2 learners being generally more successful than the latter. Additionally, child L2 and adult L2 learners differ greatly in the developmental paths they take and in their ultimate attainment. The Fundamental Difference Hypothesis refers to how the linguistic methods of language acquisition applied in early childhood are not available to adult learners, which points to a fundamental difference in access to UG between child and adult learners. Adult L2 acquisition resembles the process of general adult learning in fields where no domain-specific learning system is believed to exist. Access Theories: Direct Access Theories of direct access argue that UG is still directly accessible to adult second language learners, in addition to syntactic property transfer from their L1. Evidence for this position stems from research observations that although child L1 and adult L2 grammars differ, adult L2 grammars do exhibit evidence of POS properties that cannot be linked to transfer from their native language or to learning. For example, adult L2 learners show knowledge of parameter settings other than those of their first language. In direct access theories, the differences between adults and children must subsequently be explained on the basis of something other than UG accessibility. Many propose that it is in fact the difference between the L1 initial state and the L2 initial state that accounts for the differences when comparing child and adult SLA learners. Advocates of this position also frequently tried to show that learners are stuck within the principles and parameter settings exemplified in their L1. Some experts have commented that theories of indirect access could also be characterized as direct access, since the learner is not restricted only to the UG principles and parameter settings of the L1 grammar, due to the resetting and restructuring that occurs with the learning of the L2. Some relevant theories that assume access to UG in adulthood and propose other factors as the cause of differences between L1 and L2 acquisition include: the Missing Surface Inflection Hypothesis, the Feature Reassembly Hypothesis, the Prosodic Transfer Hypothesis, and the Interface Hypothesis.
Access Theories: Indirect Access The indirect access viewpoint considers the possibility that access to a second language grammar is first through the first language, with the second language then causing a resetting and restructuring of the learner's understanding of grammar once they have been exposed to it. Partial Access Theories of partial access argue that L2 learners have partial, but not full, access to UG through their L1. Generative SLA in the Classroom: Teaching ESL Scholars in Generative SLA have suggested that their research is relevant to developing effective methods of teaching a second language in classroom settings, including bilingual, immersion, second dialect education and second language literacy programs. Practical GenSLA researchers seek to go beyond "passive" acquisition and utilize theories in SLA to teach L2s efficiently. The practical application of GenSLA is based on what is necessary or unnecessary to teach given UG access. For example, it is generally accepted that prepositional modifiers can be accessed in UG, and because of this it may not be efficient to teach them explicitly. However, other grammatical issues such as topic and focus structures are not innate, and therefore second language learners benefit from explicit teaching. Additionally, GenSLA research can be applied to topics such as processing, practice and orthography, and can be extended beyond mere production. Research on the practical, educational use of GenSLA theories has been carried out for L2s such as Spanish, English, German and French. Generative SLA in the Classroom: Aiding Populations with Special Language Learning Needs It has been suggested that GenSLA research could be used to aid populations with special language learning needs; for example, it might be used to develop language intervention programs using methods similar to those implemented in second language teaching to help children with Down syndrome or Alzheimer's patients. Insights from GenSLA could also help multilingual children by ensuring educators do not confuse problems of second language acquisition with learning disabilities, bilinguals undergoing primary language loss, or deaf and hearing children learning sign language as a first or second language. Applications: Word Order Acquisition: There have been debates regarding how one can apply the principles of Generative L2 Acquisition to individuals acquiring a second language with a different word order from their L1 (for example, individuals whose L1 is SOV and who are now learning an SVO language, or vice versa). Some researchers have hypothesized, on the basis of the full transfer/full access theory, that individuals will use L1 grammar and parameter settings initially during their acquisition of the L2, but will still have access to UG. This notion contains features of both the direct and indirect theories of UG access, each of which involves some form of access to UG. However, research has shown that not all individuals acquiring an L2 will produce transfers from their L1, as the transfer process depends on the structural components of the L1. Instead, some linguists have argued that the process of second language acquisition can be accounted for by general learning principles and does not in fact correspond to having access to UG. Therefore, this particular issue of different word-order acquisition can be used to call into question whether the direct access theory of UG is relevant to second language acquisition, or whether a no access theory is more plausible.
Criticism: There has been some criticism regarding Generative L2 Acquisition on the basis of methodology and other linguistic theories. Criticism: Methodological Issues There have been claims that there are several methodological issues in generative research. The subjects need to have a requisite level of the L2 to see whether a principle is operating in their interlanguage grammar. Furthermore, complex structures are often needed to test for interlanguage grammar, and the speakers need to be able to competently engage with those structures within their current L2 capacity. It is also difficult to rule out the influence of the L1 if the languages present similar principles to those in question. One of the most controversial methodological issues in generative second language acquisition concerns what L2 data are collected. There is a need to obtain information about competence rather than performance, and it is difficult to obtain samples which contain the complex structures necessary to observe UG-related parameters and principles. Elicited data are preferred, but still problematic given the skill level of the speaker, and are not considered naturally occurring speech. Criticism: The Minimal Trees Hypothesis The Minimal Trees Hypothesis (MTH) is a highly debated hypothesis concerned with the distinction between functional categories and lexical categories during language transfer. Based on a study of adult SLA learners of German, Korean and Turkish, this hypothesis asserts that only lexical categories transfer from the L1, and functional categories develop over time. This development has also been termed "organic grammar", in which functional categories develop from Verb Phrase (VP) → Inflection Phrase (IP) → Complementizer Phrase (CP). The phases have been termed the "Bare VP Stage", the "Underspecified VP Stage" and the "Agr-P Stage". The controversy surrounding the MTH has to do with methodological and theoretical problems which emerge in the hypothesis. With regard to the methodological problems, the MTH has issues pertaining to performance vs. competence in data collection. The theoretical problems which exist in the hypothesis relate to the role of input, the transfer of lexical categories, and the development of previous linguistic theories and research regarding L2 acquisition. The theoretical basis of the MTH has been contested by many researchers, calling into question the validity of the hypothesis. Vainikka & Young-Scholten themselves, the originators of the hypothesis, acknowledge that their theory is more "radical" than what is often seen in generative SLA academia. Despite its controversial nature, the MTH has been considered an extremely strong and valuable contribution to SLA research and generative grammar as a whole. Criticism: The Logical Problem of Acquisition Some researchers deny the existence of any domain-specific linguistic knowledge. They contest the existence of the logical problem of acquisition and the existence of UG, hypothesized to fill the alleged explanatory gap. If this is true, it would throw the generative approach to SLA into question.
Supporters of GenSLA argue, however, that in order to disprove the logical problem of acquisition, detractors would have to either show that there are no instances of poverty of stimulus properties or, where input alone is insufficient, explain the child's resulting competence by virtue of the operation of domain-general cognitive mechanisms, statistical learning or processing considerations. They subsequently point to the fact that this has not yet been attempted exhaustively and that no parsimonious alternatives have been offered to explain how poverty of stimulus properties are acquired. The logical problem of language acquisition is thought to prevail so long as there are any poverty of stimulus properties that cannot otherwise be accounted for. Beyond L2: Recent decades have witnessed an expansion of the L2 field to L3 acquisition. The main researchers and publishing venues have remained the same, while the number of models and hypotheses regarding multilingual language acquisition has soared. Nearly all of them are nested in generative linguistics. There are three main groups of models: partial transfer, wholesale transfer, and those rejecting transfer. The key question on which the models differ is what the source of linguistic transfer is, and what the role of previous languages is.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flywrench** Flywrench: Flywrench is an action video game developed and published by Messhof. The game puts the player in the role of 6802, a spacecraft floating through the Solar System in search of a mysterious access point. As 6802 passes the different planets (including Pluto) on its way towards the Sun, the player is tasked with maneuvering 6802 through a variety of levels, in which they, by pressing or holding one of two buttons, must change 6802's color to match the barriers blocking the way to the finish while simultaneously controlling 6802's movement. Development: The game was created by Mark Essen under the pseudonym Messhof. After three previous incarnations, one of which, also titled Flywrench, was released in 2007, Essen launched a Kickstarter campaign on September 20, 2009, to expand upon his 2007 game, seeking US$5,000. The campaign concluded on November 1, 2009, with a total of US$5,070 funded by 29 backers. Essen received his master's degree at the University of California in 2010, and continued to publish free games under the Messhof banner, partially in collaboration with Adult Swim Games, and, together with Kristy Norindr, incorporated Messhof LLC in 2013, leading up to the release of Nidhogg (2014). He did not communicate about the development or fate of Flywrench until releasing a new teaser to accompany his Independent Games Festival entry on October 22, 2014, followed by a full trailer on July 29, 2015. Release: Flywrench was released for Microsoft Windows and OS X through Steam on August 24, 2015, and a port for PlayStation 4 was released through PlayStation Network on February 14, 2017. The game received "generally favorable" reviews. Flywrench's soundtrack, comprising original tracks by Mark Redito, Baths, Kuh Lida, Danny Scrilla, Om Unit, Knife City, Dntel, Reso, Syndakit, Daedelus, Sweatson Klank, and Goodnight Cody, was released by Magical Properties on November 13, 2015. 6802, under the name "Flywrench", also became a playable character in Super Meat Boy (2010).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Whewellite** Whewellite: Whewellite is a mineral, hydrated calcium oxalate, formula CaC2O4·H2O. Because of its organic content it is thought to have an indirect biological origin; this hypothesis is supported by its presence in coal and sedimentary nodules. However, it has also been found in hydrothermal deposits where a biological source appears improbable. For this reason, it may be classed as a true mineral. Whewellite: Whewellite, or at least crystalline calcium oxalate, does also arise from biological sources. Small crystals or flakes of it are sometimes found on the surfaces of some cacti, and kidney stones frequently have the same composition. Whewellite was named after William Whewell (1794–1866), an English polymath, naturalist and scientist, professor of moral philosophy at Cambridge and inventor of the system of crystallographic indexing. Heat decomposition: Whewellite is used as a thermogravimetric analysis standard due to its well-known decomposition temperatures and products.
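For reference, the well-documented three-step decomposition that underpins this use of calcium oxalate monohydrate as a TGA standard can be sketched as below; the temperature ranges are approximate and depend on heating rate and atmosphere, so treat the numbers as indicative only.

```latex
\begin{align*}
\mathrm{CaC_2O_4\cdot H_2O} &\longrightarrow \mathrm{CaC_2O_4} + \mathrm{H_2O}
  && (\approx 100\text{--}200\,^\circ\mathrm{C},\ \text{dehydration})\\
\mathrm{CaC_2O_4} &\longrightarrow \mathrm{CaCO_3} + \mathrm{CO}
  && (\approx 400\text{--}500\,^\circ\mathrm{C})\\
\mathrm{CaCO_3} &\longrightarrow \mathrm{CaO} + \mathrm{CO_2}
  && (\approx 600\text{--}800\,^\circ\mathrm{C})
\end{align*}
```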
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bottom water** Bottom water: Bottom water is the lowermost water mass in a water body, lying near its bottom, with distinct physical, chemical, and ecological characteristics. Oceanography: Bottom water consists of cold, dense water near the ocean floor. This water is characterized by low salinity and low nutrient content. Generally, low salinity from seasonal ice melt and freshwater river output characterizes bottom water produced in the Antarctic. However, during colder months, the formation of sea ice is a crucial process that raises the salinity of bottom water through brine rejection. As saltwater freezes, salt is expelled from the ice into the surrounding water. The oxygen content of bottom water is high due to ocean circulation. In the Antarctic, salty and cold surface water sinks to lower depths due to its high density. As the surface water sinks, it carries oxygen from the surface with it and spends an enormous amount of time circulating across the seafloor of ocean basins. Oxygen-rich water moving throughout the bottom layer of the ocean is an important source for the respiration of benthic organisms. Bottom waters flow very slowly, driven mainly by slope topography and differences in temperature and salinity, especially compared to wind-driven surface ocean currents. Antarctic Bottom Water is the most dominant source of bottom water in southern parts of the Pacific Ocean, Indian Ocean, and North Atlantic Ocean. Antarctic Bottom Water sits underneath the North Atlantic Deep Water due to its colder temperature and higher density. Salinity can be used to trace the movement between the fresher Antarctic Bottom Water (roughly 34.7 psu) and the saltier North Atlantic Deep Water. Antarctic Bottom Water can be distinguished from other intermediate and deep water masses by its cold temperature and its low nutrient, high oxygen, and low salinity content. The bottom water of the Arctic Ocean is more isolated, due to the topography of the Arctic Ocean floor and the surrounding Arctic shelves. Deep Western Boundary Currents carry the Antarctic Bottom Water northward in the South Atlantic Ocean. The Antarctic Bottom Water shifts east when it reaches the equator, turning it into an eastern boundary current along the Mid-Atlantic Ridge. The movement of the Antarctic Bottom Water across isopycnals is limited by deep sills; sills are shallow seafloor regions that stop water from flowing across basins. Climate Change and Antarctic Bottom Water: Changes in the characteristics of Antarctic Bottom Water have been monitored in the Southern Ocean. The Antarctic Bottom Water's temperature has increased and the water continues to freshen. Since the water mass is heating up and getting fresher, its density is significantly decreasing. This is connected to global warming heating the atmosphere and the ocean, resulting in sea ice melt, sea level rise, and ocean acidification. Ventilation has also slowed down as a result of global warming. Antarctic Bottom Water has such high oxygen content that it contributes to the ventilation of the deep ocean by acting as a circulatory system. Long-term temperature increases have slowed the rate of ocean ventilation. As the atmosphere warms, the formation of sea ice in Antarctica decreases, thus decreasing the density of the surrounding water. The decreased density leads to a slower rate of convection, ultimately slowing down deep water formation processes. Essential processes like upwelling begin to decline.
Without upwelling, cold, nutrient-rich water cannot be recycled to the surface to create areas of high productivity. Estuaries: Bottom water near the estuary of a river discharging into a saline body exhibits a peculiar transport of mud. Due to the intermixing of fresh and saline water at the estuary, a horizontal isohaline gradient is created, with lower salinity levels upstream, which generates an upstream flow of the bottom water. Mud particles carried by the river begin settling as the current and turbulence decrease. When the particles nearly reach the floor, they are carried back towards the head of the estuary and accumulate at the point where the salinity of the surface and bottom waters becomes comparable and the bottom flow decreases. This process results in a distinct pile of mud at this point. Lake hydrography: The bottom water of lakes may feature lower levels of oxygen, to the point of completely vanished dissolved oxygen (i.e., becoming anaerobic), and higher levels of chlorinity and organically induced acidity. In many lakes, especially in zones of continental climate, summer heating and winter cooling create strong vertical temperature gradients which oppose water intermixing, resulting in periods of summer and winter thermal lake stratification. These periods are separated by bottom water overturning, which happens in autumn (autumn overturn) and in spring (spring overturn) due to the equalizing of temperature gradients and the resulting easier intermixing by wind and other sources of turbulence.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ramsey interferometry** Ramsey interferometry: Ramsey interferometry, also known as the separated oscillating fields method, is a form of particle interferometry that uses the phenomenon of magnetic resonance to measure transition frequencies of particles. It was developed in 1949 by Norman Ramsey, who built upon the ideas of his mentor, Isidor Isaac Rabi, who had initially developed a technique for measuring particle transition frequencies. Ramsey's method is used today in atomic clocks and in the SI definition of the second. Most precision atomic measurements, such as modern atom interferometers and quantum logic gates, have a Ramsey-type configuration. A more modern method, developed by French physicist Christian Bordé, uses a Ramsey configuration and is known as Ramsey–Bordé interferometry. Bordé's main idea was to use atomic recoil to create a beam splitter of different geometries for an atom-wave. The Ramsey–Bordé interferometer specifically uses two pairs of counter-propagating interaction waves; another method, named the "photon echo", uses two co-propagating pairs of interaction waves. Introduction: A main goal of precision spectroscopy of a two-level atom is to measure the absorption frequency ω0 between the ground state |↓⟩ and excited state |↑⟩ of the atom. One way to accomplish this measurement is to apply an external oscillating electromagnetic field at frequency ω and then find the difference Δ (known as the detuning) between ω and ω0 (Δ = ω − ω0) by measuring the probability of transferring |↓⟩ to |↑⟩. This probability is maximized when Δ = 0, i.e. when the driving field is on resonance with the transition frequency of the atom. Looking at this transition probability as a function of the detuning, P(Δ), the narrower the peak around Δ = 0, the higher the precision. If the peak were very broad about Δ = 0, it would be difficult to distinguish precisely where Δ = 0 is located, because many values of Δ would have nearly the same probability. Physical principles: The Rabi method A simplified version of the Rabi method consists of a beam of atoms, all having the same speed v and the same direction, sent through one interaction zone of length L. The atoms are two-level atoms with a transition energy of ℏω0 (defined by applying a field B‖ along an excitation direction ẑ, so that ω0 = γ|B‖|, the Larmor frequency) and an interaction time of τ = L/v in the interaction zone. In the interaction zone, a monochromatic oscillating magnetic field B⊥ cos(ωt) is applied perpendicular to the excitation direction, and this leads to Rabi oscillations between |↓⟩ and |↑⟩ at a frequency of Ω⊥ = γ|B⊥|. The Hamiltonian in the rotating frame (including the rotating wave approximation) is H = −(ℏΔ/2)σ̂z + (ℏΩ⊥/2)σ̂x. Physical principles: The probability of transition from |↓⟩ to |↑⟩ found from this Hamiltonian is P(Δ) = (Ω⊥²/(Ω⊥² + Δ²)) sin²((L/2v)√(Ω⊥² + Δ²)). Physical principles: This probability is at its maximum when Ω⊥τ = π. The line width δ of P(Δ, Ω⊥) as a function of Δ/Ω⊥ determines the precision of the measurement. Because δ ∼ Ω⊥ ∼ π/τ ∼ πv/L, increasing τ (or L) and correspondingly decreasing Ω⊥ so that their product remains π increases the precision of the measurement; i.e. the peak of the graph becomes narrower.
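The scaling δ ∼ π/τ can be made concrete numerically. The following is a minimal sketch (Python with NumPy; the interaction times and detuning range are arbitrary illustrative values, not taken from the article) that holds the pulse area Ω⊥τ = π fixed while lengthening τ and estimates the full width at half maximum of the resulting line:

```python
import numpy as np

def rabi_probability(delta, omega_perp, tau):
    """Transition probability for a single Rabi interaction zone of
    duration tau = L/v, driven at detuning delta with Rabi frequency
    omega_perp (standard two-level Rabi formula)."""
    omega_eff = np.sqrt(omega_perp**2 + delta**2)  # generalized Rabi frequency
    return (omega_perp**2 / omega_eff**2) * np.sin(omega_eff * tau / 2) ** 2

# Keep omega_perp * tau = pi (a "pi pulse") while increasing tau:
# the resonance peak around delta = 0 narrows, i.e. precision improves.
detuning = np.linspace(-20.0, 20.0, 2001)   # arbitrary angular-frequency units
for tau in (0.5, 1.0, 2.0):
    omega_perp = np.pi / tau
    line = rabi_probability(detuning, omega_perp, tau)
    # full width at half maximum, estimated from the sampled curve
    above_half = detuning[line >= 0.5]
    print(f"tau = {tau:4.1f}: FWHM ~ {above_half.max() - above_half.min():.2f}")
```

Doubling τ roughly halves the printed width, matching δ ∼ π/τ.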
Physical principles: In reality, however, inhomogeneities, such as the atoms having a distribution of velocities or the field B⊥ being inhomogeneous, cause the line shape to broaden and lead to decreased precision. Having a distribution of velocities means having a distribution of interaction times, and therefore many different angles through which the state vectors flip on the Bloch sphere. There would be an optimal length in the Rabi setup that gives the greatest precision, but it is not possible to increase the length L ad infinitum and expect ever-increasing precision, as is the case in the perfect, simple Rabi model. Physical principles: The Ramsey method Ramsey improved upon Rabi's method by splitting the one interaction zone into two very short interaction zones, each applying a π/2 pulse. The two interaction zones are separated by a much longer non-interaction zone. By making the two interaction zones very short, the atoms spend much less time in the presence of the external electromagnetic fields than they would in the Rabi model. This is advantageous because the longer the atoms are in the interaction zone, the more the inhomogeneities (such as an inhomogeneous field) reduce the precision in determining Δ. The non-interaction zone in Ramsey's model can be made much longer than the single interaction zone in Rabi's method because no perpendicular field B⊥ is applied in the non-interaction zone (although B‖ is still present). Physical principles: The primary improvement of the Ramsey method is that the main resonance peak frequency represents an average over the frequencies (and inhomogeneities) in the non-interaction region between the cavities, whereas with the Rabi method the inhomogeneities in the interaction region lead to line broadening. An additional advantage of the Ramsey method for microwave or optical transitions is that the non-interaction region can be made much longer than an interaction region in the Rabi method, resulting in narrower linewidths. The Hamiltonian in the rotating frame for the two interaction zones is the same as for the Rabi method, and in the non-interaction zone the Hamiltonian contains only the σ̂z term. First, a π/2 pulse is applied to atoms in the ground state; the atoms then reach the non-interaction zone, where the spins precess about the z-axis for a time T. Another π/2 pulse is then applied and the probability is measured; in practice the experiment must be repeated many times, because a single measurement is not enough to determine a probability. (See the Bloch sphere description below.) Applying this evolution to atoms of one velocity, the probability of finding an atom in the excited state as a function of the detuning and the time of flight T in the non-interaction zone is (taking |Δ| ≪ Ω⊥ here) P(Δ) = cos²(ΔL/2v) = cos²(ΔT/2). Physical principles: This probability function describes the well-known Ramsey fringes. Physical principles: If there is a distribution of velocities and a "hard pulse" (|Δ| ≪ Ω⊥) is applied in the interaction zones, so that all of the atomic spins are rotated by π/2 on the Bloch sphere regardless of whether they were all driven at exactly the same resonance frequency, the Ramsey fringes will look very similar to those discussed above. If a hard pulse is not applied, the variation in interaction times must be taken into account.
The result is Ramsey fringes inside an envelope having the shape of the Rabi-method probability for atoms of one velocity. The line width δ of the fringes in this case is what determines the precision with which Δ can be determined, and δ ∼ 1/T ∼ v/L. Physical principles: By increasing the time of flight T in the non-interaction zone, or equivalently increasing the length L of the non-interaction zone, the line width can be substantially improved, by a factor of 10 or more, over that of other methods. Because Ramsey's model allows for a longer observation time, one can determine ω0 more precisely. This is a statement of the time–energy uncertainty principle: the larger the uncertainty in the time domain, the smaller the uncertainty in the energy domain, or equivalently the frequency domain. Thought of another way, if two waves of almost exactly the same frequency are superimposed, it is impossible to distinguish them if the resolution of our eyes is larger than the difference between the two waves; only after a long period of time does the difference between the two waves grow large enough to differentiate them. Early Ramsey interferometers used two interaction zones separated in space, but it is also possible to use two pulses separated in time, as long as the pulses are coherent. In the case of time-separated pulses, the longer the time between pulses, the more precise the measurement. Applications of the Ramsey interferometer: Atomic clocks and the SI definition of the second An atomic clock is fundamentally an oscillator whose frequency ω is matched to that of an atomic transition of a two-level atom, ω0. The oscillator is the parallel external electromagnetic field in the non-interaction zone of the Ramsey–Bordé interferometer. By measuring the rate of transition from the excited to the ground state, one can tune the oscillator so that ω = ω0 by finding the frequency that yields the maximum transition rate. Once the oscillator is tuned, the number of its oscillations can be counted electronically to give a certain time interval (e.g. the SI second, which is 9,192,631,770 periods of the radiation from the cesium-133 ground-state hyperfine transition). Applications of the Ramsey interferometer: Experiments of Serge Haroche Serge Haroche won the 2012 Nobel Prize in Physics (with David J. Wineland) for work involving cavity quantum electrodynamics (QED), in which the research group used microwave-frequency photons to verify the quantum description of electromagnetic fields. Essential to their experiments was the Ramsey interferometer, which they used to demonstrate the transfer of quantum coherence from one atom to another through interaction with a quantum mode in a cavity. The setup is similar to a regular Ramsey interferometer, with the key differences being that there is a quantum cavity in the non-interaction zone and that the field of the second interaction zone is phase-shifted by some constant relative to the first. Applications of the Ramsey interferometer: If one atom is sent into the setup in its ground state |↓⟩ and passed through the first interaction zone, its state becomes a superposition of ground and excited states, (|↓⟩ + |↑⟩)/√2, just as it would in a regular Ramsey interferometer. It then passes through the quantum cavity, which initially contains only the vacuum, and is then measured to be |↓⟩ or |↑⟩.
A second atom initially in |↓⟩ is then sent through the cavity and then through the phase-shifted second Ramsey interaction zone. If the first atom is measured to be in |↓⟩, then the probability that the second atom is in |↑⟩ depends on the amount of time between sending in the first and second atoms. The fundamental reason is that if the first atom is measured to be in |↓⟩, then there is a single mode of the electromagnetic field within the cavity that will subsequently affect the measurement outcome of the second atom. The Ramsey–Bordé interferometer: Early interpretations of atom interferometers, including those of Ramsey, used a classical description of the motion of the atoms, but Bordé introduced an interpretation that used a quantum description of the motion of the atoms. Strictly speaking, the Ramsey interferometer is not an interferometer in real space, because the fringe patterns develop due to changes of the pseudo-spin of the atom in the internal atomic space. However, an argument can be made that the Ramsey interferometer is an interferometer in real space by treating the atomic motion quantum mechanically: the fringes can then be thought of as the result of the momentum kick imparted to the atoms by the detuning Δ. The four traveling-wave interaction geometry The problem that Bordé et al. were trying to solve in 1984 was the averaging-out of the Ramsey fringes of atoms whose transition frequencies are in the optical range. In that case, first-order Doppler shifts cause the Ramsey fringes to vanish because of the introduced spread in frequencies. Their solution was to have four Ramsey interaction zones instead of two, each zone consisting of a traveling wave but still applying a π/2 pulse. The first two waves travel in the same direction, and the second two travel in the direction opposite to that of the first and second. Two populations result from the interaction of the atoms first with the first two zones and subsequently with the second two. The first population consists of atoms whose Doppler-induced de-phasing has cancelled, resulting in the familiar Ramsey fringes. The second consists of atoms whose Doppler-induced de-phasing has doubled and whose Ramsey fringes have completely disappeared (this is known as the "backward stimulated photon echo", and its signal goes to zero after integrating over all velocities). The Ramsey–Bordé interferometer: The interaction geometry of two pairs of counter-propagating waves that Bordé et al. introduced allows improved resolution in the spectroscopy of frequencies in the optical range, such as those of Ca and I2. The Ramsey–Bordé interferometer: The interferometer Specifically, however, the Ramsey–Bordé interferometer is an atom interferometer that uses this four-traveling-wave geometry and the phenomenon of atomic recoil. In Bordé's notation, |a⟩ is the ground state and |b⟩ is the excited state. When an atom enters any of the four interaction zones, its wavefunction is split into a superposition of two states, each described by a specific energy and a specific momentum: |α, mα⟩, where α is either a or b. The quantum number mα is the number of light momentum quanta ℏ|k| that have been exchanged relative to the initial momentum, where k is the wavevector of the laser. This superposition is due to the energy and momentum exchanged between the laser and the atom in the interaction zones during the absorption/emission processes.
Because there is initially one atom-wave, after the atom has passed through three zones it is in a superposition of eight different states before reaching the final interaction zone. The Ramsey–Bordé interferometer: Looking at the probability of transition to |b⟩ after the atom has passed through the fourth interaction zone, one finds a dependence on the detuning in the form of Ramsey fringes, now arising from the difference between two quantum-mechanical paths. After integrating over all velocities, only two closed-circuit quantum-mechanical paths do not integrate to zero: the |a, 0⟩ and |b, –1⟩ path and the |a, 2⟩ and |b, 1⟩ path, the two paths that lead to intersections of the diagram at the fourth interaction zone. The atom-wave interferometer formed by either of these two paths produces a phase difference that depends on both internal and external parameters, i.e. it depends on the physical distances by which the interaction zones are separated and on the internal state of the atom, as well as on externally applied fields. Another way to think about these interferometers in the traditional sense is that for each path there are two arms, each denoted by the atomic state. The Ramsey–Bordé interferometer: If an external field is applied to either rotate or accelerate the atoms, there will be a phase shift due to the induced de Broglie phase in each arm of the interferometer, and this translates into a shift of the Ramsey fringes. In other words, the external field changes the momentum states, which leads to a detectable shift in the fringe pattern. As an example, apply the following Hamiltonian of an external field that rotates the atoms in the interferometer: Ĥ_R = −Ω · (r̂ × p̂). The Ramsey–Bordé interferometer: To first order in Ω, this Hamiltonian leads to the time evolution operator exp((i/ℏ) ∫ dt′ [Ω × r̂(t′)] · [p0 + mαℏk]). The Ramsey–Bordé interferometer: If Ω is perpendicular to r̂(t′), then the round-trip phase factor for one oscillation is given by exp(2ikΩd²/v), where d is the length of the entire apparatus from the first interaction zone to the final interaction zone. This yields a probability proportional to cos[(Δ + 2πΩd/λ + φ) · 2d/v], where λ is the wavelength of the atomic two-level transition. This probability represents a shift of the resonance from ω0 by δν = Ωd/λ. The Ramsey–Bordé interferometer: For a calcium atom on the surface of the Earth, which rotates with Ω = π/(12 hours), using d = 21 cm and looking at the 657.3 nm transition, the shift in the fringes would be 12 Hz, which is a measurable effect. A similar effect can be calculated for the shift in the Ramsey fringes caused by the acceleration of gravity. The shifts in the fringes reverse direction if the directions of the lasers in the interaction zones are reversed, and the shift cancels if standing waves are used. The Ramsey–Bordé interferometer thus provides the potential for improved frequency measurements in the presence of external fields or rotations.
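As a numerical companion to the two-zone lineshape discussed under "Physical principles" above, the sketch below (Python with NumPy; all parameter values are illustrative assumptions, not from the article) evaluates the ideal hard-pulse fringe pattern cos²(ΔT/2) and compares the fringe scale with the single-zone Rabi envelope scale:

```python
import numpy as np

# Minimal sketch of ideal Ramsey fringes for hard pi/2 pulses and a
# single atomic velocity: P(Delta) = cos^2(Delta * T / 2), with
# T = L / v the free-evolution time. Parameters are illustrative.

def ramsey_fringes(delta, T):
    return np.cos(delta * T / 2) ** 2

tau = 1.0          # duration of one short interaction zone (sets the envelope)
T = 20.0 * tau     # much longer non-interaction time (sets the fringe spacing)
delta = np.linspace(-3 * np.pi / tau, 3 * np.pi / tau, 4001)
fringes = ramsey_fringes(delta, T)

# With a velocity spread and soft pulses, these fringes sit inside an
# envelope shaped like the single-zone Rabi line of width ~ pi / tau;
# the central fringe is narrower than that envelope by roughly tau / T.
fringe_width = np.pi / T        # half-period scale of the central fringe
envelope_width = np.pi / tau    # Rabi-zone width scale
print(f"fringe/envelope width ratio ~ {fringe_width / envelope_width:.2f} (= tau/T)")
```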
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Meckel's cartilage** Meckel's cartilage: In humans, the cartilaginous bar of the mandibular arch is formed by what are known as Meckel's cartilages (right and left), also known as Meckelian cartilages; above this the incus and malleus are developed. Meckel's cartilage arises from the first pharyngeal arch. The dorsal end of each cartilage is connected with the ear-capsule and is ossified to form the malleus; the ventral ends meet each other in the region of the symphysis menti, and are usually regarded as undergoing ossification to form that portion of the mandible which contains the incisor teeth. The intervening part of the cartilage disappears; the portion immediately adjacent to the malleus is replaced by fibrous membrane, which constitutes the sphenomandibular ligament, while from the connective tissue covering the remainder of the cartilage the greater part of the mandible is ossified. Johann Friedrich Meckel the Younger discovered this cartilage in 1820. Evolution: Meckel's cartilage is a piece of cartilage from which the mandibles (lower jaws) of vertebrates evolved. Originally it was the lower of two cartilages which supported the first branchial arch in early fish. It then grew longer and stronger, and acquired muscles capable of closing the developing jaw. In early fish and in chondrichthyans (cartilaginous fish such as sharks), the Meckelian cartilage continued to be the main component of the lower jaw. But in the adult forms of osteichthyans (bony fish) and their descendants (amphibians, reptiles, birds, and mammals), the cartilage is covered in bone – although in their embryos the jaw initially develops as the Meckelian cartilage. In all tetrapods the cartilage partially ossifies (changes to bone) at the rear end of the jaw and becomes the articular bone, which forms part of the jaw joint in all tetrapods except mammals. In some extinct mammal groups like eutriconodonts, the Meckel's cartilage still connected otherwise entirely modern ear bones to the jaw.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Parasite-stress theory** Parasite-stress theory: Parasite-stress theory, developed by researchers Corey Fincher and Randy Thornhill, is a theory of human evolution proposing that the parasites and diseases encountered by a species shape the development of the species' values and qualities. Differences in how parasites and diseases stress people's development lead to differences in their biological mate value and mate preferences, as well as differences across cultures. Parasites causing diseases pose potential ecological hazards and, subsequently, selection pressures can alter the psychological and social behaviours of humans, as well as influence their immune systems. Theories of parasite-mediated mate choice: Several hypotheses have attempted to explain how parasite load influences female mate choice, as certain traits are thought to be costly and the expression of such traits may be indicative of genetic quality. Theories of parasite-mediated mate choice: Hamilton–Zuk hypothesis According to the Hamilton–Zuk hypothesis, female mate choice is based on the extent to which male secondary sexual characteristics are expressed, as these are thought to be indicative of a heritable resistance to pathogens. A meta-analysis reviewed studies exploring the magnitude of the relationship between the expression of secondary sexual characteristics and parasite intensity, as well as the level of host immune functioning. Consistent with the hypothesis proposed by Hamilton and Zuk, the meta-analysis revealed that males with the fewest parasites and/or the strongest immune systems typically had the most extravagant secondary sexual characteristics. With regard to parasite-stress theory, these findings would be interpreted as meaning that men who have encountered more parasites – or are naturally less capable of dealing with parasites – are also less desirable mates to females, due to a lower genetic quality for the potential offspring. Theories of parasite-mediated mate choice: The Zahavi handicap principle The Zahavi handicap principle, originally proposed by Zahavi in 1975, suggests that males who possess secondary sexual characteristics which impose a handicap are more attractive to females. These sexual ornaments are sexually selected because they make their bearer appear stronger and better adapted than other males in the environment. Such characteristics are indicators of good genes and heritable viability precisely because they are costly to produce and maintain; the stronger the individual, the better it can bear this cost. These kinds of characteristics are a form of communication within species, as they are honest signals (signals about a mate's quality which cannot be faked). As a weak individual would not be able to survive with this particular characteristic, it signals to potential mates that the bearer is stronger than its competitors and has a high mate value. Examples of such traits include the peacock's tail, which is very conspicuous and hence attracts more attention from predators, as well as requiring more energy to maintain. Another example is the gazelle's stotting behaviour, whereby the gazelle jumps up and down when it spots a predator, in order to indicate its physical fitness.
Theories of parasite-mediated mate choice: Immunocompetence handicap hypothesis This hypothesis takes Zahavi's principle further in suggesting that testosterone is responsible for the production of male secondary sexual traits while also suppressing the immune system. It therefore proposes that these traits are honest signals of mate quality, because only males with 'good genes' should be able to fully express them without being vulnerable to parasite attack. Males will therefore demonstrate their high genetic quality by developing more attractive honest signals at the expense of their immune system's strength: these honest signals require testosterone, which simultaneously suppresses the immune system. A meta-analysis revealed that evidence for a direct effect of testosterone on the expression of sexual traits and the suppression of immunocompetence was weak. It was found, however, that increased testosterone influenced parasite loads, indicating an indirect role of the hormone in immune function. Interactions with developmental instability: Developmental instability is the inability of an organism to produce its optimal phenotype, due to genetic limitations and environmental stresses (such as parasite load). Fluctuating asymmetry Fluctuating asymmetry is the extent to which an organism deviates from perfect body symmetry. Asymmetry, an indicator of development, is exhibited by all organisms and is thus considered by scientists to be a reliable measure of developmental instability. Research in a Dominican village, which measured the prevalence of protozoan and worm parasites in over 300 children, found a positive correlation between gut parasites and fluctuating asymmetry. This finding is indicative of how parasites negatively impact people's development and act as environmental stress factors. Interactions with developmental instability: A literature review summarising more than 100 different studies in the field found that, among other variables, immunocompetence (the ability of an organism to produce a normal immune response to an antigen) had a significant relationship with fluctuating asymmetry. In other words, individuals who had a better ability to defend themselves against threats, such as parasites, were also lower in fluctuating asymmetry. Interactions with developmental instability: Waist-hip ratio Waist-hip ratio is the ratio of the circumference of the waist to the circumference of the hips; it is calculated by dividing the waist circumference by the hip circumference. A woman's waist-hip ratio is an indicator of her age, health and fertility, as well as being a good predictor of other people's judgements of her attractiveness, with a lower waist-hip ratio being optimal. All of the above are related to mate choice: a lower waist-hip ratio indicates a younger, healthier, more fertile and more subjectively attractive woman, all of which are desirable qualities in a mate. Interactions with developmental instability: A higher waist-hip ratio has been linked with both mobility disability and cardiovascular disease. Within parasite-stress theory itself, women with higher waist-hip ratios also had a higher incidence of toxoplasmosis, another instance in which parasitism contributes to developmental instability. Mate choice Mate choosers prefer mates who are lower in developmental instability, meaning that they choose those who display lower fluctuating asymmetry.
In barn swallows, the length of the male's tail is used as a signal of mate quality: males with longer tails are preferred to those with shorter tails. Research has found that, in a population of barn swallows infested by the parasite Ornithonyssus bursa, male barn swallows with fewer mites also had longer tails. Interactions with developmental instability: Parasite-mediated domestication According to the hypothesis proposed by Skok, parasites (specifically endoparasites: helminths and protozoa) could play an important mediating role in the process of domestication, with a 'parasite effect' primarily involved in the emergence of the domesticated state (proto-domestication). The hypothesis states that parasites indirectly influence all of the main processes that otherwise underlie the domestication syndrome (abnormalities in the functioning of the neuroendocrine system, a developmental disruption of neural crest cell input to the affected phenotypic traits, etc.). The hypothesis predicts that the frequency of domestication syndrome traits such as tameness, depigmentation and mottling, floppy ears, a short and curled tail, reduced size of the adrenal glands, etc., in the (wild) population increases with decreasing genetic resistance to parasites and/or with increasing parasite load. The hypothesis further suggests that the features of the domestication syndrome may be genetically linked to genes related to resistance/tolerance to parasites, to the role of miRNA in the process of epigenetic inheritance, or to the transgenerational inheritance of stress pathology. Variations across cultures: When discussing cross-cultural differences between societies, scientists more often than not make a distinction between individualism and collectivism; consequently, it is important to provide an understanding of the variations exhibited between these two kinds of culture. Collectivist Research has suggested that collectivism exists to defend against infectious diseases, and that cultures with a higher rate of infections will therefore be more likely to become collectivist. This is based on a number of observations. Variations across cultures: Firstly, collectivists place a lot of emphasis on their in-group, caring for one another and hence protecting each other from the negative effects of contagion. This is likely because one's immune system works to defend the body from local parasites, which still leaves the risk that unfamiliar infections will result in illness, as the immune system has not been able to evolve in response to these novel parasites. Hence, ensuring that those in the in-group are not affected by a novel disease reduces the risk of encountering a novel parasite from an exposed person with whom an individual remains in close proximity. Variations across cultures: Secondly, collectivist cultures are untrusting of those outside of their in-group, which may serve as a protective behaviour against interactions with groups that may harbour novel diseases. In a similar vein to the explanation of one's protective attitude toward in-group members, one's immune system is well adapted to local parasites and will be unable to effectively protect against unfamiliar pathogens. Therefore, avoidance of those outside of one's inner circle helps prevent exposure to novel and dangerous pathogens that the immune system is unable to defend against.
Variations across cultures: Thirdly, it has been observed that collectivist groups exhibit strongly negative attitudes when an individual goes against their social norms. A relevant example is deviating from the way that food is prepared, which could result in a higher possibility of exposure to new and threatening pathogens. Hence, this strong social norm is effectively in place to prevent group members from being negligent and becoming ill with a novel parasite – which could then pass on to other members of the group. Variations across cultures: Individualist Individualist societies, however, are very different from collectivist ones in that they promote looking out for oneself rather than worrying about the needs of the group. This is partly because these cultures are predominantly in geographical locations that are under far less danger from parasite invasions. Unlike collectivists, individualists make much less of a distinction between in-groups and out-groups. A clear distinction from collectivism is the active encouragement individualist cultures give to individuals straying from current social norms. Criticism Several scientists have criticized the theory that pathogen stress can explain differences in collectivism versus individualism, suggesting that the observed correlations were spurious. Variations across cultures: Anthropologist Daniel Hruschka and human biologist Joseph Henrich have proposed an alternative explanation of the observed cultural differences. In colonial times, European colonizers established efficient social institutions in countries with low mortality. In places where mortality was high due to infectious diseases, they set up extractive systems with less settling of Europeans. The more efficient government institutions inherited from colonial times in low-mortality countries can explain the observed differences in cultural values. Variations across cultures: Parasite influence on food preference across cultures This difference in culture due to pathogen avoidance has also been seen in the contrast of food preferences between cultures. Research investigated the possibility that individuals have a preference for spices in their cooking to defend against food-borne human parasites. This was tested by measuring the types and numbers of spices used in recipes across various regions of the world; it was found that temperature was a good predictor of the use of anti-pathogen spices. This finding makes sense when one considers that warm climates are a breeding ground for parasites. Similarly, it has also been found that there is a relationship between countries that have a preference for utilizing spices in their cooking and parasite stress.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sufentanil** Sufentanil: Sufentanil, sold under the brand names Dsuvia and Sufenta, is a synthetic opioid analgesic drug approximately 5 to 10 times as potent as its parent drug, fentanyl, and 500 times as potent as morphine. Structurally, sufentanil differs from fentanyl through the addition of a methoxymethyl group on the piperidine ring (which increases potency but is believed to reduce duration of action) and the replacement of the phenyl ring by thiophene. Sufentanil was first synthesized at Janssen Pharmaceutica in 1974. Sufentanil is marketed for use by specialist centers under different trade names, such as Sufenta and Sufentil. Sufentanil with and without lidocaine or mepivacaine is available as a transdermal patch similar to Duragesic in Europe under trade names such as Chronogesic. It is available as a sublingual tablet under the trade name Dsuvia. Medical uses: The main use of this medication is in operating suites and critical care, where pain relief is required for a short period of time. It also offers properties of sedation, and this makes it a good analgesic component of an anesthetic regimen during an operation. Because of its extremely high potency, it is often used in surgery and post-operative pain management for patients who are heavily opioid dependent/opioid tolerant because of long-term opiate use for chronic pain or illicit opiate use. Currently sufentanil is the most potent opioid painkiller available for use in humans. Although more potent narcotic pain medications do exist, all medications stronger than sufentanil are approved for veterinary use only. It is also used in surgery and post-operative pain control in patients who are taking high-dose buprenorphine for chronic pain, because it is the only opioid with a potency and binding affinity strong enough to displace buprenorphine from the opioid receptors in the central nervous system and provide analgesia. In 2018, the Food and Drug Administration (FDA) approved Dsuvia, a sublingual tablet form of the drug, developed in a collaboration between AcelRx Pharmaceuticals and the United States Department of Defense for use in battlefield settings where intravenous (IV) treatments may not be readily available. The decision to approve this new potent synthetic opioid came under criticism from politicians and from the chair of the FDA advisory committee, who fear that the tablets will be easily diverted to the illegal drug market. Side effects: It is essential for the administering medical professional to be trained in airway management, with airway equipment readily available, because the drug causes significant respiratory depression and may cause respiratory arrest if given too rapidly or in too high a dose. Other opioid side effects such as heart rhythm irregularity, blood pressure changes and nausea/vomiting can also be present in patients given this drug and should be dealt with accordingly. Side effects: Sufentanil has been associated with extremely rare instances of life-threatening anaphylaxis. Overdose: Management Because sufentanil is very potent, practitioners must be prepared to reverse the effects of the drug should the patient exhibit symptoms of overdose such as respiratory depression or respiratory arrest. As for all other opioid-based medications, naloxone (trade name Narcan) is the definitive antidote for overdose. Depending on the amount administered, it can reverse the respiratory depression and, if enough is administered, completely reverse the effects of sufentanil.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rhetorical structure theory** Rhetorical structure theory: Rhetorical structure theory (RST) is a theory of text organization that describes relations that hold between parts of a text. It was originally developed by William Mann, Sandra Thompson, Christian M.I.M. Matthiessen and others at the University of Southern California's Information Sciences Institute (ISI) and defined in a 1988 paper. The theory was developed as part of studies of computer-based text generation. Natural language researchers later began using RST in text summarization and other applications. It explains coherence by postulating a hierarchical, connected structure of texts. In 2000, Daniel Marcu, also of ISI, demonstrated that practical discourse parsing and text summarization could also be achieved using RST. Rhetorical relations: Rhetorical relations, or coherence relations, or discourse relations, are paratactic (coordinate) or hypotactic (subordinate) relations that hold across two or more text spans. It is widely accepted that coherence arises through text relations of this kind. RST's rhetorical relations provide a systematic way for an analyst to analyse a text. An analysis is usually built by reading the text and constructing a tree using the relations. The following example is a title and summary appearing at the top of an article in Scientific American magazine (Ramachandran and Anstis, 1986). The original text, broken into numbered units, is: [Title:] The Perception of Apparent Motion [Abstract:] When the motion of an intermittently seen object is ambiguous the visual system resolves confusion by applying some tricks that reflect a built-in knowledge of properties of the physical world. In the figure, the numbers 1, 2, 3, 4 show the corresponding units as explained above. Rhetorical relations: The fourth unit and the third unit form a "Means" relation. The third unit is the essential part of this relation, so it is called the nucleus of the relation, and the fourth unit is called the satellite of the relation. Similarly, the second unit forms a "Condition" relation with the span made up of the third and fourth units. All units are also spans, and spans may be composed of more than one unit. Nuclearity in discourse: RST establishes two different types of units. Nuclei are considered the most important parts of the text, whereas satellites contribute to the nuclei and are secondary. The nucleus contains the basic information and the satellite contains additional information about the nucleus. The satellite is often incomprehensible without the nucleus, whereas a text from which the satellites have been deleted can still be understood to a certain extent. Hierarchy in the analysis: RST relations are applied recursively in a text, until all units in that text are constituents in an RST relation. The result of such analyses is that RST structures are typically represented as trees, with one top-level relation that encompasses other relations at lower levels. Why RST?: From a linguistic point of view, RST proposes a different view of text organization than most linguistic theories, pointing to a tight connection between relations and coherence in text. From a computational point of view, it provides a characterization of text relations that has been implemented in different systems and for applications such as text generation and summarization. In design rationale: Computer scientists Ana Cristina Bicharra Garcia and Clarisse Sieckenius de Souza have used RST as the basis of a design rationale system called ADD+.
In ADD+, RST is used as the basis for the rhetorical organization of a knowledge base, in a way comparable to other knowledge representation systems such as the issue-based information system (IBIS). Similarly, RST has been used in representation schemes for argumentation.
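To make the nucleus/satellite tree structure concrete, here is a minimal sketch in Python. The Span class, its method names, and the exact segmentation of the magazine example are illustrative assumptions for this sketch, not constructs defined by RST or by any published RST tool.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Span:
    """A text span: either a single elementary unit or a relation node
    grouping a nucleus with its satellite(s)."""
    text: Optional[str] = None          # set for elementary units
    relation: Optional[str] = None      # set for relation nodes
    nucleus: Optional["Span"] = None
    satellites: List["Span"] = field(default_factory=list)

    def units(self) -> List[str]:
        """Flatten the tree back into its elementary units."""
        if self.text is not None:
            return [self.text]
        out = self.nucleus.units()
        for s in self.satellites:
            out += s.units()
        return out

# Units 2-4 of the magazine example from the text: unit 3 is the
# nucleus of a "Means" relation with unit 4 as satellite, and that
# whole span is the nucleus of a "Condition" relation whose satellite
# is unit 2. The segmentation shown here is illustrative.
u2 = Span(text="When the motion of an intermittently seen object is ambiguous")
u3 = Span(text="the visual system resolves confusion by applying some tricks")
u4 = Span(text="that reflect a built-in knowledge of properties of the physical world")

means = Span(relation="Means", nucleus=u3, satellites=[u4])
condition = Span(relation="Condition", nucleus=means, satellites=[u2])
print(condition.relation, "->", len(condition.units()), "units")
```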
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SEMA4A** SEMA4A: Semaphorin-4A is a protein that in humans is encoded by the SEMA4A gene. Function: SEMA4A is a member of the semaphorin family of soluble and transmembrane proteins. Semaphorins are involved in guidance of axonal migration during neuronal development and in immune responses. [supplied by OMIM] Clinical significance: A germline variant in SEMA4A (V78M) has been demonstrated to confer risk for colorectal cancer type X. Recently, SEMA4A has also been identified as a novel therapeutic target in multiple myeloma.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Journal of Food Biochemistry** Journal of Food Biochemistry: The Journal of Food Biochemistry is a peer-reviewed scientific journal that covers research on the effects of handling, storage, and processing on the biochemical aspects of food. It was established in 1977 and is published by Wiley-Blackwell. The journal moved to online-only publication in 2011.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Macintosh startup** Macintosh startup: The classic Macintosh startup sequence includes hardware tests which may trigger the startup chime, Happy Mac, Sad Mac, and Chimes of Death. On Macs running macOS Big Sur or later the startup sound is enabled by default, but it can be disabled by the user within System Preferences (Big Sur or Monterey) or System Settings (Ventura). Startup chime: The Macintosh startup chime is played on power-up, before trying to boot an operating system. The sound indicates that diagnostic tests run immediately at startup have found no hardware or fundamental software problems. The specific sound differs depending on the ROM, which varies greatly depending on the Macintosh model. The first version of the sound, in the first three Macintosh models, is a simple square-wave "beep"; all subsequent sounds are various chords. Startup chime: Mark Lentczner created the software that plays the arpeggiated chord in the Macintosh II. Variations of this sound were deployed until Jim Reekes created the startup chime used in the Quadra 700 through the Quadra 800. Reekes said, "The startup sound was done in my home studio on a Korg Wavestation EX. It's a C major chord, played with both hands stretched out as wide as possible (with 3rd at the top, if I recall)." He created the sound because he was annoyed with the tri-tone startup chimes, which were too closely associated with the death chimes and with computer crashes. He recalls that Apple did not give him permission to change the sound, but that he secretly snuck the new sound into the computers with the help of the engineers in charge of the ROM chips. When Apple discovered this, he refused to change it back, using various claims in order to keep the new sound intact. He is also the creator of the iconic (or "earconic", as he calls it) "bong" startup chime in most Macintoshes since the Quadra 840AV. A slightly lower-pitched version of this chime is in all PCI-based Power Macs until the iMac G3. The Macintosh LC, LC II, and Macintosh Classic II do not use the Reekes chime, instead using an F major chord that produces a simple "ding" sound. The first generation of Power Macintosh computers also do not use the Reekes chime, instead using a chord strummed on a Yamaha 12-string acoustic guitar by jazz guitarist Stanley Jordan. Further, the Power Macintosh 5200–6300 computers (excluding the 5400 and 5500, which have the "bong" chime like the PCI-based Power Macs) use a unique chime, which also appeared in the television commercials for the Power Macintosh and PowerBook series from 1995 until 1998, and the 20th Anniversary Macintosh uses another unique sound. Startup chime: For models built prior to the introduction of the Power Macintosh in 1994, the failure of initial self-diagnostic tests results in a Sad Mac icon, an error code, and the distinctive Chimes of Death sounds. Startup chime: The chime for all Mac computers from 1998 to early 2016 is the same chime first used in the iMac G3: an F-sharp major chord produced by pitch-shifting the 840AV's sound. Since 2012, the Mac startup chime has been a registered trademark in the United States; it is featured in the 2008 Pixar film WALL-E, when the titular robot character is fully recharged by solar panels, as well as in the 2007 Brad Paisley song "Online". Starting with the 2016 MacBook Pro, all new Macs shipped without a startup chime, booting silently when powered on.
In 2020, the startup chime was added back to these models with the release of macOS Big Sur, which allows it to be enabled or disabled in System Preferences. On the macOS Big Sur 11.0.1 beta, it was discovered that the new lower-pitched chime was brought to all older supported Macs. A firmware update included in the macOS Catalina 2020-001 Security Update and the macOS Mojave 2020-007 Security Update brought the new Big Sur startup chime to all Big Sur-supported Macs, including the unsupported 2013 iMac. Happy Mac: A Happy Mac is the normal bootup (startup) icon of an Apple Macintosh computer running older versions of the Mac operating system. It was designed by Susan Kare in the 1980s, drawing inspiration from the design of the Compact Macintosh series and from the Batman character Two-Face. The icon remained unchanged for many years until it and related icons were updated to 8-bit color. The Happy Mac indicates that booting has successfully begun, whereas a Sad Mac (along with a "Chimes of Death" melody or one or more beeps) indicates a hardware or software problem. Happy Mac: When a Macintosh boots into the classic Mac OS (Mac OS 9 or lower), the system plays its startup chime, the screen turns gray, and the Happy Mac icon appears, followed by the "Welcome to Mac OS" splash screen (or the small "Welcome to Macintosh" screen in System 7.5 and earlier), which underwent several stylistic changes, notably the progress bar introduced in System 7.5 and the extension icons appearing in the bottom left. Mac OS versions 8.6 and later also include the version number in this splash screen (for example, "Mac OS 9" in big black text). Happy Mac: On early Macs without an internal hard drive, the computer boots up to a point where it needs to load the operating system from a floppy disk. Until the user inserts the correct disk, the Mac displays a floppy icon with a blinking question mark. In New World ROM Macs, a folder icon with a question mark that repeatedly changes to the Finder icon is shown if a System Folder or boot loader file cannot be found on the startup disk. Happy Mac: With Mac OS X 10.1, a new Happy Mac was included. This is also the last version with a Happy Mac icon; in version 10.2, the Happy Mac symbol was replaced with the Apple logo. In OS X Lion 10.7, the Apple logo was slightly shrunk and a drop shadow was added. In OS X Yosemite 10.10, the white screen with a gray Apple logo was replaced with a black screen with a white Apple logo, and the spinning wheel was replaced with a loading bar. However, this only applies to Macs from 2013 and later, including the 2012 Retina MacBook Pros, and requires a firmware update to be applied; all earlier Macs still use the old screen. The shadow on the Apple logo was removed in OS X El Capitan 10.11. In 2016 and later Macs (excluding the Early 2016 MacBook), the Apple logo appears immediately when the screen turns on. Happy Mac: The Face ID logo for the iPhone X was based on the Happy Mac. Bomb screen: With the introduction of Mac OS X, in addition to the blinking system folder icon, a prohibition icon was added, shown when an incorrect OS version is found. The bomb screen of the classic Mac OS was replaced with a kernel panic, which was originally colored white but was changed to black in version 10.3.
Sad Mac: A Sad Mac is a symbol used by older-generation Apple Macintosh computers (hardware using the Old World ROM and not Open Firmware, i.e. those predating onboard USB), starting with the original 128K Macintosh and ending with the last NuBus-based Power Macintosh models (including the first-generation 6100, 7100, and 8100, as well as the PowerBook 5300 and 1400), to indicate a severe hardware or software problem that prevented startup from occurring successfully. The Sad Mac icon is displayed along with a set of hexadecimal codes that indicate the type of problem at startup; different codes correspond to different errors. It appears in place of the normal Happy Mac icon, which indicates that the startup-time hardware tests were successful. In 68k models made after the Macintosh II, the Chimes of Death are played as well. Sad Mac: Models prior to the Macintosh II crash silently and display the Sad Mac without playing any tone. PowerPC Macs play a sound effect of a car crash, and computers equipped with the PowerPC upgrade card use the three-note brass fanfare death chime (A, E-natural, and E-flat) followed by the sound of a drum, the same as the Macintosh Performa 6200 and Macintosh Performa 6300. Sad Mac: A Sad Mac may be deliberately generated at startup by pressing the interrupt switch on Macintosh computers that had one installed, or by pressing the Command and Power keys shortly after the startup chime. On some Macintoshes, such as the PowerBook 540c, if the user presses the Command and Power keys before the boot screen displays, the machine plays the "chimes of death"; in that case the chimes play at a fraction of normal speed and no Sad Mac is displayed. Sad Mac: Old World ROM Power Macintosh and PowerBook models based on the PCI architecture do not use a Sad Mac icon and instead only play the error/car-crash sound on a hardware failure (such as missing or bad memory, an unusable CPU, or similar). Mac OS X 10.2 Jaguar and later instead use the universal "no" symbol to denote a hardware or software error that renders the computer non-bootable. Chimes of Death: The Chimes of Death are the Macintosh equivalent of a beep code on IBM PC compatibles. On all Macintosh models predating the adoption of PCI and Open Firmware, the Chimes of Death are often accompanied by a Sad Mac icon in the middle of the screen; more information about the Sad Mac is above. Chimes of Death: Different Macintosh series have different death chimes. The Macintosh II was the first to use the death chimes: a loud and eerie upward major arpeggio, with different chimes on many models. The Macintosh Quadra, Centris, Performa (including the 6200 and 6300, which were also Power Macintosh models, where it occurs only after the screen lights up), LC, and Macintosh Classic II play a generally softer and lower-pitched version of the upward major arpeggio, followed by three or four notes, with slight variation depending on the model of the Macintosh. The PowerBook 5300, 190, and 1400 use the second half of the eight-note arpeggio found on the Quadra and Centris models, or the entire death chime if the error occurs before the screen lights up. The Macintosh Quadra 660AV and Centris 660AV use a single pass of the Roland D-50's "Digital Native Dance" sample loop, and the NuBus-based Power Macintosh models (including the 6100, 7100, and 8100) use a car crash sound.
Before the screen comes on, the Power Macintosh and Performa 6200 and 6300 series, along with the Power Macintosh upgrade card, use an eerily dramatic three-note brass fanfare with a rhythm of drums and cymbals; once the screen is on, the former play the eight-note arpeggio instead. The pre-G3 PCI Power Macs, the beige G3 Power Macs, the G3 All-In-One, and the PowerBook 2400, 3400, and G3 all use the sound of glass shattering; these models do not display a Sad Mac icon. Since the introduction of the iMac in 1998, the Chimes of Death have no longer been used, in favor of a series of tones that indicate hardware errors.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Methanol dehydrogenase (cytochrome c)** Methanol dehydrogenase (cytochrome c): Methanol dehydrogenase (cytochrome c) (EC 1.1.2.7, methanol dehydrogenase, MDH) is an enzyme with the systematic name methanol:cytochrome c oxidoreductase. This enzyme catalyses the following chemical reaction: a primary alcohol + 2 ferricytochrome cL ⇌ an aldehyde + 2 ferrocytochrome cL + 2 H+. This periplasmic quinoprotein alcohol dehydrogenase is present only in methylotrophic bacteria.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fibromatosis colli** Fibromatosis colli: Fibromatosis colli (FMC), also termed sternocleidomastoid tumor of infancy, pseudotumor of infancy, and infancy sternocleidomastoid pseudotumor, is an uncommon (incidence: 0.4%–1.3% of live births) congenital tumor in one of the two sternocleidomastoid neck muscles, although rare cases have presented with an FMC tumor in both sternocleidomastoid muscles. A tumor is here defined as a growth of tissue that is not coordinated with the normal surrounding tissue and persists in growing even if the original trigger for its growth is removed. FMC tumors are benign growths that may cause disfigurements but are not cancers and do not metastasize (i.e. spread) to distant tissues. As judged by microscopic cytology analyses, fibromatosis colli tumors consist of spindle-shaped fibroblasts (i.e. the most common cell type in connective tissue) located in a background of collagen fibers, decomposing skeletal muscle fibers, and, in some cases, regenerating skeletal muscle fibers. The fibroblasts have a completely normal appearance, with no evidence suggesting that they are malignant. The World Health Organization in 2020 classified fibromatosis colli in the category of benign fibroblastic and myofibroblastic tumors. In the majority of cases, FMC tumors decrease in size and completely resolve by the newborn's second year. If left untreated, however, a significant percentage of cases progress to, and are the most frequent cause of, congenital muscular torticollis, i.e. an abnormal, asymmetrical head or neck position commonly called wry neck. Untreated FMC tumors may also progress to facial asymmetry, plagiocephaly (i.e. flattened head), permanent loss of neck mobility, scoliosis (i.e. sidewise curvature of the spine), or other structural disfigurements that result from compensatory mechanisms. Presentation: FMC tumors most commonly present as slow-growing, firm, mobile, nontender, spindle-shaped masses in the lower two-thirds of the sternocleidomastoid muscle of infants within 8 weeks (average: 24 days) of delivery. At diagnosis, these infants' heads may tilt toward the side with the mass while their chins tilt to the opposite, uninvolved side. This tilting is due to sternocleidomastoid muscle contracture. FMC tumors are more common in males and in the right sternocleidomastoid muscle, although very rare cases present with bilateral tumors. Untreated, the masses may continue to grow for weeks after birth but then stabilize, start regressing after 4–8 months of life, and over the ensuing 1–2 years typically fully resolve. In three studies, >25%, >50%, and >80% of these infants had had difficult deliveries such as a breech birth, delivery requiring forceps, primigravida birth (i.e. the mother's first child), prolonged, difficult labor, or delivery by Caesarean section. From 6 to 20% of newborns with fibromatosis colli also present with other congenital lesions such as hip dysplasia or, less commonly, facial asymmetry. Pathology: Microscopic histopathological analyses of biopsied FMC tumor tissues typically find benign-appearing, spindle-shaped fibroblasts, decomposing skeletal muscle fibers, and, in some cases, regenerating skeletal muscle fibers in a collagen fiber-containing background. If necessary, these tumors are typically diagnosed by microscopic examination of fine-needle aspiration samples rather than by the more invasive approach of tumor biopsy sampling. The aspirates show scant to moderately cellular, scattered, oval-shaped to spindle-shaped fibroblasts, naked nuclei (i.e.
cell nuclei virtually devoid of other cell elements such as the cytoplasm), wisps of collagen, atrophic, degenerating muscle fibers, regenerating muscle fibers, and intact skeletal muscle cells containing multiple nuclei. There is no evidence of inflammation, hemorrhage, cell necrosis, or rapidly dividing and/or proliferating cells. Etiology: It has been postulated that the FMC tumor mass itself is a remnant of a hematoma caused by tissue injury occurring during delivery. However, there are no features of hematomas (e.g. hemosiderin released by dying red blood cells) in these lesions at diagnosis, and no evidence of trauma (e.g. overlying skin changes or discolorations) at birth in infants who later present with FMC tumors. It has also been suggested that venous outflow obstruction occurring in the fetus while in the uterus or during delivery leads to degeneration of sternocleidomastoid muscle fibers and subsequent fibrosis of the damaged areas. Finally, it has been suggested that an injury due to poor fetal head positioning in the uterus produces a compartment syndrome-like, pressure-induced injury to the sternocleidomastoid muscle that results in muscle cell death followed by tissue fibrosis and the usual pathology of FMC tumors. Diagnosis: The recommended first step after clinically detecting a sternocleidomastoid mass in a newborn infant is Doppler ultrasonography combined with duplex ultrasonography. This method commonly reveals diagnostic (up to 100% sensitivity rates) findings of diffuse or focal enlargement of the sternocleidomastoid muscle with no cysts or other ultrasonography-detected abnormalities. If these imaging findings are not diagnostic, magnetic resonance imaging may clarify the diagnosis. If the findings still remain unclear, microscopic examination of fine-needle aspirates taken from the tumor will, when combined with the imaging findings, the lesion's natural history, and its clinical presentation, likely confirm the diagnosis of FMC in virtually all cases. Treatment and prognosis: Prompt diagnosis and treatment of FMC is crucial for avoiding the impairments that may follow long-term mal-positioning of the infant's head, such as permanent facial asymmetry, flattened head, loss of neck mobility, and scoliosis. About 95% of infants with minimal limitations in their head's mobility show improvements in this mobility after four weeks of an active home stimulation program (i.e. observation, massage, and active and passive stretching), while ~91% of infants with more severe movement limitations show good results after three sessions of targeted physiotherapy over a 3–4 month period. The recommended treatment for infants who show no improvements in mobility after one year of physical therapy, or who initially present at >12 months of age, is surgical tenotomy (i.e. cutting) of one of the tendons of the involved sternocleidomastoid muscle. Tenotomy may use the open surgical (i.e. cutting of skin) approach or an endoscopic (i.e. percutaneous) approach. The overall prognosis of FMC tumors treated with these measures is good. Infants presenting with FMC should also be examined for the presence of hip dysplasia.
**Betamethasone phosphate** Betamethasone phosphate: Betamethasone sodium phosphate is a synthetic glucocorticoid corticosteroid and a corticosteroid ester.
**De Bruijn–Newman constant** De Bruijn–Newman constant: The de Bruijn–Newman constant, denoted by Λ and named after Nicolaas Govert de Bruijn and Charles Michael Newman, is a mathematical constant defined via the zeros of a certain function H(λ, z), where λ is a real parameter and z is a complex variable. More precisely,

$$H(\lambda, z) := \int_0^{\infty} e^{\lambda u^2}\,\Phi(u)\cos(zu)\,du,$$

where Φ is the super-exponentially decaying function

$$\Phi(u) = \sum_{n=1}^{\infty} \left(2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}\right) e^{-\pi n^2 e^{4u}},$$

and Λ is the unique real number with the property that H has only real zeros if and only if λ ≥ Λ. De Bruijn–Newman constant: The constant is closely connected with Riemann's hypothesis concerning the zeros of the Riemann zeta-function: since the Riemann hypothesis is equivalent to the claim that all the zeros of H(0, z) are real, it is equivalent to the conjecture that Λ ≤ 0. Brad Rodgers and Terence Tao proved that Λ < 0 cannot be true, so the Riemann hypothesis is equivalent to Λ = 0. A simplified proof of the Rodgers–Tao result was later given by Alexander Dobner. History: De Bruijn showed in 1950 that H has only real zeros if λ ≥ 1/2, and moreover, that if H has only real zeros for some λ, then H also has only real zeros when λ is replaced by any larger value. Newman proved in 1976 the existence of a constant Λ for which the "if and only if" claim holds; this then implies that Λ is unique. Newman also conjectured that Λ ≥ 0, which was proven by Brad Rodgers and Terence Tao in 2018. Upper bounds: De Bruijn's upper bound of Λ ≤ 1/2 was not improved until 2008, when Ki, Kim and Lee proved Λ < 1/2, making the inequality strict. In December 2018, the 15th Polymath project improved the bound to Λ ≤ 0.22. A manuscript of the Polymath work was submitted to arXiv in late April 2019, and was published in the journal Research in the Mathematical Sciences in August 2019. This bound was further slightly improved in April 2020 by Platt and Trudgian to Λ ≤ 0.2.
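Putting the definition and the results above together, a compact LaTeX summary (restating only facts already given in this article) reads:

```latex
% Definition of the family of functions whose zeros define \Lambda:
%   H(\lambda,z) := \int_0^\infty e^{\lambda u^2}\,\Phi(u)\cos(zu)\,du.
% Status of the constant, as described above (requires amsmath):
\begin{align*}
  0 \;\le\; \Lambda \;\le\; 0.2
  &\quad\text{(lower bound: Rodgers--Tao, 2018; upper bound: Platt--Trudgian, 2020)},\\
  \text{Riemann hypothesis} \;&\Longleftrightarrow\; \Lambda = 0 .
\end{align*}
```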
**GCKey** GCKey: GCKey (French: CléGC) is a standards-based authentication service provided by the Government of Canada. It provides Canadians with secure access to online information and government services and assists Canadian federal government departments in managing and controlling access to their online programs through standardized registration and authentication processes. The GCKey Service issues a GCKey, a unique, anonymous credential that protects communications with online government programs and services. The GCKey Service is logically divided into two high-level components: the Credential Service, responsible for the registration and management of user credentials for individuals participating in the Government of Canada Federation for authentication to online services; and the Authentication Service, responsible for the creation of Security Assertion Markup Language (SAML) assertions confirming to Service Providers (SPs) that a successful user authentication dialog has taken place. The GCKey Service became operational in September 2012 and is available to all Canadians.
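To make the SAML hand-off concrete, here is a minimal Python sketch of what a Service Provider might do with an assertion of the kind the Authentication Service issues. The XML document, issuer URL, and identifier below are hypothetical stand-ins, and real assertions are signed; a production SP must also verify the XML signature, audience, and validity window.

```python
# Minimal sketch (not GCKey's actual implementation): parse a SAML 2.0
# assertion and pull out the issuer and the anonymous credential identifier.
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# Hypothetical, heavily trimmed assertion for illustration only.
sample_assertion = """\
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="_demo123">
  <saml:Issuer>https://example.invalid/gckey-like-idp</saml:Issuer>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">
      anonymous-credential-abc123
    </saml:NameID>
  </saml:Subject>
</saml:Assertion>
"""

def extract_subject(assertion_xml: str) -> tuple:
    """Return (issuer, name_id) from a SAML 2.0 assertion document."""
    root = ET.fromstring(assertion_xml)
    issuer = root.find("saml:Issuer", SAML_NS).text.strip()
    name_id = root.find("saml:Subject/saml:NameID", SAML_NS).text.strip()
    return issuer, name_id

if __name__ == "__main__":
    issuer, name_id = extract_subject(sample_assertion)
    print(f"assertion issued by {issuer} for credential {name_id}")
```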
**Vector signal analyzer** Vector signal analyzer: A vector signal analyzer is an instrument that measures the magnitude and phase of the input signal at a single frequency within the IF bandwidth of the instrument. The primary use is to make in-channel measurements, such as error vector magnitude, code domain power, and spectral flatness, on known signals. Vector signal analyzers are useful in measuring and demodulating digitally modulated signals like W-CDMA, LTE, and WLAN. These measurements are used to determine the quality of modulation and can be used for design validation and compliance testing of electronic devices. Operation: The vector signal analyzer's spectrum analysis process typically comprises a down-conversion and digitizing stage and a DSP and display stage. Down-convert and digitize stage A vector signal analyzer operates by first down-converting the signal spectra using superheterodyne techniques. A portion of the input signal spectrum is down-converted (using a voltage-controlled oscillator and a mixer) to the center frequency of a band-pass filter. The use of a voltage-controlled oscillator allows different carrier frequencies to be handled. After the conversion to an intermediate frequency, the signal is filtered in order to band-limit the signal and prevent aliasing. The signal is then digitized using an analog-to-digital converter. The sampling rate is often varied in relation to the frequency span under consideration. DSP and display stage Once the signal is digitized, it is separated into quadrature and in-phase components using a quadrature detector, which is typically implemented with a discrete Hilbert transform. Several measurements are made and displayed using these signal components and various DSP processes, such as the ones below. Signal spectrum from FFT An FFT is used to compute the frequency spectrum of the signal. Usually there is a windowing function option to limit spectral leakage and enhance frequency resolution; the window is applied by multiplying it with the digitized values of the sample period before computing the FFT. Constellation diagram A constellation diagram represents a signal modulated by a digital modulation scheme such as quadrature amplitude modulation or phase-shift keying. This diagram maps the magnitude of the quadrature and in-phase components to the vertical and horizontal directions respectively. Qualitative assessments of signal integrity can be made based on interpretation of this diagram. Error vector magnitude By representing the quadrature and in-phase components as the vertical and horizontal axes, the error vector magnitude can be computed as the distance between the ideal and measured constellation points on the diagram. This requires knowledge of the modulated signal in order to compare the received signal with the ideal signal. Typical functionality Typical vector signal analyzer displays feature the spectrum of the signal measured within the IF bandwidth, a constellation diagram of the demodulated signal, error vector magnitude measurements, and a time-domain plot of the signal. Many more measurement results can be displayed depending on the type of modulation being used (symbol decoding, MIMO measurements, radio frame summary, etc.).
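As a rough illustration of the error vector magnitude measurement just described, the following Python/NumPy sketch compares measured I/Q samples against an ideal QPSK constellation. It is a toy calculation under simplified assumptions (perfect synchronization, unit-energy constellation, arbitrary noise level), not any instrument's firmware.

```python
# Illustrative EVM computation for a QPSK signal: compare measured I/Q samples
# to the nearest ideal constellation point and report RMS error vector
# magnitude as a percentage of the RMS reference amplitude.
import numpy as np

# Ideal unit-energy QPSK constellation points.
IDEAL = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def evm_percent(measured: np.ndarray) -> float:
    # For each measured symbol, pick the closest ideal point (symbol decision).
    decisions = IDEAL[np.argmin(np.abs(measured[:, None] - IDEAL[None, :]), axis=1)]
    error = measured - decisions                      # error vectors
    ref = np.sqrt(np.mean(np.abs(decisions) ** 2))    # RMS reference amplitude
    return 100.0 * np.sqrt(np.mean(np.abs(error) ** 2)) / ref

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    symbols = rng.choice(IDEAL, size=10_000)          # ideal transmitted symbols
    noisy = symbols + (rng.normal(0, 0.03, 10_000)    # additive I/Q noise
                       + 1j * rng.normal(0, 0.03, 10_000))
    print(f"EVM ≈ {evm_percent(noisy):.2f} %")
```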
**Cellular apoptosis susceptibility protein** Cellular apoptosis susceptibility protein: The cellular apoptosis susceptibility protein (CAS) is an exportin which in the nucleus is bound to RanGTP. Function: The CAS family of proteins induces cellular apoptosis and cell proliferation. Apoptosis is a specialized sequence of events that a cell can induce for programmed death. CAS is a two-terminal protein, with an N-terminal and a C-terminal, and there is a positive correlation between the presence of CAS and the degree of cellular proliferation. In keeping with this correlation, in the absence of the CAS protein in a cell, apoptosis is inhibited. Along with being an inducer of apoptosis, CAS also plays a role as a checkpoint in the cell cycle: without the CAS protein, a cell will not be able to go beyond the G2 phase. It is in the nucleus of the cell where its exportin function comes into play. Function: The Cas family of proteins can be divided into 4 functional domains: expression, interference, adaption, and ancillary. The expression domain helps with crRNA binding and with binding of targets. The interference module helps with the cleavage of a target. The adaption domain helps with spacer acquisition. Lastly, the ancillary domain helps with regulation of the gene and other CRISPR functions. The CRISPR-Cas family of proteins is also divided into 3 different types, Type I, Type II, and Type III. Each of the 3 types of CRISPR-Cas is characterized by a specific gene; Type I: Cas3, Type II: Cas9, Type III: Cas10.
**Turducken** Turducken: Turducken is a dish consisting of a deboned chicken stuffed into a deboned duck, further stuffed into a deboned turkey. Outside of the United States and Canada, it is known as a three-bird roast. Gooducken is an English variant, replacing turkey with goose. Turducken: The word turducken is a portmanteau combining turkey, duck, and chicken. The dish is a form of engastration, a recipe method in which one animal is stuffed inside the gastric passage of another—twofold in this instance. The thoracic cavity of the chicken/game hen and the rest of the gaps are stuffed, sometimes with a highly seasoned breadcrumb mixture or sausage meat, although some versions have a different stuffing for each bird. The result is a fairly solid layered poultry dish, suitable for cooking by braising, roasting, grilling, or barbecuing. The turducken was popularized in America by John Madden, who promoted the unusual dish during NFL Thanksgiving Day games and, later, Monday Night Football broadcasts. On one occasion, the commentator sawed through a turducken with his bare hand, live in the booth, to demonstrate the turducken's contents. Madden ate his first on-air turducken on December 1, 1996, during a game between the New Orleans Saints and St. Louis Rams at the Superdome. Origin: Credit for the creation of the turducken is uncertain, though it is generally agreed to have been popularized by Cajun chef Paul Prudhomme. The most common claimant is Hebert's Specialty Meats in Maurice, Louisiana, whose owners Junior and Sammy Hebert say they created it in 1985 "when a local man brought his own birds to their shop and asked the brothers to create the medley". In the United Kingdom, a turducken is a type of ballotine called a "three-bird roast" or a "royal roast". The Pure Meat Company offered a five-bird roast (a goose, a turkey, a chicken, a pheasant, and a pigeon, stuffed with sausage), described as a modern revival of the traditional Yorkshire Christmas pie, in 1989; and a three-bird roast (a duck stuffed with chicken stuffed with a pigeon, with sage and apple stuffing) in 1990. Gooducken is a goose stuffed with a duck, which is in turn stuffed with a chicken. Historical predecessors: In his 1807 Almanach des Gourmands, gastronomist Grimod de La Reynière presents his rôti sans pareil ("roast without equal")—a bustard stuffed with a turkey, a goose, a pheasant, a chicken, a duck, a guinea fowl, a teal, a woodcock, a partridge, a plover, a lapwing, a quail, a thrush, a lark, an ortolan bunting and a garden warbler—although he states that, since similar roasts were produced by ancient Romans, the rôti sans pareil was not entirely novel. The final bird is very small but large enough to just hold an olive; the description also suggests that, unlike modern multi-bird roasts, there was no stuffing or other packing placed in between the birds. Historical predecessors: An early form of the recipe was "Pandora's cushion", a goose stuffed with a chicken stuffed with a quail. Another version of the dish is credited to French diplomat and gourmand Charles Maurice de Talleyrand-Périgord. The 1891 newspaper article "French Legends Of The Table" offers Quail à la Talleyrand: The following, for instance, is Talleyrand's fanciful and somewhat roundabout way of roasting a quail. On a day of "inspiration gourmande" at his hotel in the Rue Saint-Florentin, he composed the following recipe: Take a plump quail, seasoned with truffles, and made tender by having been put into champagne.
You put it carefully inside a young Bresse chicken; then sew up the opening, and put dabs of butter all over the chicken. Again, you put the chicken inside a fine Berri turkey, and roast the turkey very carefully before a bright fire. What will be the result? All the juice of the turkey is absorbed by the fowl, and all the juice of the fowl in its turn by the quail. After two hours' roasting, the fowl, which in reality is composed of three fowls, is ready, and you place the steaming trinity upon a dish of fine porcelain or chiseled silver. Then you pull the chicken out of the turkey, and the quail out of the chicken. The quail? Is it correct to talk of the quail, when this delicious, perfumed dish is indeed too good for any name? You take the quail as you would some sacred relic, and serve it hot, steaming, with its aroma of truffles, after having roasted it to a golden yellow by basting it diligently with the best Gournay butter. Historical predecessors: In Hunan cuisine, the famed chef Liu Sanhe from Changsha invented a dish called sanceng taoji (Chinese: 三层套鸡), meaning "three-layer set chicken", consisting of a sparrow inside a pigeon inside a hen, along with medicinal herbs such as Gastrodia elata and wolfberries. He originally devised the dish to relieve the headaches of Lu Diping's ill concubine. Historical predecessors: The book Passion India: The Story of the Spanish Princess of Kapurthala (p. 295) features a section that recounts a similar dish in India in the late 1800s: Invited by Maharajah Ganga Singh to the most extraordinary of dinners, in the palace at Bikaner, when Anita asks her host for the recipe of such a succulent dish, he answers her seriously, "Prepare a whole camel, skinned and cleaned, put a goat inside it, and inside the goat a turkey and inside the turkey a chicken. Stuff the chicken with a grouse and inside that put a quail and finally inside that a sparrow. Then season it all well, place the camel in a hole in the ground and roast it."
**Fill dirt** Fill dirt: Fill dirt (also called cleanfill, or just fill) is earthy material which is used to fill in a depression or hole in the ground or create mounds or otherwise artificially change the grade or elevation of real property. Fill dirt is usually subsoil (soil from beneath topsoil) and underlying soil parent material which has little soil organic matter or biological activity. Fill dirt is taken from a location where soil is being removed as a part of leveling an area for construction; it may also contain sand, rocks, and stones, as well as earth. Fill dirt should be as free of organic matter as possible, since organic matter will decompose, creating pockets of empty space within the fill which could result in settling. Uneven or excessive settling of the fill can result in damage to any structures built on the fill. Fill dirt: A common use of fill dirt is in highway maintenance to build up the shoulders of highways so that the ground on either side of the pavement is at the same level as the pavement itself, and so that the highway shoulders are sufficiently wide to allow vehicles room to pull off of the highway if needed. Fill dirt: A second common use of fill dirt is to fill in a low-lying construction site to raise the level of the building foundation in order to reduce the chances of flooding. Massive quantities of fill dirt were used in improvements to the Port of Seattle's Sea-Tac Airport, in the addition of a new runway at Hartsfield-Jackson Atlanta International Airport in Atlanta, Georgia, and at Kansai International Airport off the coast of Osaka, Japan, a project that created a man-made island of some five square kilometers. Fill dirt: Fill dirt is most often mined from commercial sand and gravel mines and then imported to the project site, and it must meet specifications for gradation outlined by the project's geotechnical engineer. The logistics and availability of fill dirt material have become a growing concern for the commercial sand and gravel industry in recent years as the need for fill material has surged and the available resources in mines are depleted. This directly impacts the public and end users: construction costs increase as material grows scarcer and must be imported from greater distances. Fill dirt: In an effort to combat the costs and increasing logistical challenges related to dwindling sand and gravel stockpiles, some services offer contractors and the public a way to exchange fill dirt materials in addition to locating operating sand and gravel mines. Internet-based services allow consumers and contractors to locate free fill dirt by connecting them with another contractor or consumer in need of a dump site on a nearby project. Fill dirt: Fill dirt is also used for landscaping projects which involve the creation of ridges and earth structures for pools, waterfalls, and other water features, as well as to break up a level area in order to provide more interesting textures to the landscape.
**Brachiocephalic artery** Brachiocephalic artery: The brachiocephalic artery (or brachiocephalic trunk) is an artery of the mediastinum that supplies blood to the right arm and the head and neck. It was previously known as the innominate artery, meaning unnamed artery. Brachiocephalic artery: It is the first branch of the aortic arch. Soon after it emerges, the brachiocephalic artery divides into the right common carotid artery and the right subclavian artery. There is no brachiocephalic artery for the left side of the body; the left common carotid and the left subclavian artery come directly off the aortic arch. However, there are two brachiocephalic veins. Structure: The brachiocephalic artery arises, on a level with the upper border of the second right costal cartilage, from the start of the aortic arch, on a plane anterior to the origin of the left carotid artery. It ascends obliquely upward, backward, and to the right to the level of the upper border of the right sternoclavicular articulation, where it divides into the right common carotid and right subclavian arteries. The artery crosses in front of the trachea obliquely from left to right, roughly at the middle of the trachea or the level of the ninth tracheal cartilage. Structure: Relations The brachiocephalic artery is related anteriorly to the left brachiocephalic vein and the thymus; posteriorly to the trachea; on the right to the superior vena cava, the right brachiocephalic vein, and the pleura; and on the left to the left common carotid artery and the thymus. The thymus typically sits atop the brachiocephalic artery and separates the artery from the posterior surface of the manubrium of the sternum. Branches The thyroid ima artery ascends in front of the trachea to the lower part of the thyroid, which it supplies. Variation The innominate artery usually gives off no branches, but occasionally a small branch, the thyroid ima artery, arises from it. At other times, it gives off a thymic or bronchial branch. The thyroid ima artery varies greatly in size and appears to compensate for deficiency or absence of one of the other thyroid vessels. It occasionally arises from the aorta, the right common carotid, the subclavian, or the internal mammary. Development: The aortic sac is the embryological precursor of the proximal portion of the aortic arch. It is chronologically the first portion of the aorta to form, and appears as a dilation superior to the truncus arteriosus. Of the two horns of the aortic sac, the right horn gives rise to the brachiocephalic artery. The right horn then fuses with the right-sided third and fourth aortic arches, which give rise to the right common carotid artery and the proximal right subclavian artery respectively. The brachiocephalic artery is thus ultimately derived from the ventral aorta, like the ascending aorta; the left horn forms the proximal ascending portion of the aorta. Function: The brachiocephalic artery carries blood from the heart to the right arm, head, and neck. Clinical significance: Innominate artery aneurysms represent 3% of all arterial aneurysms. Because of the risk of thromboembolic complications and spontaneous rupture, early surgical repair is usually recommended. Innominate artery aneurysms often present with signs of innominate artery compression syndrome and have a very high risk of rupture. The majority of IA aneurysms are due to atherosclerosis.
Other causes include syphilis, tuberculosis, Kawasaki's disease, Takayasu's arteritis, Behçet's disease, connective tissue disease, and angiosarcoma. Tracheo-innominate artery fistula (TIF) is a surgical emergency with high mortality rates. Its reported incidence is 0.1%–1.0% after tracheostomy, and it is usually fatal once it bleeds. For successful management of TIF, treatment should be initiated immediately, with special considerations kept in mind. Several abnormalities of the brachiocephalic artery have been reported. A retroesophageal innominate artery is a rare congenital anomaly. An aberrant innominate artery crossing anterior to the trachea just below the thyroid isthmus has also been reported. Anterior neck surgeries such as bronchoscopies and mediastinoscopies are common and safe procedures, since no major vessel is normally encountered in the surgical field when operating around the trachea. When this type of abnormality is encountered, however, even minor trauma can lead to massive bleeding culminating in death. An aberrant innominate artery can cause an incomplete vascular ring: it does not completely encircle the trachea and esophagus, but may compress either one. An anomalous innominate artery originates more distally from the transverse arch and then crosses the trachea, causing anterior tracheal compression.
**ATP:ADP antiporter family** ATP:ADP antiporter family: The ATP:ADP Antiporter (AAA) Family (TC# 2.A.12) is a member of the major facilitator superfamily. Members of the AAA family have been sequenced from bacteria and plants. Structure and function: One protein from the obligate intracellular bacterial parasite Rickettsia prowazekii is 498 aminoacyl residues long and is believed to span the membrane 12 times. The transporter is an obligate exchange translocase specific for ATP and ADP. It functions to take up ATP from the eukaryotic cell cytoplasm into the bacterium in exchange for ADP. The ATP/ADP exchangers can also transport inorganic phosphate, but not ribonucleoside monophosphates or deoxyribonucleotides. Transport reaction: The transport reaction catalyzed by the antiporters is: ATP (out) + ADP (in) ⇌ ATP (in) + ADP (out) Homology: The AAA family proteins are distantly related to members of the major facilitator superfamily, and are not related to the mitochondrial ATP/ADP exchangers of the mitochondrial carrier family, which pump ATP out of mitochondria in accordance with the polarity of the mitochondrial membrane potential.
**Monte Carlo N-Particle Transport Code** Monte Carlo N-Particle Transport Code: Monte Carlo N-Particle Transport (MCNP) is a general-purpose, continuous-energy, generalized-geometry, time-dependent, Monte Carlo radiation transport code designed to track many particle types over broad ranges of energies; it is developed by Los Alamos National Laboratory. Specific areas of application include, but are not limited to, radiation protection and dosimetry, radiation shielding, radiography, medical physics, nuclear criticality safety, detector design and analysis, nuclear oil well logging, accelerator target design, fission and fusion reactor design, and decontamination and decommissioning. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and fourth-degree elliptical tori. Monte Carlo N-Particle Transport Code: Point-wise cross section data are typically used, although group-wise data also are available. For neutrons, all reactions given in a particular cross-section evaluation (such as ENDF/B-VI) are accounted for. Thermal neutrons are described by both the free gas and S(α,β) models. For photons, the code accounts for incoherent and coherent scattering, the possibility of fluorescent emission after photoelectric absorption, absorption in pair production with local emission of annihilation radiation, and bremsstrahlung. A continuous-slowing-down model is used for electron transport that includes positrons, k x-rays, and bremsstrahlung but does not include external or self-induced fields. Monte Carlo N-Particle Transport Code: Important standard features that make MCNP very versatile and easy to use include a powerful general source, criticality source, and surface source; both geometry and output tally plotters; a rich collection of variance reduction techniques; a flexible tally structure; and an extensive collection of cross-section data. MCNP contains numerous flexible tallies: surface current & flux, volume flux (track length), point or ring detectors, particle heating, fission heating, pulse height tally for energy or charge deposition, mesh tallies, and radiography tallies. Monte Carlo N-Particle Transport Code: The key value MCNP provides is a predictive capability that can replace expensive or impossible-to-perform experiments. It is often used to design large-scale measurements, providing significant time and cost savings to the community. LANL's latest version of the MCNP code, version 6.2, represents one piece of a set of synergistic capabilities each developed at LANL; it includes evaluated nuclear data (ENDF) and the data processing code, NJOY. The international user community's high confidence in MCNP's predictive capabilities is based on its performance with verification and validation test suites, comparisons to its predecessor codes, automated testing, underlying high-quality nuclear and atomic databases, and significant testing by its users. History: The Monte Carlo method for radiation particle transport has its origins at LANL, dating back to 1946. The creators of these methods were Drs. Stanislaw Ulam, John von Neumann, Robert Richtmyer, and Nicholas Metropolis. Monte Carlo for radiation transport was conceived by Stanislaw Ulam in 1946, while playing solitaire during recovery from an illness.
"After spending a lot of time trying to estimate success by combinatorial calculations, I wondered whether a more practical method...might be to lay it out say one hundred times and simply observe and count the number of successful plays." In 1947, John von Neumann sent a letter to Robert Richtmyer proposing the use of a statistical method to solve neutron diffusion and multiplication problems in fission devices. His letter contained an 81-step pseudo code and was the first formulation of a Monte Carlo computation for an electronic computing machine. Von Neumann's assumptions were: time-dependent, continuous-energy, spherical but radially-varying, one fissionable material, isotropic scattering and fission production, and fission multiplicities of 2, 3, or 4. He suggested 100 neutrons each to be run for 100 collisions and estimated the computational time to be five hours on ENIAC. Richtmyer proposed suggestions to allow for multiple fissionable materials, no fission spectrum energy dependence, single neutron multiplicity, and running the computation for computer time and not for the number of collisions. The code was finalized in December 1947. The first calculations were run in April/May 1948 on ENIAC. History: While waiting for ENIAC to be physically relocated, Enrico Fermi invented a mechanical device called FERMIAC to trace neutron movements through fissionable materials by the Monte Carlo method. Monte Carlo methods for particle transport have been driving computational developments since the beginning of modern computers; this continues today. History: In the 1950s and 1960s, these new methods were organized into a series of special-purpose Monte Carlo codes, including MCS, MCN, MCP, and MCG. These codes were able to transport neutrons and photons for specialized LANL applications. In 1977, these separate codes were combined to create the first generalized Monte Carlo radiation particle transport code, MCNP. In 1977, MCNP was first created by merging MCNG with MCP to create MCNP. The first release of the MCNP code was version 3 and was released in 1983. It is distributed by the Radiation Safety Information Computational Center in Oak Ridge, TN. Monte Carlo N-Particle eXtended: Monte Carlo N-Particle eXtended (MCNPX) was also developed at Los Alamos National Laboratory, and is capable of simulating particle interactions of 34 different types of particles (nucleons and ions) and 2000+ heavy ions at nearly all energies, including those simulated by MCNP. Both codes can be used to judge whether or not nuclear systems are critical and to determine doses from sources, among other things. MCNP6 is a merger of MCNP5 and MCNPX. Comparison: MCNP6 is less accurate than MCNPX. Geant4 is less accurate than MCNPX. Geant4 is less accurate than MCNP5.Geant4 is slower than MCNPX.
**Lightsaber** Lightsaber: A lightsaber is a fictional energy sword featured throughout the Star Wars franchise. A typical lightsaber is depicted as a luminescent plasma blade about 3 feet (0.91 m) in length emitted from a metal hilt around 10.5 inches (27 cm) in length. First introduced in the original Star Wars film, it has since appeared in most Star Wars films, with at least one lightsaber duel occurring in each installment of the "Skywalker saga". The lightsaber's distinct appearance was created using rotoscoping for the original films, and with digital effects for the prequel and sequel trilogies. Lightsaber: In the Star Wars universe, the lightsaber is the signature weapon of the light-side-wielding Jedi Order and the dark-side-wielding Sith Order. However, the lightsaber can also be wielded by non-Force-sensitive characters as an ordinary weapon or tool. The Jedi use different colored lightsabers (predominantly blue and green, though purple, white, and yellow have also appeared in canon media), while the Sith wield exclusively red-bladed sabers to distinguish themselves from the Jedi. The color of a lightsaber's blade is given by its power source, the kyber crystal, which is influenced by the wielder and the Force as they connect with and tune the crystal. A lightsaber's hilt is built by its wielder and is, therefore, unique in design. There are several variations outside of the traditional single-bladed lightsaber, such as the double-bladed lightsaber (most famously wielded by Darth Maul), crossguard lightsabers (used by Kylo Ren), and the Darksaber, forged by the Mandalorian Jedi Tarre Vizsla, but primarily wielded by the non-Force-sensitive Mandalorian rulers of Mandalore (including Pre Vizsla, Maul, Bo-Katan Kryze, Sabine Wren, Moff Gideon, and Din Djarin). Lightsaber: As presented in the films, a lightsaber's energy blade can cut, burn, and melt through most substances with little resistance. It leaves cauterized wounds in flesh, but can be deflected by another lightsaber blade, by energy shields, or by the metal beskar (found in Mandalorian armor). The blade has even been used as a tool to weld metal. Other times, the lightsaber has been shown to cause bleeding wounds in the flesh, sometimes accompanied by burns. Some exotic saber-proof melee weapons have been introduced in the Expanded Universe as well as later episodic films. An active lightsaber gives off a distinctive hum, which rises in pitch and volume as the blade is moved rapidly through the air. Bringing the blade into contact with another lightsaber's blade produces a loud crackle. Lightsaber: The lightsaber has become one of the most widely recognized elements of the Star Wars franchise. In 2008, a survey of approximately 2,000 film fans found it to be the most popular weapon in film history. Prop construction: For the original Star Wars film, the film prop hilts were constructed by John Stears from old Graflex press camera flash battery packs and other pieces of hardware. The full-sized sword props were designed to appear ignited onscreen, by later creating an "in-camera" glowing effect in post-production. The blade is a three-sided rod which was coated with a Scotchlite retroreflector array, the same sort used for highway signs. A lamp was positioned to the side of the taking camera and reflected towards the subject through 45-degree angled glass so that the sword would appear to glow from the camera's point of view. 
Prop construction: Set decorator Roger Christian found the handles for the Graflex flash gun in a photography shop in Great Marlborough Street, in London's West End. He then added cabinet T-track to the handles, securely attaching them with cyanoacrylate glue. Adding a few "greebles" (surface details), Christian managed to hand-make the first prototype of a lightsaber prop for Luke before production began. George Lucas decided he wanted to add a clip to the handle, so that Luke could hang it on his belt. Once Lucas felt the handle was up to his standards, it went to John Stears to create the wooden dowel rod with front-projection paint so that the animators would have a glow of light to enhance later in post-production. Due to lack of preparation time, Christian's prototype and a second spare were used for the shooting in Tunisia, where filming on Star Wars began. It was discovered, however, that the glowing effect was greatly dependent on the rod's orientation to the camera, and during the Obi-Wan Kenobi/Darth Vader duel, they could clearly be seen as rods. Because of this, the glow would be added in post-production through rotoscoping, which also allowed for diffusion to be employed to enhance the glow. Prop construction: While original trilogy hilts were typically constructed using found parts, during the prequel and sequel trilogies a different process was sometimes used. Hilts were first machined out of metal materials. Then casts would be made using the metal hilts to create resin copies that were used on screen. The resin was often molded over a metal rod that a dueling blade could be attached to for fight sequences. Prop construction: Visual effects Korean animator Nelson Shin, who was working for DePatie–Freleng Enterprises at the time, was asked by his manager if he could animate the lightsaber in the live-action scenes of a film. After Shin accepted the assignment, the live-action footage was given to him. He drew the lightsabers with a rotoscope, producing an animation that was superimposed onto the footage of the physical lightsaber blade prop. Shin explained to the people from Lucasfilm that since a lightsaber is made of light, the sword should look "a little shaky" like a fluorescent tube. He suggested inserting one frame that was much lighter than the others while printing the film on an optical printer, making the light seem to vibrate. Shin also recommended adding a degausser sound on top of the other sounds for the weapon, since the sound would be reminiscent of a magnetic field. The whole process took one week, surprising his company. Lucasfilm showed Shin the finished product, having followed his suggestions to use an X-Acto knife to give the lightsaber a very sharp look, and to have sound accompany the weapon's movements. Prop construction: Sound The lightsaber sound effect was developed by sound designer Ben Burtt as a combination of the hum of idling interlock motors in aged movie projectors and interference caused by a television set on a shieldless microphone. Burtt discovered the latter accidentally as he was looking for a buzzing, sparking sound to add to the projector-motor hum. The pitch changes of lightsaber movement were produced by playing the basic lightsaber tone on a loudspeaker and recording it on a moving microphone, generating Doppler shift to mimic a moving sound source.
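The Doppler trick just described can be imitated digitally. The following Python/NumPy sketch is a signal-processing toy, not Ben Burtt's actual studio chain: it resamples a steady hum as if the source were swinging toward and away from a microphone, so the pitch rises and falls with the motion (the tone frequency, swing amplitude, and swing rate are arbitrary choices).

```python
# Simulate Doppler shift on a steady "hum" tone by resampling it at the
# time-delayed instants a moving source would be heard.
import numpy as np

SR = 44_100                                   # sample rate (Hz)
DUR = 2.0                                     # duration (s)
C = 343.0                                     # speed of sound (m/s)

t = np.arange(int(SR * DUR)) / SR
hum = np.sin(2 * np.pi * 100.0 * t)           # steady 100 Hz base tone

# Source-to-microphone distance oscillates as the "blade" swings:
# 3 m average distance, 2 m swing amplitude, 2 swings per second.
distance = 3.0 + 2.0 * np.sin(2 * np.pi * 2.0 * t)

# Each output sample hears the tone emitted earlier by the travel time d/c;
# linear interpolation resamples the hum at those delayed instants, which
# produces the rising and falling pitch.
emission_time = t - distance / C
doppler_hum = np.interp(emission_time, t, hum, left=0.0)
# doppler_hum now holds the pitch-modulated tone, ready to write to a WAV
# file or play back with any audio library.
```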
Depiction: Lightsabers were present in the earliest drafts as mundane plasma weapons that were used alongside laser guns. The introduction of the Force in a later revision made the Jedi and the Sith supernaturally skilled; initially they were portrayed only as swordsmen. The lightsaber became the Force-user's tool, described in A New Hope by Obi-Wan Kenobi as "not as clumsy or random as a blaster. An elegant weapon, for a more civilized age." The source of a lightsaber's power is a kyber crystal. These crystals are also the power source of the Death Star's superlaser. In films such as Revenge of the Sith and The Last Jedi, melee weapons such as the electrostaff and plasma-lined blades deflect lightsabers. Depiction: Types Lightsabers are depicted as hand-built as part of a Jedi's or Sith's training regimen. Each lightsaber is unique, though some may bear resemblance to others, especially if there is a connection between the builders. The hilts of most lightsabers are straight and predominantly cylindrical, though there are other lightsaber hilt types. The first film appearance of a dual-bladed lightsaber (first depicted in the comic series Tales of the Jedi) was in The Phantom Menace, wielded by Darth Maul; it consists of two regular lightsabers joined at their butt ends, each producing a blade independently. Count Dooku, beginning with the character's first appearance in Attack of the Clones, is shown to have a lightsaber with a curved hilt. The video game Star Wars: The Force Unleashed introduced two other variants: a lightsaber pike (a lightsaber with a shorter blade but a long handle, resembling a spear) and a tonfa-style lightsaber with a right-angle hilt. Depiction: The Star Wars expanded universe adds several lightsaber types, including short and dual-phase (adjustable length) weapons. In Star Wars Rebels, Ezra Bridger's original lightsaber is a hybrid that features a fully functional blaster pistol built into the handle. Kylo Ren, introduced in The Force Awakens, uses a lightsaber that features two crossguard blades, giving it the appearance of a greatsword. His blade also has an unstable, fiery appearance, explained in canon reference books as stemming from a cracked kyber crystal. The Inquisitors of the Galactic Empire are depicted as wielding a unique variation of a double-bladed saber, mounted on a rotating ring enabling the blades 360 degrees of rotation and short-term flight capability. More obscure lightsaber variations, such as the "lightwhip", an elongated flexible blade used in a manner akin to a whip, the "lightclub", an enlarged standard lightsaber, and the "shoto", a dramatically smaller variation often paired with a standard-sized saber, have also made appearances. Depiction: Colors Lightsabers in the first two released films, A New Hope and The Empire Strikes Back, had blades that were either blue (for the Jedi) or red (for the Sith). Luke Skywalker's new lightsaber in Return of the Jedi was colored blue during the initial editing of the film, and appears so in both an early movie trailer and the official theatrical posters; it was later changed to green in the film's final edit after initial viewings and screen tests by the filmmakers, who felt that it would better stand out against the blue sky of Tatooine in outdoor scenes, and this color change is also reflected in the film's re-release posters. Mace Windu's purple-bladed lightsaber, as first seen in Attack of the Clones, was requested by the actor Samuel L. Jackson, who believed the color, which is his personal favorite, would make his character easily recognized among other Jedi.
Jackson is known to frequently request that the characters he plays use an item that is purple in color. The Clone Wars showed the guardians of the Jedi Temple wielding yellow-bladed lightsabers, and, at the end of The Rise of Skywalker, Rey is shown to have built a yellow-bladed lightsaber using part of her staff as the hilt. Depiction: As depicted in The Clone Wars and Rebels, the builder of a lightsaber finds a kyber crystal and meditates with it until the crystal acquires a color. The color of this crystal becomes the blade's color when installed into a lightsaber hilt. In the book Star Wars: Ahsoka and the comic series Darth Vader: Dark Lord of the Sith, it is shown that dark side users remove the crystal from a defeated Jedi's lightsaber and concentrate Force energy on it to break its connection to the light side, a process known as "bleeding", to create a red crystal. The process can also be reversed, as shown in Ahsoka, when the titular character does so to a pair of crystals taken from an Inquisitor. She uses them in the pair of white-bladed lightsabers she builds at the end of the novel. Depiction: The Darksaber is a unique lightsaber that has a distinct black blade with a white halo, introduced in Star Wars: The Clone Wars (2008) and subsequently appearing in Star Wars Rebels, where it is described as an ancient lightsaber created by Tarre Vizsla, the first Mandalorian to become a Jedi; it later serves as a symbol of Mandalorian authority in the hands of Bo-Katan Kryze. It subsequently appears briefly in the hands of Moff Gideon in the season one finale of The Mandalorian, to whom Kryze had previously surrendered the weapon. By the end of the second season's finale, it belongs to series protagonist Din Djarin, who has bested Gideon for it. In the second episode of the show's third season, the weapon nominally returns to Kryze's ownership after she defeats a cyborg creature that captures Djarin (though it remains in Din Djarin's possession), and she officially reclaims the blade in the season's sixth episode. It is destroyed by Gideon in the third season finale. Depiction: Other colors have appeared in various expanded media projects, including many video games where the player can select their character's lightsaber color. Depiction: Choreography The technical lightsaber choreography for the original Star Wars trilogy was developed by Hollywood sword-master Bob Anderson. Anderson personally trained Mark Hamill (Luke Skywalker) and, in The Empire Strikes Back and Return of the Jedi, performed all the stunts as Darth Vader during the lightsaber duels wearing Vader's costume. Anderson's role in the trilogy was highlighted in the film Reclaiming the Blade, where he shared his experiences as a fencer developing the lightsaber techniques for the three original movies. Depiction: The lightsaber duels in the Star Wars prequel trilogy were specifically choreographed by stunt-coordinator Nick Gillard to be miniature "stories". For these films, Gillard was the primary sword instructor for Liam Neeson (Qui-Gon Jinn), Ewan McGregor (Obi-Wan Kenobi), Ray Park (Darth Maul) and Hayden Christensen (Anakin Skywalker / Darth Vader), among other actors. His goal in choreographing the action for The Phantom Menace was to create stunts that flow from the story; "You can't just think, 'I'm a stunt coordinator, I'm going to make a big stunt happen'," Gillard said.
"It's all about making it tie in nicely with the film so that you don't notice the stunts."In writing the prequel trilogy, George Lucas said he wanted the lightsaber combat to be "reminiscent of what had been done in the previous films but also something that was more energized. We'd seen old men, young boys, and characters who were half-droid, but we'd never seen a Jedi in his prime. I wanted to do that with a fight that was faster and more dynamic—and we were able to pull that off."According to Gillard, various lightsaber combat styles were devised for the prequels and intended to further characterize their practitioners. Depiction: I developed different styles for the characters, and gave each of them a flaw or a bonus. So with Obi-Wan Kenobi, for instance, he's got a very business-like style—when he was younger he could border on the flashy and might twirl his lightsaber a bit, because he was taught by Qui-Gon. Qui-Gon was brash, that rubbed off on Obi-Wan and Obi-Wan then taught Anakin, who was way too old to learn anyway... I think the style really worked well. The Jedi style of fighting is an amalgamation of all the great swordfighting styles. Melding them together is the difficult part—to move from a Kendo style to, say, rapier requires a complete change in body and feet movement, and this must look effortless. The style moves seamlessly between the different disciplines, but remains technically correct throughout. Depiction: For The Phantom Menace, Gillard set out certain styles and faults for the saber-wielding characters. He added that the Jedi's use of such "a short-range weapon" meant "they would have to be very good at it"; combining a variety of disciplines from various sword fighting styles to martial arts "with a touch of tennis and tree chopping", he created the style seen in the Episode I lightsaber battles.For The Force Awakens, director J. J. Abrams decided to approach the choreography similarly to how it was done in the original trilogy. Abrams stated that the prequel trilogy choreography was "increasingly spectacular and stylized, almost like dance choreography", but that was not what they really wanted to go for in the new films. He told Empire magazine, "When you look at Star Wars and Empire, they are very different lightsaber battles, but for me they felt more powerful because they were not quite as slick. I was hoping to go for something much more primitive, aggressive and rougher, a throwback to the kind of heart-stopping lightsaber fights I remembered being so enthralled by as a kid." Cultural impact: Merchandise Since the release of the first film, replicas of lightsabers have been a popular piece of Star Wars merchandise, ranging from inexpensive plastic toys to the "Force FX" series from Master Replicas, deluxe replicas which use LED-lighted tubes and sound effects to create a close audio-visual representation of what is seen on screen. 
Cultural impact: Disney Parks Disneyland in California sells lightsaber-themed churros outside its Star Tours attraction. Disneyland and Disney World (Hollywood Studios) also sell legacy lightsabers, replicas of the lightsabers used by Jedi and Sith in the movies, such as those of Darth Vader, Obi-Wan Kenobi, Rey Skywalker, Count Dooku, and Kylo Ren. Disneyland and Hollywood Studios also offer Savi's Workshop, a place where guests can build their own lightsaber and choose their own kyber crystal, thereby changing the blade color of their own unique lightsaber. Besides Savi's Workshop, there is another custom lightsaber experience: the Star Trader at Disneyland offers guests a chance to build their own lightsabers without first paying 200 dollars for the experience. Cultural impact: Attractions The Jedi Training: Trials of the Temple is a live show where children are selected to learn the teachings of the Jedi Knights, the Force, and the basics of lightsaber combat to become Padawan learners. The show is present at the Rebels stage next to the Star Tours – The Adventures Continue attraction at Disney's Hollywood Studios and at the Tomorrowland Terrace at Disneyland. Cultural impact: Additionally, Star Wars: Galactic Starcruiser has incorporated lightsaber training for each guest aboard the spacecraft during their stay. The training is led by the Saja cast members and teaches guests the fourth form of lightsaber combat, Ataru. Both lightsabers and shields are used during training, since the guests learn to use Ataru to deflect incoming projectiles. There is no lightsaber-to-lightsaber combat in this attraction, and guests must be at least seven years old to participate. Cultural impact: Similar weapons The virtual reality rhythm game Beat Saber involves the player using two lightsabers in order to slash a series of oncoming squares. Cultural impact: Parodies In the 1987 film Spaceballs by Mel Brooks, "the Schwartz" is a play on "the Force" from Star Wars. The lightsabers emanating from the Schwartz-rings held in front of the crotch are phallic symbols. The cartoon series Futurama features many lightsaber-style weapons, notably expanding batons used by police. The batons glow and "whoosh" with a lightsaber's distinctive hum, but merely slap victims when used, as if they are plastic toys. In Jim Butcher's Dresden Files novel series, medical examiner and Star Wars fan Waldo Butters wields one of the three holy Swords of the Cross, which re-fashions itself into a lightsaber upon accepting him as its owner. In Yuya Sato's Danganronpa Togami light novel trilogy, main antagonist Orvin Elevator / Kazuya Togami wields a lightsaber built into their prosthetic arm, for which they are berated for copyright infringement by Genocider Syo / Genocide Jack; in the anime Danganronpa 3: The End of Hope's Peak High School, this same lightsaber is instead depicted as a flaming mechanical katana wielded by Kyosuke Munakata. Cultural impact: Games With the advent of motion-controlled video games, players were given the opportunity to wield an in-game lightsaber with their own hands. In the seventh generation of video game consoles, there were several Star Wars video games available on the Wii (Lego Star Wars: The Complete Saga, Star Wars: The Force Unleashed, Star Wars: The Clone Wars – Lightsaber Duels, Star Wars: The Clone Wars – Republic Heroes and Lego Star Wars III: The Clone Wars) and one on the Xbox 360 (Kinect Star Wars) that utilized motion controls to wield a lightsaber through arm gestures.
Unleashed and Duels, both developed by Krome Studios, have more precise control of the lightsaber, allowing players to swing it in any of five different directions (up, down, left, right or forward) with the Wii Remote, while Kinect takes advantage of the eponymous, camera-based motion controller to grant the player a more fluid, one-to-one control method of swinging the lightsaber. Cultural impact: Prior to the seventh generation, there were also a few earlier Star Wars games that used gesture-based control to simulate lightsaber combat, such as the two bonus levels of the arcade game Star Wars Trilogy, where the player controls Luke Skywalker as he wields his lightsaber against Boba Fett and Darth Vader in Return of the Jedi by pushing a joystick in one of eight directions to follow on-screen offensive and defensive cues, and a TV game released around the time Revenge of the Sith came to theaters, titled Star Wars: Saga Edition – Lightsaber Battle Game, in which the player swings a lightsaber-shaped controller to deflect blaster bolts from infantry (such as battle droids and clone troopers) and duel against characters from across the saga. Cultural impact: By the time Disney purchased Lucasfilm, new technological advances made augmented reality possible, leading to the creation of some more notable motion-controlled lightsaber video games that took advantage of that feature. One of them came in the form of a special activity mode in the official Star Wars fan app on iOS and Android in which players use their smartphone's motion sensors to practice and master blaster deflection with a training droid (which appears on the phone's rear camera), similar to the deflection training exercises featured aboard the Millennium Falcon in A New Hope, while progressing through the ranks of the Jedi or Sith order. Another is in Star Wars: Jedi Challenges, which works with a Lenovo Mirage AR headset, a tracking sensor and a dedicated lightsaber controller that launched in December 2017. One of the multiple game modes available in Challenges, which was jointly developed by Disney and Lenovo, enables players to confront Star Wars villains in lightsaber duels, such as Darth Maul and Kylo Ren.
**Columns (video game)** Columns (video game): Columns (Japanese: コラムス, Hepburn: Koramusu) is a match-three puzzle video game released by Sega in 1990. Designed by Jay Geertsen, it was released by Sega for arcades and then ported to several Sega consoles. The game was subsequently ported to home computer platforms, including the Atari ST. Gameplay: Columns was one of the many tile-matching puzzle games to appear after the great success of Tetris in the late 1980s. The area of play is enclosed within a tall, rectangular playing area. Columns of three different symbols (such as differently-colored jewels) appear, one at a time, at the top of the well and fall to the bottom, landing either on the floor or on top of previously-fallen columns. While a column is falling, the player can move it left and right, and can also cycle the positions of the symbols within it. After a column lands, if three or more of the same symbols are connected in a horizontal, vertical, or diagonal line, those symbols disappear (a code sketch of this clearing rule appears at the end of this article). The pile of columns then settles under gravity. If this resettlement causes three or more other symbols to align, they too disappear and the cycle repeats. Occasionally, a special column with a multicolor Magic Jewel appears; it destroys all the jewels with the same color as the one underneath it. The columns fall at a faster rate as the player progresses. The goal of the game is to play for as long as possible before the well fills up with jewels, which ends the game. Players can score up to 99,999,999 points. Some ports of the game offer alternate game modes as well. "Flash columns" has the player mine their way through a set number of lines to reach a flashing jewel at the bottom. "Doubles" lets two players work together in the same well. "Time trial" involves racking up as many points as possible within the time limit. Ports: Sega ported the arcade game to the Mega Drive/Genesis console. This version of the game was nearly identical to the original arcade game. Columns was the first pack-in game for the Game Gear. This version was slightly different from the Mega Drive/Genesis version, and its soundtrack was transposed and rearranged due to the limitations of the handheld's sound chip. While the columns themselves were updated for the Mega Drive/Genesis version, the overall decoration in the Game Gear version was less cartoon-like and more artistically designed. Lastly, the Game Gear version had a feature that let the player change the jewels to fruit, squares, dice, or playing card suits (clubs, diamonds, spades, and hearts). Ports: In 1990, Compile and Telenet Japan developed and published an MSX2 version. Ports: On November 7, 2006, Columns was released as part of the game Sega Genesis Collection for the PlayStation 2, and later on another release of the above compilation for PlayStation Portable. On December 4, 2006 the title was released on Nintendo's Virtual Console for 800 Wii Points. It is also included on Sonic's Ultimate Genesis Collection for the PlayStation 3 and Xbox 360. It was included as one of the games in the Sega Genesis Mini. It was also included as one of the games in the 2018 releases of Sega Genesis Classics for Windows, Linux, macOS, PlayStation 4, Xbox One, and Nintendo Switch. Most recently the game was ported to iOS by Sega, but the port was subsequently withdrawn by Sega. On December 15, 2022, the game was re-released on the Nintendo Switch Online + Expansion Pack. Music: Tokuhiko Uwabo composed the music for Columns.
The songs "Clotho", "Atropos" and "Lathesis" (sic) are named after the Moirai from Greek mythology, related to the Greek flavor of some of the game's art. Reception: In Japan, Game Machine listed Columns on their April 15, 1990 issue as being the eighth most-successful table arcade unit of the month. It went on to be Japan's fourth highest-grossing arcade game of 1990 (below Capcom's Final Fight and Sega's Tetris and Super Monaco GP) and third highest-grossing arcade conversion kit of 1991 (below Capcom's Street Fighter II and Sega's Tetris).Reviewing the game's appearance in Sega Arcade Classics for the Sega CD, Glenn Rubenstein gave it a B+ rating in Wizard magazine, describing it as "like Tetris but a bit better." Mega placed the game at number 34 in their "Top Mega Drive Games of All Time". In 2017, Gamesradar ranked the game 40th on its "Best Sega Genesis/Mega Drive games of all time." Legacy: Many sequels and spin-offs were produced: Columns II: The Voyage Through Time, Columns III: Revenge of Columns, Columns '97, Sakura Taisen: Hanagumi Taisen Columns 1 & 2, and many compilations and re-releases (Columns Arcade Collection, Sega Ages Vol. 07: Columns) as well. Because Columns was made by Sega, versions were made available on the Master System, Mega Drive/Genesis, Sega CD, Game Gear, Saturn, and Dreamcast. Additional versions of the game have also been made available on PC-Engine, Game Boy Advance, and PlayStation 2. A Super Famicom version was released in Japan via the Nintendo Power service. The Game Boy Color version was specifically called Columns GB: Osamu Tezuka Characters, where it featured many of his characters such as Kimba and Astroboy, but also featured slightly less known characters such as Unico.Columns has also been cloned many times across different platforms:
**Augmented reality-assisted surgery** Augmented reality-assisted surgery: Augmented reality-assisted surgery (ARAS) is a surgical tool utilizing technology that superimposes a computer-generated image on a surgeon's view of the operative field, thus providing the surgeon with a composite view of the patient in which a computer-generated overlay enhances the operative experience. It can be used for training, preparation for an operation, or performance of an operation. ARAS can be performed using a wide array of technology, including an optical head-mounted display (OHMD)—such as the Google Glass XE 22.1 or Vuzix STAR 1200 XL—and a digital overlay from robotic and laparoscopic surgery feeds. The technique has primarily been tested in the urological and cardiovascular domains. Specialized uses: A subset of ARAS called augmented reality-assisted urologic surgery (ARAUS) specifically aids with urological surgery. This intraoperative training tool was first described and utilized by Tariq S. Hakky, Ryan M. Dickey, and Larry I. Lipshultz within the Scott Department of Urology, Baylor College of Medicine, and Daniel R. Martinez, Rafael E. Carrion, and Philippe E. Spiess within the Sexual Medicine Program in the Department of Urology at the University of South Florida. It was initially used to teach medical residents how to place a penile implant from start to finish via an application downloaded onto the OHMD. Intraoperatively, an optical display camera output feed combined with software allowing for the detection of points of interest enabled faculty to interact with residents during the placement of the penile implant. Both faculty and residents demonstrated a high degree of satisfaction with the ARAUS experience, and it was shown to be an effective tool in training urological surgical technique. Advantages of ARAUS include real-time feedback to residents during surgery and superior visibility and interaction between faculty and residents. ARAS has also been applied to the cardiovascular realm. Terry Peters of the University of Western Ontario in London, Canada, has teamed up with other researchers at the Robarts Research Institute to implement ARAS towards the goal of improving repairs to the heart's mitral valve and replacement of the aortic valve. In an interview for the Medical Augmented Reality Blog, Peters stated that his research team could not only use ARAS to "[improve] the speed and safety of the cardiac valve repair procedure"; they also conducted "the evaluation of an AR environment to plan brain-tumor removal, and the development of an AR-enhanced system for ultrasound-guided spinal injections." HoloSurgical Inc has developed the clinically tested ARAI™ surgical navigation system that provides real-time patient-specific 3D anatomical visualization for presurgical planning, intraoperative guidance, and postsurgical data analytics. The augmented reality component of the system allows the surgeon to focus their attention on the patient's internal anatomy without actually exposing it. On January 10, 2019, HoloSurgical Inc completed the first spine surgery in the world using an augmented reality, artificial intelligence-based navigation system. The system was developed by AI pioneer Paul Lewicki, PhD, surgeon Kris Siemionow, MD, PhD, and engineer Cristian Luciano, PhD.
**Spirangle** Spirangle: In geometry, a spirangle is a spiral polygonal chain. Spirangles are similar to spirals in that they expand from a center point as they grow larger, but they are made of straight line segments instead of curves. Spirangle vectographs are used in vision therapy to promote stereopsis and help resolve problems with hand–eye coordination. Two-dimensional spirangles: A two-dimensional spirangle is an open figure consisting of a line bent into angles similar to those of a corresponding polygon. The spirangle can start at the center point or at a distance from it, and makes some number of turns around the center. Three-dimensional spirangles: Three-dimensional spirangles have layers that slant upward, each segment progressively gaining height over the previous one. This is similar to staircases in large buildings that turn at the top of each flight. The segments may also progressively shorten, so that the figure resembles a pyramid. Uses: Ophthalmology — vectograms Electronics — printed inductors Architecture — ‘spiral’ staircases Jewelry — earrings, pendants Search algorithms — optimal scanning of a region of interest, for example a crime scene or a region of the celestial sphere
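Since a two-dimensional spirangle is a polygonal chain that turns by a fixed angle while its segments grow, its vertices are straightforward to generate. The following Python sketch does so under illustrative assumptions (a constant length increment per segment and a start at the center); the parameter names are invented for the example.

```python
import math

def spirangle_points(sides=4, turns=3, step=1.0):
    """Vertices of a two-dimensional spirangle: at each corner the
    chain turns by the exterior angle of a regular `sides`-gon, and
    each successive segment is `step` longer than the last, so the
    figure expands outward from the center."""
    points = [(0.0, 0.0)]
    x = y = 0.0
    heading = 0.0                    # current direction, in radians
    exterior = 2 * math.pi / sides   # turn taken at every corner
    length = step
    for _ in range(sides * turns):
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
        heading += exterior
        length += step
    return points

# Example: a square spirangle making three turns around the center
for px, py in spirangle_points(sides=4, turns=3):
    print(f"({px:.1f}, {py:.1f})")
```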
**Site plan** Site plan: A site plan or a plot plan is a type of drawing used by architects, landscape architects, urban planners, and engineers which shows existing and proposed conditions for a given area, typically a parcel of land which is to be modified. Site plans typically show buildings, roads, sidewalks and paths/trails, parking, drainage facilities, sanitary sewer lines, water lines, lighting, and landscaping and garden elements. Such a plan of a site is a "graphic representation of the arrangement of buildings, parking, drives, landscaping and any other structure that is part of a development project". A site plan is a "set of construction drawings that a builder or contractor uses to make improvements to a property. Counties can use the site plan to verify that development codes are being met and as a historical resource. Site plans are often prepared by a design consultant who must be either a licensed engineer, architect, landscape architect or land surveyor". Site plans include site analysis, building elements, and planning of various types, including transportation and urban planning. An example of a site plan is the plan for Indianapolis by Alexander Ralston in 1821. Site plan: The specific objects and relations shown depend on the purpose for creating the plot plan, but typically include: retained and proposed buildings, landscape elements, above-ground features and obstructions, major infrastructure routes, and critical legal considerations such as property boundaries, setbacks, and rights of way… Site plan topics: Site analysis Site analysis is an inventory completed as a preparatory step to site planning, a form of urban planning which involves research, analysis, and synthesis. It primarily deals with basic data as it relates to a specific site. The topic itself branches into the boundaries of architecture, landscape architecture, engineering, economics, and urban planning. Site analysis is an element in site planning and design. Kevin A. Lynch, an urban planner, developed an eight-step cycle of site design, in which the second step is site analysis, the focus of this section. Site plan topics: When analyzing a potential site for development, the status quo of the site should be analyzed and mapped. This includes but is not limited to: The location of the plot Topography, including information about slope, soils, hydrology, vegetation, orientation Existing buildings Roads and traffic Public facilities and utilities, including water, sewer, and power lines Related laws, regulations, codes, and policies. By determining areas that are poor for development (such as floodplains or steep slopes) and better for development, the planner or architect can determine the optimal location for different functions or structures and create a design that works within the space. Site plan topics: Site plan building blocks A site plan is a top-down, bird's-eye view of a property that is drawn to scale. A site plan can show: property lines outline of existing and proposed buildings and structures distance between buildings distance between buildings and property lines (setbacks) parking lots, indicating parking spaces driveways surrounding streets landscaped areas easements ground sign location utilities Site planning Site planning in landscape architecture and architecture refers to the organizational stage of the landscape design process. It involves the organization of land use zoning, access, circulation, privacy, security, shelter, land drainage, and other factors.
Site planning includes the arrangement of buildings, roadways, utilities, landscape elements, topography, water features, and vegetation to achieve the desired site. In urban planning, site planning is done by city planners to develop a clear plan/design of what they want for a community. For example, in a participatory planning process, community members would identify renovations and improvements needed in their community. The community developers then devise a way to meet those demands, which is documented in a site plan. With a limited budget, planners have to be smart and creative about their designs. Planners must take into consideration not only heights of buildings, traffic flows, open spaces, and parking for cars/bikes, but also the project's potential impact on the stakeholders involved. This process of creating a site plan is referred to as site planning. Site plan topics: Transportation planning Transportation planning is the field involved with the siting of transportation facilities (generally streets, highways, sidewalks, bike lanes and public transport lines). Transportation planning historically has followed the rational planning model of defining goals and objectives, identifying problems, generating alternatives, evaluating alternatives, and developing the plan. Other models for planning include rational actor, satisficing, incremental planning, organizational process, and political bargaining. However, planners are increasingly expected to adopt a multi-disciplinary approach, especially due to the rising importance of environmentalism. One example is the use of behavioral psychology to persuade drivers to abandon their automobiles and use public transport instead. The role of the transport planner is shifting from technical analysis to promoting sustainability through integrated transport policies. Site plan topics: Urban planning Urban, city, and town planning explores a very wide range of aspects of the built and social environments of places. Regional planning deals with a still larger environment, at a less detailed level. Based upon the origins of urban planning in the Roman (pre-Dark Ages) era, the current discipline revisits the synergy of the disciplines of urban planning, architecture and landscape architecture.
**Online shopping** Online shopping: Online shopping is a form of electronic commerce which allows consumers to directly buy goods or services from a seller over the Internet using a web browser or a mobile app. Consumers find a product of interest by visiting the website of the retailer directly or by searching among alternative vendors using a shopping search engine, which displays the same product's availability and pricing at different e-retailers. As of 2020, customers can shop online using a range of different computers and devices, including desktop computers, laptops, tablet computers and smartphones. Online shopping: An online shop evokes the physical analogy of buying products or services at a regular "brick-and-mortar" retailer or shopping center; the process is called business-to-consumer (B2C) online shopping. When an online store is set up to enable businesses to buy from other businesses, the process is called business-to-business (B2B) online shopping. A typical online store enables the customer to browse the firm's range of products and services, view photos or images of the products, along with information about the product specifications, features and prices. Online shopping: Online stores usually enable shoppers to use "search" features to find specific models, brands or items. Online customers must have access to the Internet and a valid method of payment in order to complete a transaction, such as a credit card, an Interac-enabled debit card, or a service such as PayPal. For physical products (e.g., paperback books or clothes), the e-tailer ships the products to the customer; for digital products, such as digital audio files of songs or software, the e-tailer usually sends the file to the customer over the Internet. The largest of these online retailing corporations are Alibaba, Amazon.com, and eBay. Terminology: Alternative names for the activity are "e-tailing", a shortened form of "electronic retail", or "e-shopping", a shortened form of "electronic shopping". An online store may also be called an e-web-store, e-shop, e-store, Internet shop, web-shop, web-store, online store, online storefront and virtual store. Mobile commerce (or m-commerce) describes purchasing from an online retailer's mobile device-optimized website or software application ("app"). These websites or apps are designed to enable customers to browse through a company's products and services on tablet computers and smartphones. History: History of online shopping One of the earliest forms of trade conducted online was IBM's online transaction processing (OLTP) developed in the 1960s, which allowed the processing of financial transactions in real-time. The computerized ticket reservation system developed for American Airlines called Semi-Automatic Business Research Environment (SABRE) was one of its applications. There, computer terminals located in different travel agencies were linked to a large IBM mainframe computer, which processed transactions simultaneously and coordinated them so that all travel agents had access to the same information at the same time. At some point between 1971 and 1972, students at Stanford and MIT used the internet precursor ARPANET to make a deal to exchange marijuana, but the interaction does not qualify as e-commerce because no money was transferred online. The emergence of online shopping as it is known today developed with the emergence of the Internet.
Initially, this platform only functioned as an advertising tool for companies, providing information about their products. It quickly moved on from this simple utility to actual online shopping transactions due to the development of interactive Web pages and secure transmissions. Specifically, the growth of the Internet as a secure shopping channel has developed since 1994, with the first sales of Sting's album Ten Summoner's Tales. Wine, chocolates, and flowers soon followed and were among the pioneering retail categories which fueled the growth of online shopping. Researchers found that having products that are appropriate for e-commerce was a key indicator of Internet success. Many of these products did well as they are generic products which shoppers did not need to touch and feel in order to buy. But also importantly, in the early days, there were few shoppers online and they were from a narrow segment: affluent, male, 30+. Online shopping has come a long way since those early days and – in the UK – accounts for a significant percentage of sales (depending on product category, as percentages can vary). History: Growth in online shoppers As the revenues from online sales continued to grow significantly, researchers identified different types of online shoppers; Rohm & Swaminathan identified four categories and named them "convenience shoppers, variety seekers, balanced buyers, and store-oriented shoppers". They focused on shopping motivations and found that the variety of products available and the perceived convenience of the buying online experience were significant motivating factors. This was different for offline shoppers, who were more motivated by time saving and recreational motives. History: English entrepreneur Michael Aldrich was a pioneer of online shopping in 1979. His system connected a modified domestic TV to a real-time transaction processing computer via a domestic telephone line. He believed that videotex, the modified domestic TV technology with a simple menu-driven human–computer interface, was a 'new, universally applicable, participative communication medium — the first since the invention of the telephone.' This enabled 'closed' corporate information systems to be opened to 'outside' correspondents not just for transaction processing but also for e-messaging and information retrieval and dissemination, later known as e-business. His definition of the new mass communications medium as 'participative' [interactive, many-to-many] was fundamentally different from the traditional definitions of mass communication and mass media and a precursor to the social networking on the Internet 25 years later. In March 1980, he launched Redifon's Office Revolution, which allowed consumers, customers, agents, distributors, suppliers and service companies to be connected online to the corporate systems and allowed business transactions to be completed electronically in real-time. During the 1980s he designed, manufactured, sold, installed, maintained and supported many online shopping systems, using videotex technology. These systems, which also provided voice response and handprint processing, pre-date the Internet and the World Wide Web, the IBM PC, and Microsoft MS-DOS, and were installed mainly in the UK by large corporations. History: The first World Wide Web server and browser, created by Tim Berners-Lee in 1989, opened for commercial use in 1991.
Thereafter, subsequent technological innovations emerged in 1994: online banking, the opening of an online pizza shop by Pizza Hut, Netscape's SSL v2 encryption standard for secure data transfer, and Intershop's first online shopping system. The first secure retail transaction over the Web was either by NetMarket or Internet Shopping Network in 1994. Immediately after, Amazon.com launched its online shopping site in 1995, and eBay was also introduced in 1995. Alibaba's sites Taobao and Tmall were launched in 2003 and 2008, respectively. Retailers are increasingly selling goods and services prior to availability through "pretail" for testing, building, and managing demand. International statistics: Statistics show that in 2012, Asia-Pacific increased their international sales over 30%, giving them over $433 billion in revenue. That is roughly $69 billion more than the U.S. revenue of $364.66 billion. It is estimated that Asia-Pacific will increase by another 30% in the year 2013, putting them ahead with more than one-third of all global e-commerce sales. The largest online shopping day in the world is Singles Day, with sales on Alibaba's sites alone reaching US$9.3 billion in 2014. Customers: Online customers must have access to the Internet and a valid method of payment in order to complete a transaction. Generally, higher levels of education and personal income correspond to more favorable perceptions of shopping online. Increased exposure to technology also increases the probability of developing favorable attitudes towards new shopping channels. In addition, age is also a significant factor that affects online shopping. People feel that privacy and security factors have an even more significant impact on attitudes toward online shopping than product factors. Shoppers of different age groups have different perceptions of the risk factors of online shopping. Customer buying behaviour in digital environment: In the digital environment, customers' buying behaviour may not be fully influenced or controlled by the brand and firm; buying decisions may instead be shaped by interactions with search engines, recommendations, online reviews and other information. In modern shopping environments, people are more likely to use their mobile phones, computers, tablets and other digital devices to gather information. In an online shopping environment, interactive features may aid customer decision making, through online product reviews and user-generated content, typically provided through software from companies like Bazaarvoice and Trustpilot, or via social media. This content, which can include text or video-based reviews, customer photos, and feedback, is often displayed alongside products being sold on websites like Amazon, Target, and most other digital storefronts. Customer buying behaviour in digital environment: Risk and trust are also two important factors affecting people's behavior in digital environments. Customers consider switching between e-channels mainly because of comparisons with offline shopping, involving security, financial and performance risks. In other words, a customer shopping online may face more risk than a customer shopping in a store. Three factors may influence the buying decision. First, customers cannot examine whether a product satisfies their needs and wants before they receive it. Second, customers may be concerned about after-sale services.
Finally, customers may fear that they cannot fully understand the language used in e-sales. Based on these factors, perceived risk can significantly influence online purchasing behaviour. Online retailers place much emphasis on customer trust: trust is another driver of customer behaviour in the digital environment, and it can depend on customers' attitudes and expectations. Indeed, a company's product design or ideas may not meet customers' expectations. Customers' purchase intentions are based on rational expectations and are additionally shaped by emotional trust. Moreover, those expectations can also be established from product information and reviews from others. In several studies, perceived value, shopping style, and brand trust are the main factors that affect online consumers' decisions. Perceived value means that people can compare products and prices online, giving them the sense of getting more benefits online than in an offline store. The comfortable environment that online shopping brings to customers can increase consumers' perceived value. In the end, e-commerce behavior is still mostly influenced by families that are receptive to new technologies, and to a lesser extent by efficiency concerns. [1] Product selection: Consumers find a product of interest by visiting the website of the retailer directly or by searching among alternative vendors using a shopping search engine. Users can compare and evaluate products using product information on the website, as well as on other websites such as websites about product tests. Product selection: Once a particular product has been found and selected on the website of the seller, most online retailers use shopping cart software to allow the consumer to accumulate multiple items and to adjust quantities, like filling a physical shopping cart or basket in a conventional store. A "checkout" process follows (continuing the physical-store analogy) in which payment and delivery information is collected, if necessary. Some stores allow consumers to sign up for a permanent online account so that some or all of this information only needs to be entered once. The consumer often receives an e-mail confirmation once the transaction is complete. Less sophisticated stores may rely on consumers to phone or e-mail their orders (although full credit card numbers, expiry date, and Card Security Code, or bank account and routing number, should not be accepted by e-mail, for reasons of security). Product selection: Impact of reviews on consumer behavior One of the great benefits of online shopping is the ability to read product reviews, written either by experts or fellow online shoppers. The Nielsen Company conducted a survey in March 2010 and polled more than 27,000 Internet users in 55 markets from the Asia-Pacific, Europe, Middle East, North America, and South America to look at questions such as "How do consumers shop online?", "What do they intend to buy?", "How do they use various online shopping web pages?", and the impact of social media and other factors that come into play when consumers are trying to decide how to spend their money on which product or service. According to the research, reviews on electronics (57%) such as DVD players, cellphones, or PlayStations, reviews on cars (45%), and reviews on software (37%) play an important role in influencing consumers who tend to make purchases online.
Furthermore, 40% of online shoppers indicate that they would not even buy electronics without consulting online reviews first. Product selection: In addition to online reviews, peer recommendations on online shopping pages or social media websites play a key role for online shoppers when they are researching future purchases. 90% of all purchases made are influenced by social media. Payment: Online shoppers commonly use a credit card or a PayPal account in order to make payments. However, some systems enable users to create accounts and pay by alternative means, such as: Billing to mobile phones and landlines Bitcoin or other cryptocurrencies Cash on delivery (C.O.D.) Cheque/Check Debit card Direct debit in some countries Electronic money of various types Gift cards Invoice, especially popular in some markets/countries, such as Switzerland Postal money order Wire transfer/delivery on payment. Some online shops will not accept international credit cards. Some require both the purchaser's billing and shipping address to be in the same country as the online shop's base of operation. Other online shops allow customers from any country to send gifts anywhere. The financial part of a transaction may be processed in real time (e.g. letting the consumer know their credit card was declined before they log off), or may be done later as part of the fulfillment process. Product delivery: Once a payment has been accepted, the goods or services can be delivered in the following ways. For physical items: Package delivery: The product is shipped to a customer-designated address. Retail package delivery is typically done by the public postal system or a retail courier such as FedEx, UPS, DHL, or TNT. Drop shipping: The order is passed to the manufacturer or third-party distributor, who then ships the item directly to the consumer, bypassing the retailer's physical location to save time, money, and space. In-store pick-up: The customer selects a local store using locator software and picks up the delivered product at the selected location. This is the method often used in the bricks and clicks business model. For digital items or tickets: Downloading/Digital distribution: The method often used for digital media products such as software, music, movies, or images. Product delivery: Printing out, provision of a code for, or e-mailing of such items as admission tickets and scrip (e.g., gift certificates and coupons). The tickets, codes, or coupons may be redeemed at the appropriate physical or online premises and their content reviewed to verify their eligibility (e.g., assurances that the right of admission or use is redeemed at the correct time and place, for the correct dollar amount, and for the correct number of uses). Product delivery: Will call, COBO (in Care Of Box Office), or "at the door" pickup: The patron picks up pre-purchased tickets for an event, such as a play, sporting event, or concert, either just before the event or in advance. With the onset of the Internet and e-commerce sites, which allow customers to buy tickets online, the popularity of this service has increased. Shopping cart systems: Simple shopping cart systems allow the off-line administration of products and categories. The shop is then generated as HTML files and graphics that can be uploaded to a webspace. The systems do not use an online database. A high-end solution can be bought or rented as a stand-alone program or as an addition to an enterprise resource planning program.
It is usually installed on the company's web server and may integrate into the existing supply chain so that ordering, payment, delivery, accounting and warehousing can be automated to a large extent. Other solutions allow the user to register and create an online shop on a portal that hosts multiple shops simultaneously from one back office. Examples are BigCommerce, Shopify and FlickRocket. Open source shopping cart packages include advanced platforms such as Interchange, and off-the-shelf solutions such as Magento, osCommerce, WooCommerce, PrestaShop, and Zen Cart. Commercial systems can also be tailored so the shop does not have to be created from scratch. By using an existing framework, software modules for various functionalities required by a web shop can be adapted and combined. Design: Customers are attracted to online shopping not only because of high levels of convenience, but also because of broader selections, competitive pricing, and greater access to information. Business organizations seek to offer online shopping not only because it is of much lower cost compared to bricks and mortar stores, but also because it offers access to a worldwide market, increases customer value, and builds sustainable capabilities. Design: Information load Designers of online shops are concerned with the effects of information load. Information load is a product of the spatial and temporal arrangements of stimuli in the web store. Compared with conventional retail shopping, the information environment of virtual shopping is enhanced by providing additional product information such as comparative products and services, as well as various alternatives and attributes of each alternative, etc. Two major dimensions of information load are complexity and novelty. Complexity refers to the number of different elements or features of a site, often the result of increased information diversity. Novelty involves the unexpected, suppressed, new, or unfamiliar aspects of the site. The novelty dimension may keep consumers exploring a shopping site, whereas the complexity dimension may induce impulse purchases. Design: Consumer needs and expectations Internet consumers are self-conscious and emphasize personalized consumption, which makes the demand for online consumption different. Online consumers have different needs depending on their time and environment. Even different online consumers have different needs at the same level of demand due to the difference in income level and other factors. Compared with the centralized nature of traditional markets, online consumption is more decentralized. In the online consumer market, consumers have a short decision time, a large variability of consumer demand, a large number of purchases but a relatively small amount of each purchase, a considerable mobility of purchases, a strong substitutability of goods, and a large elasticity of demand. According to a research report by Western Michigan University published in 2005, an e-commerce website does not have to look good or be listed on many search engines; it must build relationships with customers to make money. The report also suggests that a website must leave a positive impression on customers, giving them a reason to come back. However, recent research has shown that sites with a higher focus on efficiency, convenience, and personalised services increase customers' motivation to make purchases.
Design: Dyn, an Internet performance management company, conducted a survey of more than 1,400 consumers across 11 countries in North America, Europe, the Middle East and Asia, and the results of the survey are as follows: Online retailers must improve website speed Online retailers must ease consumers' fears around security. These concerns significantly affect the decisions of almost two-thirds of the consumers. Design: User interface The most important factors determining whether customers return to a website are ease of use and the presence of user-friendly features. Usability testing is important for finding problems and improvements in a web site. Methods for evaluating usability include heuristic evaluation, cognitive walkthrough, and user testing. Each technique has its own characteristics and emphasizes different aspects of the user experience. Market share: The popularity of online shopping continues to erode sales of conventional retailers. For example, Best Buy, the largest electronics retailer in the U.S., reported in August 2014 its tenth consecutive quarterly dip in sales, citing an increasing shift by consumers to online shopping. Amazon.com has the largest market share in the United States. As of May 2018, a survey found two-thirds of Americans had bought something from Amazon (92% of those who had bought anything online), with 40% of online shoppers buying something from Amazon at least once a month. The survey found shopping began at amazon.com 44% of the time, compared to a general search engine at 33%. It estimated 75 million Americans subscribe to Amazon Prime and 35 million more use someone else's account. There were 242 million people shopping online in China in 2012. For developing countries and low-income households in developed countries, adoption of e-commerce in place of or in addition to conventional methods is limited by a lack of affordable Internet access. Advantages: Convenience Online stores are usually available 24 hours a day, and many consumers in Western countries have Internet access both at work and at home. Other establishments such as Internet cafes, community centers and schools provide internet access as well. In contrast, visiting a conventional retail store requires travel or commuting and costs such as gas, parking, or bus tickets, and must usually take place during business hours. Delivery has always been a problem affecting the convenience of online shopping. The online shopping industry has not only provided convenience for customers but also improved perceptions of social inclusion. To overcome the delivery problem, many retailers, including online retailers in Taiwan, introduced a store pick-up service. This meant that customers could purchase goods online and pick them up at a nearby convenience store, making online shopping more advantageous to customers. In the event of a problem with the item (e.g., the product was not what the consumer ordered or the product was not satisfactory), consumers are concerned with the ease of returning an item in exchange for the correct product or a refund. Consumers may need to contact the retailer, visit the post office and pay return shipping, and then wait for a replacement or refund. Some online companies have more generous return policies to compensate for the traditional advantage of physical stores.
For example, the online shoe retailer Zappos.com includes labels for free return shipping and does not charge a restocking fee, even for returns which are not the result of merchant error. (Note: In the United Kingdom, online shops are prohibited from charging a restocking fee if the consumer cancels their order in accordance with the Consumer Protection (Distance Selling) Act 2000). A 2018 survey in the United States found 26% of online shoppers said they never return items, and another 65% said they rarely do so. Merchants may benefit from online shopping due to low sales inventory pressure, low operating costs, and a scale of operation that is not limited by the site. Advantages: Delivery Especially in cases of large or heavy products, delivery can be not only more convenient but can also remove the need to have or use a car. Not using or depending on personal vehicles, which can have substantial impact on the environment, to travel to local stores can make online shopping more sustainable than buying in local stores if such vehicles would otherwise be used (especially if items are bundled and delivery vehicles are electric and use optimized routes). Moreover, the pace of urbanization, local delivery systems, and internet connectivity which facilitate the delivery process are the major determinants of e-commerce adoption. Information and reviews Online shopping is usually richer in information than shopping at physical stores and usually offers higher comparability and customizability. Online stores must describe products for sale with text, photos, and multimedia files, and sometimes have features such as question and answers or filters, whereas in a physical retail store, the actual product and the manufacturer's packaging will be available for direct inspection (which might involve a test drive, fitting, or other experimentation). Some online stores provide or link to supplemental product information, such as instructions, safety procedures, demonstrations, or manufacturer specifications. Some provide background information, advice, or how-to guides designed to help consumers decide which product to buy. Some stores even allow customers to comment on or rate their items. There are also dedicated review sites that host user reviews for different products. Reviews and even some blogs give customers the option of shopping for cheaper purchases from all over the world without having to depend on local retailers. In a conventional retail store, clerks are generally available to answer questions. Some online stores have real-time chat features, but most rely on e-mails or phone calls to handle customer questions. Even if an online store is open 24 hours a day, seven days a week, the customer service team may only be available during regular business hours. It also implies that geographical factors, rather than socioeconomic issues, must be addressed in order to improve online shopping acceptance.[2] Price and selection One advantage of shopping online is being able to quickly seek out deals for items or services provided by many different vendors (though some local search engines do exist to help consumers locate products for sale in nearby stores). Search engines, online price comparison services and discovery shopping engines can be used to look up sellers of a particular product or service. Shipping costs (if applicable) reduce the price advantage of online merchandise, though depending on the jurisdiction, a lack of sales tax may compensate for this.
Shipping a small number of items, especially from another country, is much more expensive than making the larger shipments bricks-and-mortar retailers order. Some retailers (especially those selling small, high-value items like electronics) offer free shipping on sufficiently large orders. Another major advantage for retailers is the ability to rapidly switch suppliers and vendors without disrupting users' shopping experience. Disadvantages: Fraud and security concerns Given the lack of ability to inspect merchandise before purchase, consumers are at higher risk of fraud than in face-to-face transactions. When ordering merchandise online, the item may not work properly, it may have defects, or it might not be the same item pictured in the online photo. Merchants also risk fraudulent purchases if customers are using stolen credit cards or fraudulent repudiation of the online purchase. However, merchants face less risk from physical theft by using a warehouse instead of a retail storefront. Secure Sockets Layer (SSL) encryption has generally solved the problem of credit card numbers being intercepted in transit between the consumer and the merchant. However, one must still trust the merchant (and employees) not to use the credit card information subsequently for their own purchases, and not to pass the information to others. Also, hackers might break into a merchant's web site and steal names, addresses and credit card numbers, although the Payment Card Industry Data Security Standard is intended to minimize the impact of such breaches. Identity theft is still a concern for consumers. A number of high-profile break-ins in the 2000s have prompted some U.S. states to require disclosure to consumers when this happens. Computer security has thus become a major concern for merchants and e-commerce service providers, who deploy countermeasures such as firewalls and anti-virus software to protect their networks. Phishing is another danger, where consumers are fooled into thinking they are dealing with a reputable retailer, when they have actually been manipulated into feeding private information to a system operated by a malicious party. Denial of service attacks are a minor risk for merchants, as are server and network outages. Disadvantages: Quality seals can be placed on the shop's web page if it has undergone an independent assessment and meets all requirements of the company issuing the seal. The purpose of these seals is to increase the confidence of online shoppers. However, the existence of many different seals, or seals unfamiliar to consumers, may foil this effort to a certain extent. Disadvantages: A number of resources offer advice on how consumers can protect themselves when using online retailer services. These include: Sticking with well-known stores, or attempting to find independent consumer reviews of their experiences; also ensuring that there is comprehensive contact information on the website before using the service, and noting if the retailer has enrolled in industry oversight programs such as a trust mark or a trust seal. Disadvantages: Before buying from a new company, evaluating the website by considering issues such as: the professionalism and user-friendliness of the site; whether or not the company lists a telephone number and/or street address along with e-contact information; whether a fair and reasonable refund and return policy is clearly stated; and whether there are hidden price inflators, such as excessive shipping and handling charges.
Disadvantages: Ensuring that the retailer has an acceptable privacy policy posted. For example, note if the retailer does not explicitly state that it will not share private information with others without consent. Ensuring that the vendor address is protected with SSL (see above) when entering credit card information. If it is, the address on the credit card information entry screen will start with "https". Disadvantages: Using strong passwords which do not contain personal information such as the user's name or birthdate. Another option is a "pass phrase," which might be something along the lines of: "I shop 4 good a buy!!" These are difficult to crack, since they do not consist of words found in a dictionary and provide a variety of upper-case, lower-case, and special characters. These passwords can be site-specific and may be easy to remember. Although the benefits of online shopping are considerable, when the process goes poorly it can create a thorny situation. A few problems that shoppers potentially face include identity theft, faulty products, and the accumulation of spyware. If users are required to put in their credit card information and billing/shipping address and the website is not secure, customer information can be accessible to anyone who knows how to obtain it. Most large online corporations are inventing new ways to make fraud more difficult. However, criminals are constantly responding to these developments with new ways to manipulate the system. Even though online retailers are making efforts to protect consumer information, it is a constant fight to maintain the lead. It is advisable to be aware of the most current technology and scams to protect consumer identity and finances. Product delivery is also a main concern of online shopping. Most companies offer shipping insurance in case the product is lost or damaged. Some shipping companies will offer refunds or compensation for the damage, but this is at their discretion. Disadvantages: Lack of full cost disclosure The lack of full cost disclosure may also be problematic. While it may be easy to compare the base price of an item online, it may not be easy to see the total cost up front. Additional fees such as shipping are often not visible until the final step in the checkout process. The problem is especially evident with cross-border purchases, where the cost indicated at the final checkout screen may not include additional fees that must be paid upon delivery such as duties and brokerage. Some services, such as the Canadian-based Wishabi, attempt to include estimates of these additional costs, but nevertheless the lack of general full cost disclosure remains a concern. Disadvantages: Privacy Privacy of personal information is a significant issue for some consumers. Many consumers wish to avoid spam and telemarketing which could result from supplying contact information to an online merchant. In response, many merchants promise not to use consumer information for these purposes. Many websites keep track of consumer shopping habits in order to suggest items and other websites to view. Brick-and-mortar stores also collect consumer information. Some ask for a shopper's address and phone number at checkout, though consumers may refuse to provide it. Many larger stores use the address information encoded on consumers' credit cards (often without their knowledge) to add them to a catalog mailing list.
This information is obviously not accessible to the merchant when paying in cash or through a bank (money transfer, in which case there is also proof of payment). Product suitability: Many successful purely virtual companies deal with digital products (including information storage, retrieval, and modification), music, movies, office supplies, education, communication, software, photography, and financial transactions. Other successful marketers use drop shipping or affiliate marketing techniques to facilitate transactions of tangible goods without maintaining real inventory. Some non-digital products have been more successful than others for online stores. Profitable items often have a high value-to-weight ratio, they may involve embarrassing purchases, they may typically go to people in remote locations, and they may have shut-ins as their typical purchasers. Items which can fit in a standard mailbox—such as music CDs, DVDs and books—are particularly suitable for a virtual marketer. Product suitability: Products such as spare parts, both for consumer items like washing machines and for industrial equipment like centrifugal pumps, also seem good candidates for selling online. Retailers often need to order spare parts specially, since they typically do not stock them at consumer outlets—in such cases, e-commerce solutions in spares do not compete with retail stores, only with other ordering systems. A factor for success in this niche can consist of providing customers with exact, reliable information about which part number their particular version of a product needs, for example by providing parts lists keyed by serial number. Products less suitable for e-commerce include products that have a low value-to-weight ratio, products that have a smell, taste, or touch component, products that need trial fittings—most notably clothing—and products where colour integrity appears important. Nonetheless, some web sites have had success delivering groceries, and clothing sold through the internet is big business in the U.S. Aggregation: High-volume websites, such as Yahoo!, Amazon.com and eBay, offer hosting services for online stores to retailers of all sizes. These stores are presented within an integrated navigation framework, sometimes known as virtual shopping malls or online marketplaces.
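The cart-and-checkout pattern described under "Product selection" and "Shopping cart systems" can be made concrete with a small sketch. The Python below is an illustrative toy, not any particular platform's API; the item names, prices, and method names are invented for the example.

```python
class ShoppingCart:
    """Toy model of shopping-cart behaviour: accumulate items, adjust
    quantities, and compute a total at checkout. Prices are kept in
    integer cents to avoid floating-point rounding surprises."""

    def __init__(self):
        self._items = {}  # item name -> (unit_price_cents, quantity)

    def add(self, name, unit_price_cents, quantity=1):
        _, qty = self._items.get(name, (unit_price_cents, 0))
        self._items[name] = (unit_price_cents, qty + quantity)

    def set_quantity(self, name, quantity):
        price, _ = self._items[name]
        if quantity <= 0:
            del self._items[name]  # mirrors taking an item back out
        else:
            self._items[name] = (price, quantity)

    def total_cents(self, shipping_cents=0):
        # Shipping is added here because, as noted above, such fees are
        # often only visible at the final step of checkout.
        return sum(p * q for p, q in self._items.values()) + shipping_cents

# Hypothetical order: one paperback and two music CDs, plus flat shipping.
cart = ShoppingCart()
cart.add("paperback book", 1299)
cart.add("music CD", 999, quantity=2)
print(cart.total_cents(shipping_cents=499) / 100)  # -> 37.96
```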
**Protonema** Protonema: A protonema (plural: protonemata) is a thread-like chain of cells that forms the earliest stage of development of the gametophyte (the haploid phase) in the life cycle of mosses. When a moss first grows from a spore, it starts as a germ tube, which lengthens and branches into a filamentous complex known as a protonema, which develops into a leafy gametophore, the adult form of a gametophyte in bryophytes. Moss spores germinate to form an alga-like filamentous structure called the protonema. It represents the juvenile gametophyte. While the protonema is growing by apical cell division, at some stage, under the influence of the phytohormone cytokinin, buds are induced which grow by three-faced apical cells. These give rise to gametophores, stems and leaf-like structures. Bryophytes do not have true leaves (megaphylls). Protonemata are characteristic of all mosses and some liverworts but are absent from hornworts. Protonema: Protonemata of mosses are composed of two cell types: chloronemata, which form upon germination, and caulonemata, which later differentiate from chloronemata and on which buds are formed, which then differentiate to gametophores.
**Kaoss Pad** Kaoss Pad: The Kaoss Pad is an audio sampling instrument and multi-effects processor originally launched by Korg in 1999. It allows users to record and process audio samples and apply various effects using an X-Y touchscreen. Features: Kaoss Pads allow users to sample and loop audio and apply effects such as pitch-bending, flanging, distortion, and delay using an X/Y touchscreen. According to the Guardian, while its effects technology was not new, the Kaoss Pad was distinguished by its intuitive design: "Anyone can pick one up and in a matter of seconds get the hang of it." The British producer and musician Brian Eno described it as "a way of taking sounds into the domain of muscular control" as opposed to working with computers: "It takes you into a completely different place, because when working with computers you normally don't use your muscles in that way. You're focused on your head, and the three million years of evolution that resulted in incredible muscular skill doesn't get a look in." Users: Radiohead use a Kaoss Pad on performances of their 2000 song "Everything In Its Right Place", manipulating singer Thom Yorke's vocals into a "glitching, stuttering collage". Other users include Brian Eno, Enter Shikari, the Muse guitarist Matt Bellamy (who has Kaoss Pads built into his guitars), John Linnell of They Might Be Giants, Bryan Ferry, Beardyman, Kevin Martin, and the New York-based electronic musician Ian Cook, who often uses the device for live resampling in a jazz/improvisation context, notably with Travis Sullivan’s Bjorkestra, violinist Lucia Micarelli, and Jason Miles’ Global Noise.
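The X/Y control paradigm itself is simple to illustrate in code. The sketch below maps a normalized touch position to two effect parameters at once, which is the essence of the interface; the specific parameters, ranges, and function name are illustrative assumptions, not Korg's actual mapping.

```python
def xy_to_delay_params(x, y):
    """Map a normalized touch position (0.0-1.0 on each axis) to a pair
    of delay-effect parameters, so that one finger gesture sweeps both
    at once. A generic sketch of X/Y control, not Korg's firmware."""
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
        raise ValueError("touch coordinates must be normalized to [0, 1]")
    delay_ms = 20 + x * (1000 - 20)  # X axis sweeps delay time: 20 ms to 1 s
    feedback = 0.9 * y               # Y axis sweeps feedback: 0 to 90%
    return delay_ms, feedback

# Dragging a finger toward the upper-right corner lengthens the delay
# and raises the feedback simultaneously.
print(xy_to_delay_params(0.75, 0.5))  # -> (755.0, 0.45)
```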
**Algorithmic mechanism design** Algorithmic mechanism design: Algorithmic mechanism design (AMD) lies at the intersection of economic game theory, optimization, and computer science. The prototypical problem in mechanism design is to design a system for multiple self-interested participants, such that the participants' self-interested actions at equilibrium lead to good system performance. Typical objectives studied include revenue maximization and social welfare maximization. Algorithmic mechanism design differs from classical economic mechanism design in several respects. It typically employs the analytic tools of theoretical computer science, such as worst case analysis and approximation ratios, in contrast to classical mechanism design in economics which often makes distributional assumptions about the agents. It also considers computational constraints to be of central importance: mechanisms that cannot be efficiently implemented in polynomial time are not considered to be viable solutions to a mechanism design problem. This often, for example, rules out the classic economic mechanism, the Vickrey–Clarke–Groves auction. History: Noam Nisan and Amir Ronen first coined "Algorithmic mechanism design" in a research paper published in 1999.
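As a concrete (and computationally trivial) instance of a truthful mechanism, the sketch below implements a single-item sealed-bid second-price (Vickrey) auction, the one-item special case of VCG: the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy, and the mechanism runs in time linear in the number of bids, comfortably within AMD's polynomial-time requirement. The bidder names are hypothetical.

```python
def vickrey_auction(bids):
    """Single-item sealed-bid second-price (Vickrey) auction.
    `bids` maps bidder name -> bid amount. Returns the winner and the
    price charged (the second-highest bid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids, key=bids.get, reverse=True)  # bidders by bid, descending
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

# Hypothetical bidders: each does best by bidding their true value.
print(vickrey_auction({"alice": 10, "bob": 8, "carol": 6}))  # -> ('alice', 8)
```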
**Ocular albinism late onset sensorineural deafness** Ocular albinism late onset sensorineural deafness: Ocular albinism late onset sensorineural deafness (OASD) is a rare, X-linked recessive disease characterized by severe visual impairment, reduced retinal pigment, translucent pale-blue irises, and moderately severe hearing loss developing from adolescence to middle age. It is a subtype of ocular albinism (OA) that is linked to ocular albinism type I (OA1). OA1 is the most common form of ocular albinism, affecting at least 1/60,000 males. Ocular albinism late onset sensorineural deafness: OA has two patterns of inheritance: X-linked and autosomal. X-linked OA includes OA1 (Nettleship-Falls type), OA2 (Forsius-Eriksson type) and OASD. Autosomal inheritance, on the other hand, includes OCA3 (autosomal recessive OA) and OA with sensorineural deafness. As OA1 is X-linked, males are generally more affected than females. The cause of OASD is believed to involve mutations of the GPR143 gene, which is responsible for pigment protein production and melanosome growth control. This gene is located on the X chromosome at Xp22.3, where the TBL1 gene is also found. The physical proximity of the two genes suggests that OASD and OA1 result from a contiguous-gene syndrome. There are three main diagnostic methods: molecular genetic tests, family pedigree analysis and antenatal diagnosis. While there is no definite treatment for OASD, annual ophthalmologic examinations are suggested as a preventative measure. Signs and Symptoms: The main signs and symptoms (phenotypes) of OASD, shown in different parts of the body, include the following: Nystagmus: involuntary, rapid rhythmic eye movements (HPO ID: 0000639) Sensorineural hearing impairment (HPO ID: 0001107) Visual impairment: impaired vision, eyesight loss (HPO ID: 0000613) Photophobia: light hypersensitivity (HPO ID: 0000407) Ocular albinism: absent pigmentation in the eye (HPO ID: 0000505). Although symptoms vary between individuals, the signs and symptoms mentioned above all have a Human Phenotype Ontology (HPO) frequency of 90% and are categorized as “very frequent (80-99%)”. Strabismus (squinting or crossed eyes) is shown by 30-79% of people who have OASD. The HPO has collected information on symptoms demonstrated in medical resources. The HPO ID helps access more information about the symptom. Signs and Symptoms: However, there are additional possible signs and symptoms of OASD that are not listed by the HPO, such as the following: Adult-onset sensorineural hearing impairment Albinism Depigmented fundus Giant melanosomes in melanocytes Nystagmus-induced head nodding X-linked inheritance Causes: Genetic mutations OA is a recessive X-linked disorder; hence, the disorder is located on the X chromosome. It is mainly due to mutations in the GPR143 gene (OA1), located at Xp22.3; this gene gives instructions for the production of a protein involved in pigmentation of the eyes and skin. It helps control the growth of melanosomes, which are cellular structures that produce and store a pigment called melanin. Melanin is the substance that gives skin, hair, and eyes their color; it also plays a role in normal vision in the retina. Causes: G protein-coupled receptor (GPCR) A G protein-coupled receptor, which is the protein product, is localized on the membrane of melanosomes in pigmented cells in the eye. The same gene is mutated in congenital nystagmus 6 (OMIM code: 300814).
Although OASD results from mutations in the Xp22.3 region, the mutations may vary between individuals. Some may have additional mutations (usually microdeletions) in the contiguous genes TBL1X and SHROOM2. Mutations in GNA13 (17q24.1), activated by OA1, have also been reported to cause the ocular albinism phenotype. These genetic changes can stop the protein from reaching melanosomes to control their growth, or the protein's function is disrupted even though the protein reaches the melanosomes successfully. Consequently, melanosomes in both the skin cells and retina enlarge abnormally. Researchers remain uncertain how these macromelanosomes correlate with vision loss and other eye abnormalities in patients with ocular albinism. Causes: Other While OASD is largely due to mutations, another possible cause is contiguous gene defects that include the OA1 gene. Due to the presence of several forms of X-linked hearing loss, the gene responsible for sensorineural deafness could also map to the same region. Pathogenesis: OASD belongs to the subtype of OA1 and thus has a similar pathogenesis to OA1. The physical proximity of genes related to OASD (the GPR143 gene and the TBL1 gene) led researchers to postulate that OASD and OA1 are the result of a contiguous-gene syndrome. In other words, OASD may be explained by gene pleiotropy of OA1; hence, OA1 and OASD are two separate, yet commonly associated entities. Pathogenesis: GPR143 Gene Surace et al. (2000) conducted animal studies on CD1 albino female mice to investigate the pathogenesis of OA1. They found that the expression of the OA1 gene (GPR143 gene) is stronger after birth and is maintained until adulthood. Nevertheless, a decrease is observed after adulthood. This finding suggests that OA1 gene expression is controlled by transcription factors that play crucial roles in the formation and pigmentation of melanosomes. Pathogenesis: Abnormal Growth of Melanosomes Incerti et al. (2000) attempted to better understand the pathogenesis of OA1 using knockout mice with inactivated OA1 genes. The OA1 gene was inactivated using a gene-targeting technique. Ultrastructural analysis was performed on the melanosomes found in the retinal pigment epithelium (RPE) in both normal and knockout mice. Only normal-sized melanosomes were found in the wild-type controls. In knockout mice, on the other hand, abnormally large melanosomes were also found in addition to the normal-sized melanosomes. However, no melanosomes of intermediate sizes were observed. Moreover, a single core region within the giant pigmented melanosome that resembles the structure of a normal membrane-free melanosome was identified. Together, the evidence suggests that the formation of macromelanosomes is due to the abnormal growth of individual melanosomes, rather than the fusion of multiple melanosomes. This disagrees with the previous model of macromelanosome formation, which stated that the separation failure of pre-melanosomes from the endoplasmic reticulum, accumulating structural proteins and enzymes, caused progressive organelle distension. The present theory suggests that normal melanosomes and macromelanosomes have similar pathways of formation and maturation. This offered new insight into the pathogenesis, as the OA1 gene appears to have a crucial role in the final stages of melanosome development and maturation. Eye examination of the adult knockout mice revealed significantly lighter pigmentation in the RPE than in the normal control mice.
Pathogenesis: TBL1 Gene The OA1 gene is not the only gene involved in the pathogenesis of OASD. The TBL1 gene has also been identified as affecting the hearing phenotype of OASD. TBL1 is located at the Xp22.3 locus, in close proximity to the OA1 gene, on the telomeric side outside its critical region. Genomic analysis by Bassi et al. (1999) found that this gene is either partially or completely deleted in patients carrying Xp22.3 terminal deletions. One patient with deletions of both the TBL1 gene and the OA1 gene displayed the OA1 phenotype as well as associated late-onset sensorineural deafness. Hence, it can be inferred that the TBL1 gene is involved in the pathogenesis of OASD. Pathogenesis: Unusual Optic Pathways Furthermore, in mutant mice with an inactivated OA1 gene, the size of the uncrossed optic pathways was reduced in favour of the crossed pathways. The retinofugal pathway, projecting from the eye towards the brain, of the knockout mice also displayed misrouting of optic fibres. Such misrouting leads to the loss of stereoscopic vision in patients with ocular albinism. The mutant mice also had a significantly smaller mean ipsilateral volume of terminal label in their lateral geniculate nucleus (LGN) than the normal control mice, as shown by the Mann-Whitney test at the 5% significance level. Yet this difference is independent of LGN volume, as the size of the nucleus was comparable in both groups. However, the absence of macromelanosomes in the chiasmatic region, where the final retinal axons enter, indicates that there is no direct causal link between the formation of macromelanosomes and the abnormalities in the optic pathways. Diagnosis: There are three main diagnostic methods: Molecular genetic tests are composed of sequence analyses of the entire coding region and deletion/duplication analysis. A GPR143 gene mutation is found in most affected males. Diagnosis: Family pedigree analysis Antenatal diagnosis: Prenatal testing can be used when women are known carriers of a GPR143 gene mutation, using chorionic villus sampling or amniocentesis. Preimplantation genetic diagnosis may be available. As an alternative, the expression of OA1 has been reported to be detectable exclusively after birth. Female obligate carriers have patchy streaks of pigment in the mid-peripheral retina (pigmentary mosaicism), which demonstrates random X inactivation. Those with more advanced ocular manifestations due to skewed X-inactivation have reported reduced visual acuity and nystagmus. Although melanin macroglobules are not pathognomonic, they are commonly seen on skin biopsy. As patients with OASD demonstrate severe hearing loss in their fourth or fifth decade of life, an audiometry test is recommended in the fourth decade. Physical examinations such as otoscopic examination may give inconclusive results, as the patient may have hearing loss despite having no structural abnormalities. Treatment: There is no definitive treatment to cure OASD. Thus, annual ophthalmologic examinations are advised for patients below the age of 16, and examinations every 2-3 years thereafter. Treatments are targeted towards specific symptoms. Refractive problems are dealt with by the use of corrective glasses, with tinted lenses for those with photophobia. Additional low vision aids and special education may be needed. Extraocular muscle surgery can help restore peripheral visual fusion fields for eye alignment and improve head posture; this helps relieve strabismus and nystagmus.
Conditions linked to Ocular Albinism 1: OASD is one of the many contiguous gene syndromes related to OA1. Contiguous gene syndromes are due to the deletion or duplication of a group of genes physically clustered together. Many patients with contiguous gene syndromes in which the OA1 gene is deleted in the Xp22.3 region have been described as having albino phenotypes. Other conditions closely linked to type I ocular albinism include the following: X-linked ichthyosis Kallmann syndrome X-linked recessive chondrodysplasia punctata Microphthalmia with linear skin defects (MLS), a condition characterised by microphthalmia, corneal opacities, and patches of erythematous skin on the head and neck of female patients; the gene involved is located very close to the OA1 gene locus on the X chromosome. Conditions linked to Ocular Albinism 1: Aicardi syndrome, characterised by retinal lacunae, absence of the corpus callosum and seizures Goltz syndrome, characterised by microphthalmia, focal dermal hypoplasia, as well as skeletal abnormalities
**High Fidelity Pure Audio Blu-ray** High Fidelity Pure Audio Blu-ray: High Fidelity Pure Audio, occasionally abbreviated as HFPA, is a marketing initiative, spearheaded by Universal Music Group, for audio-only Blu-ray optical discs. Launched in 2013 as a potential successor to the compact disc (CD), it has been compared with DVD-Audio and SACD, which had similar aims. HFPA is encoded as 24-bit/96 kHz or 24-bit/192 kHz linear PCM ("high-resolution audio"), optionally losslessly compressed with Dolby TrueHD or DTS-HD Master Audio. HFPA discs are compatible with existing Blu-ray players. Pure Audio Blu-ray refers to a different initiative (but with some goals in common) launched by msm-studios in Germany in 2009. As of November 2019, Deutsche Grammophon is the most prolific publisher on the format, with Beethoven 250 having three Blu-ray audio discs.
**Fach** Fach: The German Fach system (German pronunciation: [fax]; literally "compartment" or "subject of study", here in the sense of "vocal specialization") is a method of classifying singers, primarily opera singers, according to the range, weight, and color of their voices. It is used worldwide, but primarily in Europe, especially in German-speaking countries and by repertory opera houses. The Fach system is a convenience for singers and opera houses. It prevents singers from being asked to sing roles which they are incapable of performing. Opera companies keep lists of available singers by Fach so that when they are casting roles for an upcoming production, they do not inadvertently contact performers who would be inappropriate for the part. Below is a list of Fächer (German pronunciation: [ˈfɛçɐ]), their ranges as written on sheet music, and roles generally considered appropriate to each. When two names for the Fach are given, the first is in more common use today. Where possible, an English and/or Italian equivalent of each Fach is listed; however, not all Fächer have ready English or Italian equivalents. Note that some roles can be sung by more than one Fach and that many singers do not easily fit into a Fach: for instance, some sopranos may sing both Koloratursopran and Dramatischer Koloratursopran roles. In addition, roles traditionally more difficult to cast may be given to a voice other than the traditional Fach. For instance, the "Queen of the Night" and "Violetta" are more traditionally dramatic coloratura roles, but it is difficult to find a dramatic coloratura to sing them (particularly given the extreme range). Therefore, these roles are often sung by a lyric coloratura. Soprano Fächer: Lyrischer Koloratursopran / Koloratursoubrette English equivalent: coloratura soprano or lyric coloratura soprano Range: From about middle C (C4) to the F two-and-a-half octaves above middle C (F6) Description: Usually (but not always) a light soprano who has a high voice. Such singers can often have small voices lacking the richness and resonance of a dramatic soprano. They must be able to perform fast vocal acrobatics with easy high notes. Many have extremely high ranges (with notes above the F of the "Queen of the Night"), but there are also singers in this Fach who do not regularly sing higher than the high E♭6. Soprano Fächer: Roles: Adina, L'elisir d'amore (Gaetano Donizetti) Aminta, Die schweigsame Frau (Richard Strauss) Blonde, Die Entführung aus dem Serail (Wolfgang Amadeus Mozart) Cunegonde, Candide (Leonard Bernstein) Frasquita, Carmen (Bizet) Juliette, Roméo et Juliette (Gounod) Marie, La fille du régiment (Gaetano Donizetti) Olympia, Les contes d'Hoffmann (Jacques Offenbach) Oscar, Un ballo in maschera (Giuseppe Verdi) Zerbinetta, Ariadne auf Naxos (Richard Strauss) Dramatischer Koloratursopran English equivalent: dramatic coloratura soprano Range: From about middle C (C4) to the F two and a half octaves above middle C (F6) Description: The same as above, only with a more dramatic, rich voice. Often heavier and more lyrical than a coloratura soprano. Must also be able to perform fast vocal acrobatics and reach high notes, such as the F6 of the "Queen of the Night".
Soprano Fächer: Roles: Abigaille, Nabucco (Giuseppe Verdi) Donna Anna, Don Giovanni (Wolfgang Amadeus Mozart) Elvira, I puritani (Vincenzo Bellini) Fiordiligi, Così fan tutte (Wolfgang Amadeus Mozart) Gilda, Rigoletto (Giuseppe Verdi) Konstanze, Die Entführung aus dem Serail (Wolfgang Amadeus Mozart) Leonora, Il trovatore (Giuseppe Verdi) Norma, Norma (Vincenzo Bellini) Odabella, Attila (Giuseppe Verdi) The Queen of the Night, Die Zauberflöte (Wolfgang Amadeus Mozart) Violetta, La traviata (Giuseppe Verdi) One must not mistake the Mozartian dramatic coloratura soprano for the Italian dramatic coloratura soprano. A singer who sings Konstanze, Donna Anna or Fiordiligi cannot necessarily sing the Italian dramatic coloratura parts, due to other vocal demands. Imogene, Leonora and Violetta require a dramatic soprano voice and are most often sung by dramatic sopranos with an agile voice that can easily produce coloratura and high notes. Roles like Norma, Lady Macbeth, Odabella or Abigaille are good examples of Italian roles that call not necessarily for a coloratura soprano (even though the score calls for coloratura singing), but for a full-bodied dramatic soprano with a voice that can handle extreme dramatic singing and that is flexible enough to sing coloratura. Giuseppe Verdi wrote many parts like this in his early years. Soprano Fächer: Deutsche Soubrette / Charaktersopran English equivalent: soubrette Range: From about middle C (C4) to the C two octaves above middle C (C6) Description: A beautiful, sweet, light lyric voice usually capable of executing florid passages similarly to a coloratura. The range is usually intermediate between that of a coloratura and a lyric soprano. Most sopranos start out as soubrettes, changing Fach as the voice matures. Soprano Fächer: Roles: Adele, Die Fledermaus (J. Strauss II) Barbarina, Le nozze di Figaro (Wolfgang Amadeus Mozart) Clotilde, Norma (Vincenzo Bellini) Despina, Così fan tutte (Wolfgang Amadeus Mozart) Echo, Ariadne auf Naxos (Richard Strauss) Papagena, Die Zauberflöte (Wolfgang Amadeus Mozart) Servilia, La clemenza di Tito (Wolfgang Amadeus Mozart) Sophie, Der Rosenkavalier (Richard Strauss) Susanna, Le nozze di Figaro (Wolfgang Amadeus Mozart) Zerlina, Don Giovanni (Wolfgang Amadeus Mozart) Lyrischer Sopran English equivalent: lyric soprano Range: From about B below middle C (B3) to the C two octaves above middle C (C6) Description: A more supple soprano, capable of legato, portamento, and some agility; generally has a more soulful and sensuous quality than a soubrette, who tends to be largely flirtatious and somewhat tweety. The voice is very common; thus the purity and character of the basic timbre is essential. It is the "basic" soprano voice, which is at neither extreme of the soprano range of voices; it is not known for having particular vocal attributes such as power, stamina, technical prowess, or agility. However, there are several lyric sopranos that possess many of these vocal attributes, thus allowing them to sing a broader variety of roles. Nevertheless, the core of the true fundamentally lyric voice does not encompass such traits. Innocence, vulnerability and pathos are usually conveyed in the music written for the characters portrayed by the lyric soprano because of this endearing simplicity. This Fach is also famous because the voices usually remain especially fresh until an advanced age.
Soprano Fächer: Roles: Antonia, Les contes d'Hoffmann (Jacques Offenbach) Gretel, Hänsel und Gretel (Engelbert Humperdinck) Lauretta, Gianni Schicchi (Giacomo Puccini) Liù, Turandot (Giacomo Puccini) Musetta, La bohème (Giacomo Puccini) Micaëla, Carmen (Georges Bizet) Pamina, Die Zauberflöte (Wolfgang Amadeus Mozart) Rusalka, Rusalka (Antonín Dvořák) Sophie, Werther (Jules Massenet) Susanna, Le nozze di Figaro (Wolfgang Amadeus Mozart) Jugendlich dramatischer Sopran English equivalent: lyric dramatic soprano Range: From about A below middle C (A3) to the C two octaves above middle C (C6) Description: The Italian version of this Fach is the spinto, which literally translated means "pushed". However, this is not accurate in terms of these singers' vocal production. A lyric dramatic soprano has a lyric instrument that can also create big sounds, cutting through an orchestral or choral climax. This voice is sometimes referred to as a "young" or "youthful" dramatic soprano, although this term doesn't necessarily refer to the singer's age but rather to the tonal quality of the voice. This Fach is more clearly delineated in the German system than in the American system. Depending on the singer, however, this voice type can be versatile, as it lies at neither extreme of the soprano spectrum. Spintos are occasionally able to take on lighter mezzo roles or, conversely, lyric and even coloratura roles. Soprano Fächer: Roles: Agathe, Der Freischütz (Carl Maria von Weber) Amelia, Un ballo in maschera (Giuseppe Verdi) Chrysothemis, Elektra (Richard Strauss) Cio-Cio San, Madama Butterfly (Giacomo Puccini) Donna Elvira, Don Giovanni (Wolfgang Amadeus Mozart) Elisabeth, Tannhäuser (Richard Wagner) Elsa, Lohengrin (Richard Wagner) Maddalena, Andrea Chénier (Umberto Giordano) Magda Sorel, The Consul (Gian Carlo Menotti) Marie, Wozzeck (Alban Berg) Marie/Marietta, Die tote Stadt (Erich Wolfgang Korngold) Suor Angelica, Suor Angelica (Giacomo Puccini) Dramatischer Sopran English equivalent: full dramatic soprano Range: From about the A below middle C (A3) to the C two octaves above middle C (C6) Description: Characterized by their rich, full-sounding voices, dramatic sopranos are expected to project across large orchestras, a feat that requires a powerful sound. Dramatic sopranos are not expected to have the vocal flexibility of the lighter Fächer. Although most dramatic sopranos have a darker, more robust quality to the voice, there are some that possess a lighter lyrical tone. In these instances, however, the substantial volume and endurance normally associated with the dramatic soprano voice is still present. The darker-voiced dramatic soprano may even make a foray into dramatic mezzo-soprano territory with great success. Soprano Fächer: Roles: Ariadne, Ariadne auf Naxos (Richard Strauss) Cassandre, Les Troyens (Hector Berlioz) Elektra, Elektra (Richard Strauss) La Gioconda, La Gioconda (Amilcare Ponchielli) Leonore, Fidelio (Ludwig van Beethoven) Minnie, La fanciulla del West (Giacomo Puccini) Santuzza, Cavalleria rusticana (Pietro Mascagni) Sieglinde, Die Walküre (Wagner) Tosca, Tosca (Giacomo Puccini) Turandot, Turandot (Giacomo Puccini) Hochdramatischer Sopran English equivalent: high dramatic soprano Range: From about the F below middle C (F3) to the C two octaves above middle C (C6) Description: A voice capable of answering the demands of the operas of Wagner's maturity. The voice is substantial, very powerful, and even throughout the registers.
It is immense, stentorian and even larger than the voice of the "normal" dramatic soprano. Although the two voices are comparable and are sometimes hard to distinguish, this voice has even greater stamina, endurance and volume than the former. The top register is very strong, clarion and bright. Successful Hochdramatische are rare. Soprano Fächer: Roles: Desdemona, Otello (Giuseppe Verdi) Eva, Die Meistersinger von Nürnberg (Richard Wagner) Helmwige, Die Walküre (Richard Wagner) Giulietta, Les contes d'Hoffmann (Jacques Offenbach) Liza, The Queen of Spades (Pyotr Ilyich Tchaikovsky) Marschallin, Der Rosenkavalier (Richard Strauss) Mimi, La bohème (Giacomo Puccini) Salome, Salome (Richard Strauss) Woglinde, Das Rheingold and Götterdämmerung (Richard Wagner) The woodbird, Siegfried (Richard Wagner) Mezzo-soprano Fächer: Koloratur-Mezzosopran English equivalent: coloratura mezzo-soprano Range: From about the G below middle C (G3) to the B two octaves above middle C (B5) Description: Found especially in Rossini's operas, these roles were written originally for altos with agility and secure top notes. Today they are often played by mezzo-sopranos and sometimes even by sopranos. At times a lyric or full lyric soprano with a flexible voice will assume the roles as written, while a true coloratura soprano will sing the same music transposed to a higher key. Mezzo-soprano Fächer: Roles: Angelina, La Cenerentola (Gioachino Rossini) Griselda, Griselda (Vivaldi) Isabella, L'italiana in Algeri (Gioachino Rossini) Isolier, Le comte Ory (Rossini) Julius Caesar, Giulio Cesare (Handel) Orsini, Lucrezia Borgia (Donizetti) Romeo, I Capuleti e i Montecchi (Vincenzo Bellini) Ruggiero, Alcina (Handel) Rosina, Il barbiere di Siviglia (Gioachino Rossini) Tancredi, Tancredi (Gioachino Rossini) Lyrischer Mezzosopran / Spielalt Range: From about the G below middle C (G3) to the B two octaves above middle C (B5) English equivalent: lyric mezzo-soprano Description: A lyric soprano's instrument in a lower range; the resulting sound is less piercing, more lachrymose and rather sensitive. The voices are similar, giving rise to the term 'short soprano', i.e. a soprano without the highest notes. In fact, many lyric mezzos with strong extensions to their upper vocal registers make the transition to singing as sopranos at some point in their careers. Mezzo-soprano Fächer: Roles: Charlotte, Werther (Massenet) Cherubino, Le nozze di Figaro (Wolfgang Amadeus Mozart) Dorabella, Così fan tutte (Wolfgang Amadeus Mozart) Hänsel, Hänsel und Gretel (Engelbert Humperdinck) Marguerite, La damnation de Faust (Berlioz) Meg, Little Women (Mark Adamo) Mignon, Mignon (Ambroise Thomas) Nicklausse, Les contes d'Hoffmann (Offenbach) Mother, Amahl and the Night Visitors (Menotti) Suzuki, Madama Butterfly (Giacomo Puccini) Dramatischer Mezzosopran English equivalent: dramatic mezzo-soprano Range: From about the G below middle C (G3) to the B two octaves above middle C (B5) Description: Dramatic mezzo-sopranos have ranges very similar to a dramatic soprano's. The main difference is the endurance and ease with which the two voice types sing – a mezzo will concentrate her singing most of the time in her middle and low registers and will go up to notes like high B-flat only at a dramatic climax. Consequently, many dramatic mezzo-sopranos have success singing some dramatic soprano roles that are written with a lower tessitura.
Mezzo-soprano Fächer: Roles: Amneris, Aida (Giuseppe Verdi) Dido, Les Troyens (Hector Berlioz) The Composer, Ariadne auf Naxos (Richard Strauss) Dalila, Samson et Dalila (Camille Saint-Saëns) Eboli, Don Carlo (Giuseppe Verdi) Fricka, Das Rheingold, Die Walküre (Richard Wagner) Gertrud, Hänsel und Gretel (Engelbert Humperdinck) Klytaemnestra, Elektra (Richard Strauss) Octavian, Der Rosenkavalier (Richard Strauss) Ortrud, Lohengrin (Richard Wagner) Contralto Fächer: Dramatischer Alt English equivalent: dramatic contralto Range: From about the F below middle C (F3) to the G or A two octaves above (G–A5) Description: Stylistically similar to the dramatic mezzo, just lower. It usually sings around the break between the chest voice and middle voice. Many mezzos have tried their luck in these roles, yet true altos fare better. A deep, penetrating low female voice, this is a very rare voice type with a darker, richer sound than that of a typical alto. Contralto Fächer: Roles: Azucena, Il trovatore (Verdi) Carmen, Carmen (Bizet) La Cieca, La Gioconda (Ponchielli) Dryade, Ariadne auf Naxos (Richard Strauss) Erda, Das Rheingold, Siegfried (Wagner) Maddalena, Rigoletto (Verdi) A norn, Götterdämmerung (Wagner) Polina, The Queen of Spades (Tchaikovsky) Schwertleite, Die Walküre (Wagner) Ulrica, Un ballo in maschera (Verdi) Tiefer Alt English equivalent: low contralto Range: From about the E below middle C (E3) to the E two octaves above (E5) Description: A low female voice. Contralto Fächer: Roles: Anna, Les Troyens (Hector Berlioz) Annina, Der Rosenkavalier (Richard Strauss) Antonia's mother, Les contes d'Hoffmann (Jacques Offenbach) Bradamante, Alcina (Handel) Didone, Egisto (Cavalli) Gaea, Daphne (Richard Strauss) Geneviève, Pelléas et Mélisande (Claude Debussy) Hippolyta, A Midsummer Night's Dream (Britten) Marthe, Faust (Charles Gounod) Zita, Gianni Schicchi (Giacomo Puccini) Tenor Fächer: Spieltenor / Tenor buffo English equivalent: (lyric) comic tenor. It is quite possible for a young Spieltenor to eventually work into the lighter lyrischer Tenor category; the deciding factor will be the beauty of the voice. Range: From about low C (C3) to the B an octave above middle C (B4) Roles: Monostatos, Die Zauberflöte (Wolfgang Amadeus Mozart) Pedrillo, Die Entführung aus dem Serail (Wolfgang Amadeus Mozart) Charaktertenor English equivalent: character tenor; must have good acting abilities. Tenor Fächer: Range: From about the B below low C (B2) to the C an octave above middle C (C5) Lyrischer Tenor English equivalent: lyric tenor Range: From about low C (C3) to the C an octave above middle C (C5) Roles: Alfredo, La traviata (Giuseppe Verdi) Almaviva, Il barbiere di Siviglia (Gioachino Rossini) Belmonte, Die Entführung aus dem Serail (Wolfgang Amadeus Mozart) Don Ottavio, Don Giovanni (Wolfgang Amadeus Mozart) Il Duca, Rigoletto (Giuseppe Verdi) Lindoro, L'italiana in Algeri (Gioachino Rossini) Nemorino, L'elisir d'amore (Gaetano Donizetti) Ramiro, La Cenerentola (Gioachino Rossini) Tamino, Die Zauberflöte (Wolfgang Amadeus Mozart) Jugendlicher Heldentenor English equivalent: lyric dramatic tenor, also known as spinto Range: From about low C (C3) to the C an octave above middle C (C5) Description: A tenor with a dramatic, extended upper range and the necessary brightness to come through the orchestra's texture.
Tenor Fächer: Roles: Calaf, Turandot (Giacomo Puccini) Canio, Pagliacci (Ruggero Leoncavallo) Cavaradossi, Tosca (Giacomo Puccini) Dick Johnson, La fanciulla del West (Giacomo Puccini) Don Alvaro, La forza del destino (Giuseppe Verdi) Don José, Carmen (Georges Bizet) Florestan, Fidelio (Ludwig van Beethoven) Idomeneo, Idomeneo (Wolfgang Amadeus Mozart) Lohengrin, Lohengrin (Richard Wagner) Manrico, Il trovatore (Giuseppe Verdi) Max, Der Freischütz (Carl Maria von Weber) Radamès, Aida (Giuseppe Verdi) Siegmund, Die Walküre (Richard Wagner) Heldentenor English equivalent: heroic tenor or dramatic tenor Range: From about the B below low C (B2) to the C above middle C (C5) Description: A full dramatic tenor with baritonal facility in the middle range and the brightness necessary to pierce a thick orchestral texture. Tenor Fächer: Roles: Otello, Otello (Giuseppe Verdi) Siegfried, Der Ring des Nibelungen (Richard Wagner) Tristan, Tristan und Isolde (Richard Wagner) Tannhäuser, Tannhäuser (Richard Wagner) Walther von Stolzing, Die Meistersinger (Richard Wagner) Baritone Fächer: Bariton / Baryton-Martin Italian: baritono leggero English equivalent: light baritone Range: From the low C (C3) to the B above middle C (B4) Description: The Baryton-Martin, named after Jean-Blaise Martin (sometimes referred to as light baritone), lacks the lower G2–B2 range a heavier baritone is capable of, and has a lighter, almost tenor-like quality. Lyrischer Bariton / Spielbariton Italian: baritono lirico English equivalent: lyric baritone Range: From about the B below low C (B2) to the A♭ above middle C (A♭4) Description: A sweet, mild-sounding baritone voice, lacking harshness. Many lyric baritone roles call for some fioritura and coloratura, a beautiful line, as well as a charismatic presence. Baritone Fächer: Roles: Albert, Werther (Jules Massenet) Belcore, L'elisir d'amore (Gaetano Donizetti) Billy Budd, Billy Budd (Benjamin Britten) Dottore Malatesta, Don Pasquale (Gaetano Donizetti) Figaro, Il barbiere di Siviglia (Gioachino Rossini) Guglielmo, Così fan tutte (Wolfgang Amadeus Mozart) Papageno, Die Zauberflöte (Wolfgang Amadeus Mozart) Kavalierbariton Italian: baritono cantabile English equivalent: cavalier baritone Range: From about the A below low C (A2) to the G♯ above middle C (G♯4) Description: A metallic voice that can sing both lyric and dramatic phrases, with a manly, noble baritonal color; most on-stage roles in this Fach call for good looks. Not quite as vocally powerful as the Verdi baritone or Charakterbariton, who is expected to have a powerful, perhaps muscular or physically large, appearance on stage and has a harsher, more pronounced sound than the lyric baritone or Spielbariton. Baritone Fächer: Roles: Conte Almaviva, Le nozze di Figaro (Wolfgang Amadeus Mozart) Count, Capriccio (Richard Strauss) Don Giovanni, Don Giovanni (Wolfgang Amadeus Mozart) Escamillo, Carmen (Georges Bizet) Ford, Falstaff (Giuseppe Verdi) Lescaut, Manon Lescaut (Giacomo Puccini) Lord Enrico Ashton, Lucia di Lammermoor (Gaetano Donizetti) Marcello, La bohème (Giacomo Puccini) Onegin, Eugene Onegin (Pyotr Ilyich Tchaikovsky) Rodrigo de Posa, Don Carlo (Giuseppe Verdi) Sharpless, Madama Butterfly (Giacomo Puccini) Valentin, Faust (Charles Gounod) Wolfram von Eschenbach, Tannhäuser (Richard Wagner) Charakterbariton Italian: baritono verdiano English equivalent: Verdi baritone Range: From about the A below low C (A2) to the G♯ above middle C (G♯4) Description: A voice particularly effective with passages in its higher reaches.
A high tessitura vis-à-vis the range extremes. A Verdi baritone refers to a voice capable of singing consistently and with ease in the highest part of the baritone range, sometimes extending up to the C above middle C (C5 or high C). The Verdi baritone will generally have a lot of squillo, or "ping". Roles: Scarpia, Tosca (Giacomo Puccini) Wozzeck (title role) (Alban Berg) Heldenbariton Italian: baritono drammatico English equivalent: dramatic baritone Range: From about the G below low C (G2) to the F♯ above middle C (F♯4) Description: Means 'heroic baritone'. In the German opera houses a true Heldenbariton is a prized possession: a singer with exciting power and an authoritative, mature sound and production. Baritone Fächer: Roles: Alfio, Cavalleria rusticana (Pietro Mascagni) Amonasro, Aida (Giuseppe Verdi) Dr Schön / Jack the Ripper, Lulu (Alban Berg) Ezio, Attila (Giuseppe Verdi) Gérard, Andrea Chénier (Umberto Giordano) Iago, Otello (Giuseppe Verdi) Jack Rance, La fanciulla del West (Giacomo Puccini) Jochanaan, Salome (Richard Strauss) Macbeth (title role) (Giuseppe Verdi) Michele, Il tabarro (Giacomo Puccini) Don Pizarro, Fidelio (Ludwig van Beethoven) Simon Boccanegra (title role) (Giuseppe Verdi) Telramund, Lohengrin (Richard Wagner) Tonio, Pagliacci (Ruggero Leoncavallo) Lyrischer Bassbariton / Low lyric baritone English equivalent: lyric bass-baritone Range: From about the G below low C (G2) to the F♯ above middle C (F♯4) Description: The bass-baritone's required range can vary tremendously based on the role, with some roles less demanding than others. Some bass-baritones are baritones, while others are basses. Baritone Fächer: Dramatischer Bassbariton / Low dramatic baritone English equivalent: dramatic bass-baritone Range: From about the G below low C (G2) to the F♯ above middle C (F♯4) Roles: Wotan, Die Walküre (Wagner) Bass Fächer: Basso cantante / Lyric bass-baritone / High lyric bass English equivalent: lyric bass-baritone or singing bass Range: From about the E below low C (E2) to the F above middle C (F4) Basso cantante means 'singing bass'. Bass Fächer: Hoher Bass / Dramatic bass-baritone / High dramatic bass English equivalent: dramatic bass-baritone Range: From about the E below low C (E2) to the F above middle C (F4) Jugendlicher Bass English equivalent: young bass Range: From about the E below low C (E2) to the F above middle C (F4) Description: A young man (regardless of the age of the singer). Bass Fächer: Spielbass / Bassbuffo / Lyric buffo English equivalent: lyric comic bass Range: From about the E half an octave below low C (E2) to the F above middle C (F4) Roles: Don Alfonso, Così fan tutte (Wolfgang Amadeus Mozart) Don Pasquale, Don Pasquale (Gaetano Donizetti) Dottor Dulcamara, L'elisir d'amore (Gaetano Donizetti) Don Bartolo, Il barbiere di Siviglia (Gioachino Rossini) Don Magnifico, La Cenerentola (Gioachino Rossini) Leporello, Don Giovanni (Wolfgang Amadeus Mozart) The Sacristan, Tosca (Giacomo Puccini) Schwerer Spielbass / Dramatic buffo English equivalent: dramatic comic bass Range: From about the C two octaves below middle C (C2) to the F above middle C (F4) Roles: Baron Ochs auf Lerchenau, Der Rosenkavalier (Richard Strauss) Daland, Der fliegende Holländer (Richard Wagner) Méphistophélès, Faust (Charles Gounod) Lyrischer seriöser Bass English equivalent: low bass. Italian: basso profondo. Bass Fächer: Range: From about the C two octaves below middle C (C2) to the F above middle C (F4) Basso profondo is the lowest bass voice type. According to J. B.
Steane in Voices, Singers, and Critics, the basso profondo voice "derives from a method of tone-production that eliminates the more Italian quick vibrato. In its place is a kind of tonal solidity, a wall-like front, which may nevertheless prove susceptible to the other kind of vibrato, the slow beat or dreaded wobble." Roles: Don Bartolo, Le nozze di Figaro (Wolfgang Amadeus Mozart) Fiesco, Simon Boccanegra (Giuseppe Verdi) Padre Guardiano, La forza del destino (Giuseppe Verdi) Pimen, Boris Godunov (Modest Mussorgsky) Rocco, Fidelio (Ludwig van Beethoven) Sarastro, Die Zauberflöte (Wolfgang Amadeus Mozart) Sir Morosus, Die schweigsame Frau (Richard Strauss) Sparafucile, Rigoletto (Giuseppe Verdi) Dramatischer seriöser Bass English equivalent: dramatic low bass. The dramatic basso profondo is a powerful basso profondo voice. Bass Fächer: Range: From about the C two octaves below middle C (C2) to the F above middle C (F4) Roles: Il Commendatore (Don Pedro), Don Giovanni (Wolfgang Amadeus Mozart) Fafner, Das Rheingold, Siegfried (Richard Wagner) The Grand Inquisitor, Don Carlos (Giuseppe Verdi) Gurnemanz, Titurel, Parsifal (Richard Wagner) Heinrich, Lohengrin (Richard Wagner) Marke, Tristan und Isolde (Richard Wagner)
**Fixed end moment** Fixed end moment: The fixed end moments are reaction moments developed in a beam member under certain load conditions with both ends fixed. A beam with both ends fixed is statically indeterminate to the 3rd degree, and any structural analysis method applicable to statically indeterminate beams can be used to calculate the fixed end moments. Examples: In the following examples, clockwise moments are positive. Examples: The two cases with distributed loads can be derived from the case with a concentrated load by integration. For example, when a uniformly distributed load of intensity $q$ is acting on a beam of length $L$, an infinitely small part $dx$ at distance $x$ from the left end of the beam can be seen as being under a concentrated load of magnitude $q\,dx$. Then,

$$M_{\text{left}} = -\int_0^L \frac{(q\,dx)\,x\,(L-x)^2}{L^2} = -\frac{qL^2}{12}, \qquad M_{\text{right}} = \int_0^L \frac{(q\,dx)\,x^2\,(L-x)}{L^2} = \frac{qL^2}{12},$$

where the expressions within the integrals on the right-hand sides are the fixed end moments caused by the concentrated load $q\,dx$. For the case with a linearly distributed load of maximum intensity $q_0$ at the right support,

$$M_{\text{left}} = -\frac{q_0 L^2}{30}, \qquad M_{\text{right}} = \frac{q_0 L^2}{20}.$$
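This integration can be checked symbolically. Below is a minimal sketch in Python using sympy; the point-load fixed end moments $-Px(L-x)^2/L^2$ and $+Px^2(L-x)/L^2$ for a load at distance $x$ are taken from the standard tables and are an assumption of this sketch:

```python
from sympy import symbols, integrate, simplify

x, L, q, q0 = symbols('x L q q_0', positive=True)

# Fixed end moments produced by a point load P at distance x from the left
# support of a fixed-fixed beam of span L (clockwise positive):
#   M_left = -P*x*(L-x)**2/L**2,  M_right = +P*x**2*(L-x)/L**2
# Treat the load on each slice dx as a point load and integrate over the span.

# Uniformly distributed load of intensity q: P -> q*dx
M_left_udl = integrate(-q * x * (L - x)**2 / L**2, (x, 0, L))
M_right_udl = integrate(q * x**2 * (L - x) / L**2, (x, 0, L))
print(simplify(M_left_udl), simplify(M_right_udl))   # -L**2*q/12, L**2*q/12

# Linearly distributed load rising to q0 at the right support: P -> (q0*x/L)*dx
M_left_tri = integrate(-(q0 * x / L) * x * (L - x)**2 / L**2, (x, 0, L))
M_right_tri = integrate((q0 * x / L) * x**2 * (L - x) / L**2, (x, 0, L))
print(simplify(M_left_tri), simplify(M_right_tri))   # -L**2*q_0/30, L**2*q_0/20
```

Both symbolic results agree with the tabulated values quoted above.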
**Surface Neo** Surface Neo: The Surface Neo is an unreleased dual-touchscreen 2-in-1 PC that was unveiled by Microsoft on October 2, 2019. Slated to be part of the Microsoft Surface family of devices, the Surface Neo was designed to be used in various "postures" for different use cases and multitasking scenarios involving its screens, and to feature Windows 10X—a variant of Windows 10 designed exclusively for dual-screen devices. Surface Neo: The Surface Neo was expected to be launched in late 2020, alongside a range of other Windows 10X devices from third-party manufacturers. In May 2020, Microsoft postponed the release of Windows 10X-based dual-screen devices in favor of launching the OS with single-screen devices instead. This did not come to fruition, and Microsoft eventually cancelled Windows 10X outright in May 2021, with aspects of 10X repurposed for the mainstream Windows 11. Its sister device, the Android-based Surface Duo, was released in September 2020. Background: Microsoft had first envisioned a dual-touchscreen device with its Courier concept, while rumors surfaced in 2017 of a similar project codenamed "Andromeda"—a foldable device which would use electronic paper displays. During a Surface hardware event on October 2, 2019, Microsoft unveiled a pair of dual-touchscreen devices—the Android-based Surface Duo smartphone, and the Surface Neo. The Neo was codenamed "Santorini"; head of Windows Client Experiences Joe Belfiore explained that "we saw an opportunity both at Microsoft and with our partners to fill in some of the gaps in [laptop and tablet] experiences and offer something new". Microsoft unveiled an accompanying operating system known as Windows 10X, and stated that the OS, as well as dual-screen devices from Microsoft and other OEM partners, would be released in late 2020. Specifications: Hardware Microsoft did not provide any specific information on the specifications of the Surface Neo, except that all Windows 10X devices launching in 2020 would use Intel "Lakefield" processors. Prototypes of the Neo reportedly used an Intel Core i5-L16G7 processor with 8 GB of RAM, and had dual 9-inch 1440p displays. Specifications: Software The Surface Neo was planned to run Windows 10X, an edition of Windows 10 designed exclusively for dual-touchscreen and folding devices. It featured noticeable changes to the Windows user interface, including a centered taskbar and an updated Start menu using a grid of pinned applications (as opposed to Windows 10's existing "live tiles"). It also contained architectural differences, including having Win32 software run within a sandbox environment for security reasons and to control power consumption (with apps automatically paused after a period of inactivity). The OS was designed to respond to various "postures", such as spanning an application across both screens like a book, using separate applications on the two screens, standing up with a wireless external keyboard ("portable all-in-one"), or like a laptop—with the lower screen dedicated to a virtual keyboard or partially covered by a physical keyboard accessory. In both cases, the portion of the lower screen not used by the keyboard would contain the "Wonder Bar" (comparable to the "Touch Bar" of MacBook Pro laptops of the era), which could be used for functions such as an emoji picker, note taking, or video.
It could also be used like a laptop touchpad. Vice President of Experiences and Devices Joe Belfiore stated that Windows 10X was "evolving the core of Windows 10", and that Microsoft was "working to take the best of the applications that people need and use most — things like Mail, Calendar, and PowerPoint — and bring them over to dual screens in a way that creates flexible and rich experiences that are unique to this OS and devices". Aborted release: In May 2020, amid the COVID-19 pandemic, Microsoft changed its plans for Windows 10X; chief product officer for Microsoft Windows and Office Panos Panay announced that Microsoft and its partners planned to launch Windows 10X with only single-screen devices, stating that "we need to focus on meeting customers where they are now", and would "continue to look for the right moment, in conjunction with our OEM partners, to bring dual-screen devices to market". However, this did not occur, and Windows 10X was ultimately cancelled the following year. Aspects of the OS were eventually incorporated into Windows 11. Microsoft released the Surface Neo's Android counterpart, the Surface Duo, in September 2020, but the Neo was quietly shelved, with no confirmation from Microsoft beyond the postponement of dual-screen devices and the cancellation of its planned operating system. Reports from testers indicated that the Neo felt cramped to use in a laptop posture due to its 9-inch displays, and that the prototype models were prone to overheating. In a 2022 interview, Microsoft chief product officer Panos Panay stated of devices such as the Surface Neo that "Whether it's two screens or a foldable, I do think these are realities to the future of products being made, no doubt. Or a rollable for that matter, a rollable screen. It's maybe not something I've decided on, but for sure how do we serve the form factor that's going to adapt to the person I think is the way to think about it."
**Game viewer vehicle** Game viewer vehicle: A game viewing vehicle or safari vehicle is an off-road vehicle that is converted or modified to carry many people, seated in positions in which they can view game in game reserves. These vehicles are usually open and do not have roofs; this improves visibility and keeps obstructions out of the way. They are usually four-wheel drive and have a load area on the back which, when converted, houses the seats. Game viewer vehicle: Many companies carry out these modifications: vehicles are either bought and then taken to a conversion company, or bought by the conversion company itself, converted, and then sold. Some vehicle manufacturers sell these vehicles ready-made. Game viewer vehicle: These vehicles are usually only seen in countries with game reserves, mainly in East and Southern Africa. Common vehicles converted for game viewing are the Land Rover Defender, the Toyota Land Cruiser 70 Series and other common off-road vehicles. These are ideal vehicles as they are made to carry 9 passengers besides the ranger, which allows interaction and personalized attention during a game drive. The larger game lodges in the public game reserves, like the Kruger National Park and Pilanesberg National Park in South Africa, operate modified trucks which can seat 25 passengers. Electric safari vehicles: Safari companies began introducing electric safari vehicles in the mid-2010s. There is some discrepancy over which company and country first introduced an EV for safaris, but many high-end lodges have switched to exclusively using solar-powered vehicles, including boats. In addition to Land Rover and Toyota, Rivian manufactures EVs for safaris.
**Conformational isomerism** Conformational isomerism: In chemistry, conformational isomerism is a form of stereoisomerism in which the isomers can be interconverted just by rotations about formally single bonds (refer to figure on single bond rotation). While any two arrangements of atoms in a molecule that differ by rotation about single bonds can be referred to as different conformations, conformations that correspond to local minima on the potential energy surface are specifically called conformational isomers or conformers. Conformations that correspond to local maxima on the energy surface are the transition states between the local-minimum conformational isomers. Rotations about single bonds involve overcoming a rotational energy barrier to interconvert one conformer to another. If the energy barrier is low, there is free rotation and a sample of the compound exists as a rapidly equilibrating mixture of multiple conformers; if the energy barrier is high enough, there is restricted rotation, and a molecule may exist for a relatively long time period as a stable rotational isomer or rotamer (an isomer arising from hindered single-bond rotation). When the time scale for interconversion is long enough for isolation of individual rotamers (usually arbitrarily defined as a half-life of interconversion of 1000 seconds or longer), the isomers are termed atropisomers (see: atropisomerism). The ring-flip of substituted cyclohexanes constitutes another common form of conformational isomerism. Conformational isomerism: Conformational isomers are thus distinct from the other classes of stereoisomers (i.e. configurational isomers), where interconversion necessarily involves breaking and reforming of chemical bonds. For example, L/D- and R/S- configurations of organic molecules have different handedness and optical activities, and can only be interconverted by breaking one or more bonds connected to the chiral atom and reforming a similar bond in a different direction or spatial orientation. They also differ from geometric (cis/trans) isomers, another class of stereoisomers, which require the π-component of double bonds to break for interconversion. (Although the distinction is not always clear-cut, since certain bonds that are formally single bonds actually have double-bond character that becomes apparent only when secondary resonance contributors are considered, like the C–N bonds of amides, for instance.) Due to rapid interconversion, conformers are usually not isolable at room temperature. Conformational isomerism: The study of the energetics of different conformations is referred to as conformational analysis. It is useful for understanding the stability of different isomers, for example, by taking into account the spatial orientation and through-space interactions of substituents. In addition, conformational analysis can be used to predict and explain product selectivity, mechanisms, and rates of reactions. Conformational analysis also plays an important role in rational, structure-based drug design. Types: Upon rotation about their carbon–carbon bonds, the molecules ethane and propane each have three local energy minima. These minima are structurally and energetically equivalent, and are called the staggered conformers. For each molecule, the three substituents emanating from each carbon–carbon bond are staggered, with each H–C–C–H dihedral angle (and H–C–C–CH3 dihedral angle in the case of propane) equal to 60° (or approximately equal to 60° in the case of propane).
The three eclipsed conformations, in which the dihedral angles are zero, are transition states (energy maxima) connecting two equivalent energy minima, the staggered conformers. Types: The butane molecule is the simplest molecule for which single bond rotations result in two types of nonequivalent structures, known as the anti- and gauche-conformers (see figure). Types: For example, butane has three conformers relating to its two methyl (CH3) groups: two gauche conformers, which have the methyls ±60° apart and are enantiomeric, and an anti conformer, where the four carbon centres are coplanar and the substituents are 180° apart (refer to free energy diagram of butane). The energy difference between gauche and anti is 0.9 kcal/mol, associated with the strain energy of the gauche conformer. The anti conformer is, therefore, the most stable (≈ 0 kcal/mol). The three eclipsed conformations with dihedral angles of 0°, 120°, and 240° are transition states between conformers. Note that the two eclipsed conformations have different energies: at 0° the two methyl groups are eclipsed, resulting in higher energy (≈ 5 kcal/mol) than at 120°, where the methyl groups are eclipsed with hydrogens (≈ 3.5 kcal/mol). While simple molecules can be described by these types of conformations, more complex molecules require the use of the Klyne–Prelog system to describe the different conformers. More specific examples of conformational isomerism are detailed elsewhere: Ring conformation Cyclohexane conformations, including chair and boat conformations among others. Types: Cycloalkane conformations, including medium rings and macrocycles Carbohydrate conformation, which includes cyclohexane conformations as well as other details. Allylic strain – energetics related to rotation about the single bond between an sp2 carbon and an sp3 carbon. Atropisomerism – due to restricted rotation about a bond. Folding, including the secondary and tertiary structure of biopolymers (nucleic acids and proteins). Akamptisomerism – due to restricted inversion of a bond angle. Free energy and equilibria of conformational isomers: Equilibrium of conformers Conformational isomers exist in a dynamic equilibrium, where the relative free energies of the isomers determine the population of each isomer and the energy barrier of rotation determines the rate of interconversion between isomers:

$$K = e^{-\Delta G^\circ / RT},$$

where K is the equilibrium constant, ΔG° is the difference in standard free energy between the two conformers in kcal/mol, R is the universal gas constant (1.987×10⁻³ kcal/(mol·K)), and T is the system's temperature in kelvins. In units of kcal/mol at 298 K,

$$K \approx 10^{-\Delta G^\circ / (1.36\ \text{kcal/mol})}.$$

Free energy and equilibria of conformational isomers: Thus, every 1.36 kcal/mol corresponds to a factor of about 10 in terms of the equilibrium constant at temperatures around room temperature. (The "1.36 rule" is useful in general for estimating equilibrium constants at room temperature from free energy differences. At lower temperatures, a smaller energy difference is needed to obtain a given equilibrium constant.) Three isotherms are given in the diagram depicting the equilibrium distribution of two conformers at different temperatures. At a free energy difference of 0 kcal/mol, this gives an equilibrium constant of 1, meaning that the two conformers exist in a 1:1 ratio. The two have equal free energy; neither is more stable, so neither predominates compared to the other.
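These relations are easy to compute directly. The following is a minimal Python sketch; the function name is an illustrative choice, and the butane value of −0.47 kcal/mol is the one quoted in the next paragraph:

```python
import math

R = 1.987e-3  # universal gas constant, kcal/(mol*K)

def equilibrium_constant(delta_g, temp=298.0):
    """K = exp(-dG/RT), with dG in kcal/mol and temp in kelvins."""
    return math.exp(-delta_g / (R * temp))

# Butane, gauche -> anti at 298 K: dG = -0.47 kcal/mol
K = equilibrium_constant(-0.47)
anti = K / (1 + K)  # fraction of the more stable (anti) conformer
print(f"K = {K:.2f}; gauche:anti = {1 - anti:.0%}:{anti:.0%}")  # K = 2.21; 31%:69%

# The "1.36 rule": each -1.36 kcal/mol multiplies K by about 10 at 298 K
print(equilibrium_constant(-1.36))  # ~9.9
```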
A negative difference in free energy means that a conformer interconverts to a thermodynamically more stable conformation; thus the equilibrium constant will always be greater than 1. For example, the ΔG° for the transformation of butane from the gauche conformer to the anti conformer is −0.47 kcal/mol at 298 K. This gives an equilibrium constant of about 2.2 in favor of the anti conformer, or a 31:69 mixture of gauche:anti conformers at equilibrium. Conversely, a positive difference in free energy means the conformer already is the more stable one, so the interconversion is an unfavorable equilibrium (K < 1). Even for highly unfavorable changes (large positive ΔG°), the equilibrium constant between two conformers can be increased by increasing the temperature, so that the amount of the less stable conformer present at equilibrium increases (although it always remains the minor conformer). Free energy and equilibria of conformational isomers: Population distribution of conformers The fractional population distribution of different conformers follows a Boltzmann distribution:

$$\frac{N_i}{N_{\text{total}}} = \frac{e^{-E_i/RT}}{\sum_{k=1}^{M} e^{-E_k/RT}}.$$

Free energy and equilibria of conformational isomers: The left-hand side is the proportion of conformer i in an equilibrating mixture of M conformers in thermodynamic equilibrium. On the right side, Ek (k = 1, 2, ..., M) is the energy of conformer k, R is the molar ideal gas constant (approximately equal to 8.314 J/(mol·K) or 1.987 cal/(mol·K)), and T is the absolute temperature. The denominator of the right side is the partition function. Free energy and equilibria of conformational isomers: Factors contributing to the free energy of conformers The effects of electrostatic and steric interactions of the substituents, as well as orbital interactions such as hyperconjugation, are responsible for the relative stability of conformers and their transition states. The contributions of these factors vary depending on the nature of the substituents and may contribute either positively or negatively to the energy barrier. Computational studies of small molecules such as ethane suggest that electrostatic effects make the greatest contribution to the energy barrier; however, the barrier is traditionally attributed primarily to steric interactions. Free energy and equilibria of conformational isomers: In the case of cyclic systems, the steric effect and contribution to the free energy can be approximated by A values, which measure the energy difference between a substituent on cyclohexane in the axial and in the equatorial position. In large (>14 atom) rings, there are many accessible low-energy conformations which correspond to the strain-free diamond lattice. Isolation or observation of conformational isomers: The short timescale of interconversion precludes the separation of conformational isomers in most cases. Atropisomers are conformational isomers which can be separated due to restricted rotation. The equilibrium between conformational isomers can be observed using a variety of spectroscopic techniques. Isolation or observation of conformational isomers: Protein folding also generates stable conformational isomers which can be observed. The Karplus equation relates the dihedral angle of vicinal protons to their J-coupling constants as measured by NMR. The equation aids in the elucidation of protein folding as well as the conformations of other rigid aliphatic molecules.
Protein side chains exhibit rotamers, whose distribution is determined by their steric interaction with different conformations of the backbone. This is evident from statistical analysis of the conformations of protein side chains in the backbone-dependent rotamer library. Isolation or observation of conformational isomers: In cyclohexane derivatives, the two chair conformers interconvert rapidly at room temperature, with cyclohexane itself undergoing the ring-flip at a rate of approximately $10^5$ ring-flips per second; with an overall energy barrier of 10 kcal/mol (42 kJ/mol), this precludes their separation at ambient temperatures. However, at low temperatures below the coalescence point, one can directly monitor the equilibrium by NMR spectroscopy and, by dynamic, temperature-dependent NMR spectroscopy, the barrier to interconversion. The dynamics of conformational (and other kinds of) isomerism can be monitored by NMR spectroscopy at varying temperatures. The technique applies to barriers of 8–14 kcal/mol, and species exhibiting such dynamics are often called "fluxional". Isolation or observation of conformational isomers: Besides NMR spectroscopy, IR spectroscopy is used to measure conformer ratios. For the axial and equatorial conformers of bromocyclohexane, νCBr differs by almost 50 cm−1. Conformation-dependent reactions: Reaction rates are highly dependent on the conformation of the reactants. In many cases the dominant product arises from the reaction of the less prevalent conformer, by virtue of the Curtin-Hammett principle. This is typical for situations where the conformational equilibration is much faster than the reaction to form the product. The dependence of a reaction on stereochemical orientation is therefore usually only visible in configurational isomers, in which a particular conformation is locked by substituents. Prediction of the rates of many reactions involving the transition between sp2 and sp3 states, such as ketone reduction, alcohol oxidation or nucleophilic substitution, is possible if all conformers and their relative stability, ruled by their strain, are taken into account. One example with configurational isomers is provided by elimination reactions, which involve the simultaneous removal of a proton and a leaving group from vicinal or antiperiplanar positions under the influence of a base. Conformation-dependent reactions: The mechanism requires that the departing atoms or groups follow antiparallel trajectories. For open chain substrates this geometric prerequisite is met by at least one of the three staggered conformers. For some cyclic substrates such as cyclohexane, however, an antiparallel arrangement may not be attainable depending on the substituents, which might set a conformational lock. Adjacent substituents on a cyclohexane ring can achieve antiperiplanarity only when they occupy trans diaxial positions (that is, both are in the axial position, one going up and one going down). Conformation-dependent reactions: One consequence of this analysis is that trans-4-tert-butylcyclohexyl chloride cannot easily eliminate but instead undergoes substitution (see diagram below), because the most stable conformation has the bulky t-Bu group in the equatorial position, so the chloride group is not antiperiplanar with any vicinal hydrogen (it is gauche to all four). The thermodynamically unfavored conformation has the t-Bu group in the axial position, which is higher in energy by more than 5 kcal/mol (see A value).
As a result, the t-Bu group "locks" the ring in the conformation where it is in the equatorial position, and the substitution reaction is observed. On the other hand, cis-4-tert-butylcyclohexyl chloride undergoes elimination because antiperiplanarity of Cl and H can be achieved when the t-Bu group is in the favorable equatorial position. Conformation-dependent reactions: The repulsion between an axial t-butyl group and hydrogen atoms in the 1,3-diaxial position is so strong that the cyclohexane ring will revert to a twisted boat conformation. The strain in cyclic structures is usually characterized by deviations from ideal bond angles (Baeyer strain), ideal torsional angles (Pitzer strain) or transannular (Prelog) interactions. Alkane stereochemistry: Alkane conformers arise from rotation around sp3-hybridised carbon–carbon sigma bonds. The smallest alkane with such a chemical bond, ethane, exists as an infinite number of conformations with respect to rotation around the C–C bond. Two of these are recognised as energy minimum (staggered conformation) and energy maximum (eclipsed conformation) forms. The existence of specific conformations is due to hindered rotation around sigma bonds, although a role for hyperconjugation is proposed by a competing theory. Alkane stereochemistry: The importance of energy minima and energy maxima is seen by extension of these concepts to more complex molecules, for which stable conformations may be predicted as minimum-energy forms. The determination of stable conformations has also played a large role in the establishment of the concept of asymmetric induction and the ability to predict the stereochemistry of reactions controlled by steric effects. Alkane stereochemistry: In the example of staggered ethane in Newman projection, a hydrogen atom on one carbon atom has a 60° torsional angle (or torsion angle) with respect to the nearest hydrogen atom on the other carbon, so that steric hindrance is minimised. The staggered conformation is more stable by 12.5 kJ/mol than the eclipsed conformation, which is the energy maximum for ethane. In the eclipsed conformation the torsional angle is minimised. Alkane stereochemistry: In butane, the two staggered conformations are no longer equivalent and represent two distinct conformers: the anti-conformation (left-most, below) and the gauche conformation (right-most, below). Both conformations are free of torsional strain, but, in the gauche conformation, the two methyl groups are in closer proximity than the sum of their van der Waals radii. The interaction between the two methyl groups is repulsive (van der Waals strain), and an energy barrier results. Alkane stereochemistry: A measure of the potential energy stored in butane conformers with greater steric hindrance than the anti-conformer ground state is given by these values: gauche conformer – 3.8 kJ/mol; eclipsed H and CH3 – 16 kJ/mol; eclipsed CH3 and CH3 – 19 kJ/mol. The eclipsed methyl groups exert a greater steric strain because of their greater electron density compared to lone hydrogen atoms. Alkane stereochemistry: The textbook explanation for the existence of the energy maximum for an eclipsed conformation in ethane is steric hindrance, but, with a C-C bond length of 154 pm and a van der Waals radius for hydrogen of 120 pm, the hydrogen atoms in ethane are never in each other's way. The question of whether steric hindrance is responsible for the eclipsed energy maximum is a topic of debate to this day.
One alternative to the steric hindrance explanation is based on hyperconjugation as analyzed within the Natural Bond Orbital framework. In the staggered conformation, one C-H sigma bonding orbital donates electron density to the antibonding orbital of the other C-H bond. The energetic stabilization of this effect is maximized when the two orbitals have maximal overlap, occurring in the staggered conformation. There is no overlap in the eclipsed conformation, leading to a disfavored energy maximum. On the other hand, an analysis within quantitative molecular orbital theory shows that 2-orbital-4-electron (steric) repulsions are dominant over hyperconjugation. A valence bond theory study also emphasizes the importance of steric effects. Alkane stereochemistry: Nomenclature Naming alkanes per the standards listed in the IUPAC Gold Book is done according to the Klyne–Prelog system for specifying angles (called either torsional or dihedral angles) between substituents around a single bond: a torsion angle between 0° and ± 90° is called syn (s); a torsion angle between ± 90° and 180° is called anti (a); a torsion angle between 30° and 150° or between –30° and –150° is called clinal (c); a torsion angle between 0° and ± 30° or between ± 150° and 180° is called periplanar (p); a torsion angle between 0° and ± 30° is called synperiplanar (sp), also called the syn- or cis-conformation; a torsion angle between 30° and 90° or between –30° and –90° is called synclinal (sc), also called gauche or skew; a torsion angle between 90° and 150° or between –90° and –150° is called anticlinal (ac); a torsion angle between ± 150° and 180° is called antiperiplanar (ap), also called the anti- or trans-conformation. Torsional strain or "Pitzer strain" refers to resistance to twisting about a bond. Alkane stereochemistry: Special cases In n-pentane, the terminal methyl groups experience additional pentane interference. Replacing hydrogen by fluorine in polytetrafluoroethylene changes the stereochemistry from the zigzag geometry to that of a helix due to electrostatic repulsion of the fluorine atoms in the 1,3 positions. Evidence for the helix structure in the crystalline state is derived from X-ray crystallography and from NMR spectroscopy and circular dichroism in solution.
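The synperiplanar/synclinal/anticlinal/antiperiplanar ranges listed in the Nomenclature section above translate directly into a small classifier. Here is a minimal Python sketch; the function name and the treatment of boundary angles are illustrative assumptions:

```python
def klyne_prelog(torsion_deg):
    """Classify a torsion angle (degrees) per the Klyne-Prelog ranges above."""
    # Fold any input angle into 0..180; the sign does not affect the class.
    t = abs((torsion_deg + 180.0) % 360.0 - 180.0)
    if t <= 30.0:
        return "synperiplanar (sp, syn/cis)"
    if t < 90.0:
        return "synclinal (sc, gauche/skew)"
    if t < 150.0:
        return "anticlinal (ac)"
    return "antiperiplanar (ap, anti/trans)"

for angle in (0, 60, -60, 120, 180):
    print(angle, klyne_prelog(angle))
# 0 -> sp, +/-60 -> sc (the gauche butane conformers), 120 -> ac, 180 -> ap
```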
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Multi-fragment algorithm** Multi-fragment algorithm: The multi-fragment (MF) algorithm is a heuristic or approximation algorithm for the travelling salesman problem (TSP) (and related problems). This algorithm is also sometimes called the "greedy algorithm" for the TSP. Multi-fragment algorithm: The algorithm builds a tour for the traveling salesman one edge at a time and thus maintains multiple tour fragments, each of which is a simple path in the complete graph of cities. At each stage, the algorithm selects the cheapest remaining edge that neither gives a city a third incident tour edge nor closes a cycle shorter than the full tour; the chosen edge therefore either creates a new fragment, extends or merges existing fragments, or, at the final step, closes a cycle of length equal to the number of cities, completing the tour (see the sketch below).
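A minimal sketch of this procedure for the Euclidean case follows. The function name and the union-find bookkeeping are illustrative choices rather than a reference implementation, and no attention is paid to efficiency (all n(n−1)/2 edges are sorted up front); at least three cities are assumed.

```python
import math
from itertools import combinations

def multi_fragment_tour(points):
    """Multi-fragment (greedy-edge) TSP heuristic: a minimal sketch.

    `points` is a list of (x, y) city coordinates; returns the tour as
    a list of city indices. Union-find tracks fragments so that edges
    closing a premature cycle can be rejected.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    degree = [0] * n
    adj = [[] for _ in range(n)]
    edges = sorted(
        combinations(range(n), 2),
        key=lambda e: math.dist(points[e[0]], points[e[1]]),
    )
    added = 0
    for u, v in edges:
        if degree[u] == 2 or degree[v] == 2:
            continue  # would give a city three tour edges
        ru, rv = find(u), find(v)
        if ru == rv and added < n - 1:
            continue  # would close a cycle before all cities are joined
        parent[ru] = rv
        degree[u] += 1; degree[v] += 1
        adj[u].append(v); adj[v].append(u)
        added += 1
        if added == n:
            break  # final edge closed the full tour
    # Walk the single remaining cycle to list the tour order.
    tour, prev, cur = [0], None, 0
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == 0:
            break
        tour.append(nxt)
        prev, cur = cur, nxt
    return tour
```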
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Euler filter** Euler filter: In computer graphics, an Euler filter is a filter intended to prevent gimbal lock and related discontinuities in animation data sets in which rotation is expressed in terms of Euler angles. These discontinuities are caused by the many-to-one mapping from Euler angle triples to 3D rotations. This allows the data set to flip between different Euler angle combinations that correspond to a single 3D rotation and which, although remaining continuous in the space of rotations, are discontinuous in the Euler angle parameter space. The Euler filter chooses, on a sample-by-sample basis, between the possible Euler angle representations of each 3D rotation in the data set in such a way as to preserve the continuity of the Euler angle time series, without changing the actual 3D rotations. Euler filtering is available in a number of 3D animation packages.
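A naive sketch of the per-sample choice described above, assuming angles in degrees, an x-y-z rotation order, and the standard flip identity (x, y, z) ≡ (x+180°, 180°−y, z+180°) for that order; production implementations in animation packages are more careful than this:

```python
import numpy as np

def euler_filter(angles_deg):
    """Naive Euler filter over an (N, 3) array of x-y-z Euler angles.

    For each frame it considers the equivalent "flipped" representation
    and whole-turn unwinding of each channel, and keeps whichever
    candidate is closest to the previous (already filtered) frame.
    The underlying 3D rotations are left unchanged.
    """
    def unwind(value, reference):
        # Shift by whole turns so `value` lies within 180 deg of `reference`.
        while value - reference > 180.0:
            value -= 360.0
        while value - reference < -180.0:
            value += 360.0
        return value

    out = np.asarray(angles_deg, dtype=float).copy()
    for i in range(1, len(out)):
        prev = out[i - 1]
        x, y, z = out[i]
        # Candidate 1: the sample as given. Candidate 2: its flipped
        # equivalent, representing the same rotation (assumed x-y-z order).
        candidates = [(x, y, z), (x + 180.0, 180.0 - y, z + 180.0)]
        best, best_cost = None, None
        for cx, cy, cz in candidates:
            cx = unwind(cx, prev[0])
            cy = unwind(cy, prev[1])
            cz = unwind(cz, prev[2])
            cost = abs(cx - prev[0]) + abs(cy - prev[1]) + abs(cz - prev[2])
            if best_cost is None or cost < best_cost:
                best, best_cost = (cx, cy, cz), cost
        out[i] = best
    return out
```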
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polymer sponge** Polymer sponge: Taking cues from spongy toddler toys that absorb water and inflate to bigger sizes, scientists at the Mayo Clinical Research Centre, Rochester, Minnesota, United States have developed biodegradable polymer grafts that, when surgically placed in damaged vertebrae, are intended to grow to just the right size and shape to fix the spinal column. Any problem with the backbone of a vertebrate is often considered a potential disability: it can limit a person's ability to manoeuvre their way around their surroundings, cause a great deal of pain, and be responsible for mental distress. The problem has been researched by Lichun Lu and Xifeng Liu, scientists from Mayo Clinic's college of medicine, who have developed a novel spinal graft that, once surgically placed in the body, will grow to be just the right size and shape to fix the spinal column. They presented their work at the 251st National Meeting & Exposition of the non-profit organization American Chemical Society (ACS). Problem: Current treatments for spinal tumours are considered too expensive and invasive. When cancer metastasizes, it predominantly tends to settle in the spinal column, so a different approach to replacing harmed vertebrae has been investigated. The researchers presented this work in March 2016 at a meeting of the American Chemical Society (ACS). Solution: Doctors can cut out the affected bone tissue (or replace it outright, as they did in the Sydney case), but that leaves large gaps in the spine. Normally, doctors would either have to open the chest cavity and access the spine from the far side (which entails a lengthy recovery and a high probability of complications), or they would make a small incision in the neck or back and inject expandable titanium rods into the bone gap (which is very expensive because of the titanium). This new technique combines the easy access and short recovery of the titanium rod method with the low cost of the open-chest operation. The use of sponges for the treatment of such problems has long been suggested. Procedure: Doctors simply cut a small hole in the patient's neck or back and inject a hydrogel polymer into the bone gap, much the same way they would a titanium rod. This polymer absorbs fluids from within the wound and grows to fill the gap. Doctors control how far the polymer expands in any specific direction by first inserting a "cage", basically a pre-expanded shell that the polymer fills in as it spreads. Think of it as the wooden frame that keeps a freshly poured concrete sidewalk in place until it hardens. Once the polymer fills in the cage, which takes 5 to 10 minutes on average, it will set and harden into a viable prosthetic. From there, surrounding bone tissue grows into and through the polymer, reinforcing and cementing it in place. Process: The sponge-like polymer, polycaprolactone (PCL), shows promise as a medical material that can be used to fill gaps in human bones and serve as a scaffold to promote new bone growth. Injuries, birth defects (such as cleft lips and palates), or the removal of tumors in the case of bone cancer can create gaps in bone that are too large to heal naturally. The gaps may dramatically alter a person's phenotypic appearance when they occur in the head, face, or jaw. 
Transplant rejection: While there is a strong possibility that a transplant will be rejected, various complications may be averted by the use of techniques like bone marrow transplantation, blood transfusion, T-lymphocyte modification, and similar methods. The use of the polymer sponge in this field is still in its infancy, and research on the biotechnological applications needed to make the concept available to humans and animals may require more attentive financing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Grits** Grits: Grits are a type of porridge made from boiled cornmeal. Hominy grits are a type of grits made from hominy – corn that has been treated with an alkali in a process called nixtamalization, with the pericarp (ovary wall) removed. Grits are cooked in warm salted water or milk. They are often served with flavorings as a breakfast dish, and can be savory or sweet, with savory seasonings being more common. Grits are similar to other thick maize-based porridges from around the world, such as polenta and mieliepap. The dish originated in the Southern United States but is now available nationwide. Grits are often part of the dinner entrée shrimp and grits, served primarily in the South. The word "grits" is derived from the Old English word grytt, meaning "coarse meal". In the Charleston, South Carolina area, cooked hominy grits were primarily referred to as "hominy" until the 1980s. Origin: The dish originated with the Native American Muscogee tribe, who used a corn similar to hominy. American colonists learned to make the dish from the Native Americans, and it quickly became an American staple. At that time, the hominy for grits was ground on a stone mill. The ground hominy was passed through screens, the finer sifted material used as grit meal, and the coarser as grits. Three-quarters of the grits sold in the U.S. are bought in the South, in an area stretching from Lower Texas to Washington, D.C., that is sometimes called the "grits belt". The state of Georgia declared grits to be its official prepared food in 2002. A similar bill was introduced in South Carolina to name it the official state food, but it did not advance. Nevertheless, South Carolina still has an entire chapter of legislation dealing exclusively with corn meal and grits. State law in South Carolina requires grits and rice meal to be enriched, similar to the requirement for flour. Grits may be either yellow or white, depending on the color of the corn used. The most common version in supermarkets is "quick" grits, which have the germ and hull removed. Whole-kernel grits are sometimes called "speckled". Preparation: Grits are prepared by mixing water or milk with cornmeal and stirring the mixture over heat. Whole-grain grits take much longer to become soft than "quick grits". Dishes: Grits are eaten with a wide variety of foods, such as eggs and bacon, fried catfish, shrimp, salmon croquettes, or country ham. Shrimp and grits is a traditional dish in the coastal communities of the South Carolina Lowcountry and Georgia's Lower Coastal Plain. Solidified cooked grits can be sliced and fried in vegetable oil, butter, or bacon grease, or they can first be breaded in beaten egg and bread crumbs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Correspondence (algebraic geometry)** Correspondence (algebraic geometry): In algebraic geometry, a correspondence between algebraic varieties V and W is a subset R of V×W that is closed in the Zariski topology. In set theory, a subset of a Cartesian product of two sets is called a binary relation or correspondence; thus, a correspondence here is a relation that is defined by algebraic equations. There are some important examples, even when V and W are algebraic curves: for example, the Hecke operators of modular form theory may be considered as correspondences of modular curves. Correspondence (algebraic geometry): However, the definition of a correspondence in algebraic geometry is not completely standard. For instance, Fulton, in his book on intersection theory, uses the definition above. In the literature, however, a correspondence from a variety X to a variety Y is often taken to be a subset Z of X×Y such that Z is finite and surjective over each component of X. Note the asymmetry in this latter definition: it speaks of a correspondence from X to Y rather than a correspondence between X and Y. The typical example of the latter kind of correspondence is the graph of a function f:X→Y, written out below. Correspondences also play an important role in the construction of motives (cf. presheaf with transfers).
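Spelled out in the notation of the definition above, the graph correspondence looks as follows (a standard illustration, with notation of our choosing):

```latex
% Graph of a morphism f : X -> Y, viewed as a correspondence from X to Y.
\[
  \Gamma_f \;=\; \{\, (x, f(x)) : x \in X \,\} \;\subseteq\; X \times Y .
\]
% The first projection restricts to an isomorphism \Gamma_f \to X, so
% \Gamma_f is finite and surjective over each component of X, exactly
% as the asymmetric definition requires.
```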
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital morphogenesis** Digital morphogenesis: Digital morphogenesis is a type of generative art in which complex shape development, or morphogenesis, is enabled by computation. This concept is applicable in many areas of design, art, architecture, and modeling. The concept was originally developed in the field of biology, and later in geology, geomorphology, and architecture. In architecture, it describes tools and methods for creating forms and adapting them to a known environment. Developments in digital morphogenesis have allowed construction and analysis of structures in more detail than could have been put into a blueprint or model by hand, with structure at all levels defined by iterative algorithms (see the toy sketch below). As fabrication techniques advance, it is becoming possible to produce objects with fractal or other elaborate structures. Notable persons: Alan Turing, Neri Oxman, Rivka Oxman, Birger Ragnvald Sevaldson.
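As a toy illustration of form defined by iterative algorithms, here is a minimal L-system rewriter; the rules shown are Lindenmayer's classic algae example, chosen for brevity rather than taken from any project discussed above:

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite a string according to production rules.

    L-systems of this kind are a classic generative technique of the
    sort digital morphogenesis builds on: global form emerges from the
    repeated application of simple local rules.
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae model: two symbols, two rules.
print(lsystem("A", {"A": "AB", "B": "A"}, 5))  # -> ABAABABAABAAB
```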
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Social comparison theory** Social comparison theory: Social comparison theory, initially proposed by social psychologist Leon Festinger in 1954, centers on the belief that there is a drive within individuals to gain accurate self-evaluations. The theory explains how individuals evaluate their own opinions and abilities by comparing themselves to others in order to reduce uncertainty in these domains and learn how to define the self. Comparing oneself to others socially is a form of measurement and self-assessment to identify where an individual stands according to their own set of standards and emotions about themselves. Following the initial theory, research began to focus on social comparison as a way of self-enhancement, introducing the concepts of downward and upward comparisons and expanding the motivations of social comparisons. Social comparison can be traced back to a pivotal paper by Herbert Hyman in 1942. Hyman showed that the assessment of one's own status depends on the group with whom one compares oneself. Social comparison theory also holds that media influence, social status, and other forms of competitiveness can affect our self-esteem and mood; in turn, this can affect individuals' outlook on themselves and how they fit in with others. Leon Festinger: Leon Festinger was an American psychologist who developed the concept of social comparison theory. Festinger was born in New York City on May 8, 1919. An interest in science led him to pursue a career in psychology. He received his bachelor's degree from the City College of New York and went on to the University of Iowa for his master's degree and Ph.D., which he received in 1942. Leon Festinger made his mark in social psychology by teaching the importance of scientific experimentation while challenging the influence of behaviorism. Initial framework: In the theory, Festinger provided nine main hypotheses: First, he stated that humans have a basic drive to evaluate their opinions and abilities and that people evaluate themselves through objective, nonsocial means (Hypothesis I). Second, Festinger stated that if objective, nonsocial means were not available, people evaluate their opinions and abilities by comparison to other people (Hypothesis II). Next, he hypothesized that the tendency to compare oneself to another person decreases as the difference between their opinions and abilities becomes more divergent. In other words, if someone is much different from you, you are less likely to compare yourself to that person (Hypothesis III). He next hypothesized that there is a unidirectional drive upward in the case of abilities, which is largely absent in opinions. This drive refers to the value that is placed on doing better and better (Hypothesis IV). Initial framework: Next, Festinger hypothesizes that there are non-social restraints that make it difficult or even impossible to change one's ability, and that these restraints are largely absent for opinions. In other words, people can change their opinions when they want to, but no matter how motivated individuals may be to improve their ability, other elements may make this impossible (Hypothesis V). Initial framework: Festinger goes on to hypothesize that the cessation of comparison with others is accompanied by hostility or derogation to the extent that continued comparison with those persons implies unpleasant consequences (Hypothesis VI). 
Initial framework: Next, any factors that increase the importance of some particular group as a comparison group for some particular opinion or ability will increase the pressure toward uniformity concerning that ability or opinion within that group. If discrepancies arise between the evaluator and the comparison group, there is a tendency to reduce the divergence either by attempting to persuade others or by changing one's personal views to attain uniformity. However, the importance, relevance, and attraction to a comparison group that affect the original motivation for comparison mediate the pressures toward uniformity (Hypothesis VII). Initial framework: His next hypothesis states that if persons who are very divergent from one's own opinion or ability are perceived as different from oneself on attributes consistent with the divergence, the tendency to narrow the range of comparability becomes stronger (Hypothesis VIII). Initial framework: Lastly, Festinger hypothesized that when there is a range of opinion or ability in a group, the relative strength of the three manifestations of pressures toward uniformity will be different for those who are close to the mode of the group than for those who are distant from the mode. Those close to the mode will have stronger tendencies to change the positions of others, weaker tendencies to narrow the range of comparison, and even weaker tendencies to change their own opinions (Hypothesis IX). Theoretical advances: Since its inception, the initial framework has undergone several advances. Key among these are developments in understanding the motivations that underlie social comparisons and the particular types of social comparisons that are made. Motives that are relevant to social comparison include self-enhancement, maintenance of a positive self-evaluation, components of attributions and validation, and the avoidance of closure. While there have been changes in Festinger's original concept, many fundamental aspects remain, including the prevalence of the tendency towards social comparison and the general process that is social comparison. Compare and contrast self-evaluation to self-enhancement: According to Thornton and Arrowood, self-evaluation is one of the functions of social comparison. This is one process that underlies how an individual engages in social comparison. Each individual's specific goals will influence how they engage in social comparison. For self-evaluation, people tend to choose a comparison target that is similar to themselves. Specifically, they are most interested in choosing a target who shares some distinctive characteristic with themselves. They also think that knowing the truth about themselves is salutary. Research suggests that most people believe that choosing a similar target helps ensure the accuracy of the self-evaluation. However, individuals do not always act as unbiased self-evaluators, and accurate self-evaluations may not be the primary goal of social comparison. Many studies have shown that American women tend to be dissatisfied with their looks, rating themselves as "too plain, old, pimply, fat, hairy, tall" and so on. Women are much more sensitive than men about their physical appearance. Because the media digitally alters women's appearance, from the width of their torso or arms to the softness of their complexion, it creates the ideal that thin and flawless is the only acceptable way to look. 
This leads to diet culture and excessive exercise, and has led to many eating disorders. This form of social comparison can cause harm and can affect the development of the way someone sees themselves. Individuals may also seek self-enhancement, or to improve their self-esteem. They may interpret, distort, or ignore the information gained by social comparison to see themselves more positively and further their self-enhancement goals. People also seek self-enhancement because holding favorable illusions about themselves is gratifying. They will also choose to make upward (comparing themselves to someone better off) or downward (comparing themselves to someone worse off) comparisons, depending on which strategy will further their self-enhancement goals. They may also avoid making comparisons altogether, or avoid making certain types of comparisons. Specifically, when an individual believes that their ability in a specific area is low, they will avoid making upward social comparisons in that area. Unlike for self-evaluation goals, people engaging in social comparison with the goal of self-enhancement may not seek out a target that is similar to themselves. In fact, if a target's similarity is seen as a threat, because the target outperforms the individual on some dimension, the individual may downplay the similarity of the target to themselves. This notion ties closely to cognitive dissonance, a phenomenon also introduced by Leon Festinger himself. Dissonance is psychologically uncomfortable, which motivates a person to remove it; the greater the dissonance, the greater the pressure to remove it and the discomfort it causes. One does not want to perceive oneself in a way that would undermine the original belief on which one's self-esteem is based, and therefore, in order to reduce the cognitive dissonance, one is willing to change the cognitive representation of the person with whom one compares oneself, such that one's own belief about oneself remains intact. This effectively leads to the comparison of apples to oranges, or to psychological denial. Compare and contrast self-evaluation to self-enhancement: Later advances in theory led to self-enhancement being recognized as one of the four self-evaluation motives, along with self-assessment, self-verification, and self-improvement. Compare and contrast self-evaluation to self-enhancement: Upward and downward social comparisons Wills introduced the concept of downward comparison in 1981. Downward social comparison is a defensive tendency that is used as a means of self-evaluation. When a person looks to another individual or group that they consider to be worse off than themselves in order to feel better about their self or personal situation, they are making a downward social comparison. Research has suggested that social comparisons with others who are better off or superior, or upward comparisons, can lower self-regard, whereas downward comparisons can elevate self-regard. Downward comparison theory emphasizes the positive effects of comparisons in increasing one's subjective well-being. For example, it has been found that breast cancer patients made the majority of comparisons with patients less fortunate than themselves. Ashby found similar results in an experiment showing downward comparison in people subjected to distress from a physical illness such as heart disease or cancer. 
Such patients also looked to those who had recovered from the same illness, and the study found that they tended to be more optimistic about their own recovery. Although social comparison research has suggested that upward comparisons can lower self-regard, Collins indicates that this is not always the case. Individuals make upward comparisons, whether consciously or subconsciously, when they compare themselves with an individual or comparison group that they perceive as superior or better than themselves, in order to improve their views of self or to create a more positive perception of their personal reality. Upward social comparisons are made to self-evaluate and self-improve in the hope that self-enhancement will also occur. In an upward social comparison, people want to believe themselves to be part of the elite or superior, and make comparisons highlighting the similarities between themselves and the comparison group, unlike in a downward social comparison, where similarities between individuals or groups are disassociated. It has also been suggested that upward comparisons may provide an inspiration to improve, and in one study it was found that while breast cancer patients made more downward comparisons, they showed a preference for information about more fortunate others. Another study indicated that people who were dieting often used upward social comparisons by posting pictures of thinner people on their refrigerators. These pictures served not only as a reminder of an individual's current weight, but also as an inspiration, a goal to be reached. In simple terms, downward social comparisons are more likely to make us feel better about ourselves, while upward social comparisons are more likely to motivate us to achieve more or reach higher. Compare and contrast self-evaluation to self-enhancement: Moderators of social comparison Aspinwall and Taylor looked at mood, self-esteem, and threat as moderators that drive individuals to choose to make upward or downward social comparisons. Downward comparisons in cases where individuals had experienced a threat to their self-esteem produced more favorable self-evaluations. Compare and contrast self-evaluation to self-enhancement: High self-esteem and social comparison Aspinwall and Taylor found that upward social comparisons were beneficial in circumstances where the individuals making the comparisons had high self-esteem, because these types of comparisons provided them with more motivation and hope than downward social comparisons. However, if these individuals had experienced a recent threat or setback to their self-esteem, they reported that upward comparisons resulted in a more negative affect than downward comparisons. Compare and contrast self-evaluation to self-enhancement: Low self-esteem and social comparison However, people with low self-esteem or people who are experiencing some sort of threat in their life (such as doing poorly in school, or suffering from an illness) tend to favor downward comparisons over upward comparisons. People with low self-esteem and negative affect improve their mood by making downward comparisons. Their mood does not improve as much as it would if they had high self-esteem. Even for people with low self-esteem, these downward social comparisons do improve their negative mood and allow them to feel hope and motivation for their future. However, the harshness with which such individuals judge themselves, in success and failure alike, can keep these feelings of hope from translating into success. 
Lower self-esteem can lead individuals to set higher standards for themselves that they may never achieve because of the judgement they direct at themselves from within. Compare and contrast self-evaluation to self-enhancement: Affect/mood and its effect on social comparison Individuals who are in a negative mood improve their mood by making upward social comparisons, regardless of their level of self-esteem. In addition, both individuals with high self-esteem and those with low self-esteem who are in a positive mood elevate their mood further by making upward comparisons. However, for those who have recently experienced a threat to their self-esteem or a setback in their life, making upward social comparisons instead of downward social comparisons results in a more negative affect. Self-esteem and the existence of a threat or setback in an individual's life are thus two moderators of their response to upward or downward comparisons. Compare and contrast self-evaluation to self-enhancement: Competitiveness Because individuals are driven upwards in the case of abilities, social comparisons can drive competition among peers. In this regard, the psychological significance of a comparison depends on the social status of an individual and the context in which their abilities are being evaluated. Compare and contrast self-evaluation to self-enhancement: Social status Competitiveness resulting from social comparisons may be greater in relation to higher social status because individuals with more status have more to lose. In one study, students in a classroom were presented with a bonus-point program in which, based on chance, the grades of some students would increase while the grades of others would remain the same. Despite the fact that students could not lose by this program, higher-status individuals were more likely to object to it, and more likely to report a perceived distributive injustice. It was suggested that this was a cognitive manifestation of an aversion to downward mobility, which has more psychological significance when an individual has more status. Compare and contrast self-evaluation to self-enhancement: Proximity to a standard When individuals are evaluated where meaningful standards exist, such as in an academic classroom where students are ranked, competitiveness increases as proximity to a standard of performance increases. When the only meaningful standard is the top, then high-ranking individuals are most competitive with their peers, and individuals at low and intermediate ranks are equally competitive. However, when both high and low rankings hold significance, individuals at high and low ranks are equally competitive, and both are more competitive than individuals at intermediate ranks. Compare and contrast self-evaluation to self-enhancement: Models of social comparison Several models have been introduced into social comparison research, including the self-evaluation maintenance model (SEM), the proxy model, the triadic model, and the three-selves model. Self-evaluation maintenance model The SEM model proposes that we make comparisons to maintain or enhance our self-evaluations, focusing on the antagonistic processes of comparison and reflection. Compare and contrast self-evaluation to self-enhancement: Abraham Tesser has conducted research on self-evaluation dynamics that has taken several forms. A self-evaluation maintenance (SEM) model of social behavior focuses on the consequences of another person's outstanding performance on one's own self-evaluation. 
It sketches out some conditions under which the other's good performance bolsters self-evaluation, i.e., "basking in reflected glory", and conditions under which it threatens self-evaluation through a comparison process. Compare and contrast self-evaluation to self-enhancement: Proxy model The proxy model anticipates success at something that is unfamiliar. The model proposes that if a person has been successful at or is familiar with a task, then he or she would also be successful at a new, similar task. The proxy is evaluated on ability and is concerned with the question "Can I do X?" A proxy's comparison is based on previous attributes. The opinion of the comparer and whether the proxy exerted maximum effort on a preliminary task are variables influencing his or her opinion. Compare and contrast self-evaluation to self-enhancement: Triadic model The triadic model builds on the attribution elements of social comparison, proposing that opinions in social comparison are best considered in terms of three different evaluative questions: preference assessment (i.e., "Do I like X?"), belief assessment (i.e., "Is X correct?"), and preference prediction (i.e., "Will I like X?"). In the triadic model, the most meaningful comparisons are with a person who has already experienced a proxy and exhibits consistency in related attributes or past preferences. Compare and contrast self-evaluation to self-enhancement: Three-selves model The three-selves model proposes that social comparison theory is a combination of two different theories. One theory is developed around motivation and the factors that influence the type of social comparison information people seek from their environment, and the second concerns self-evaluation and the factors that influence the effects of social comparisons on judgments of the self. While there has been much research in the area of comparison motives, there has been little in the area of comparative evaluation. Explaining that the self is conceived as a set of interrelated conceptions accessible depending upon the current judgment context, and taking a cue from social cognitive theory, this model examines the assimilation effect and distinguishes three classes of working self-concept ideas: individual selves, possible selves, and collective selves. Media influence: The media has been found to play a large role in social comparisons. Researchers examining the social effects of the media using social comparison theory have found that in most cases women tend to engage in upward social comparisons with a target other, which results in more negative feelings about the self. The majority of women have a daily opportunity to make upward comparisons by measuring themselves against some form of societal ideal. Social comparisons have become a relevant mechanism for learning about appearance-related social expectations among peers and for evaluating the self in terms of those standards (Jones, 2001, p. 647). Media influence: Although men do make upward comparisons, research finds that more women make upward comparisons and are comparing themselves with unrealistically high standards presented in the media. As women are shown more mainstream media images of powerful, successful, and thin women, they perceive the "ideal" to be the norm for societal views of attractiveness. In recent years, social media platforms such as Facebook and Instagram have made this more widespread, since social media makes it easier to compare yourself to the "ideal". 
Some women have reported making upward comparisons in a positive manner for the purposes of self-motivation, but the majority of upward comparisons are made when the individual is feeling lesser and therefore evoke a negative connotation. Media influence: Self-perceived similarities with role models on social media can also affect self-esteem for both men and women. Having more self-perceived similarities with a role model can help increase self-esteem, while having fewer can decrease it. Social comparison with peers on social media can also lead to feelings of self-pity or satisfaction. The desire for social comparison can cause FoMO and compulsive checking of social media sites. Criticisms: Many criticisms have arisen regarding Festinger's similarity hypothesis. Deutsch and Krauss argued that people actually seek out dissimilar others in their comparisons, maintaining that this is important for providing valuable self-knowledge, as demonstrated in research. Ambiguity also circulated about the important dimensions for similarity. Goethals and Darley clarified the role of similarity, suggesting that people prefer to compare with those who are similar on related attributes, such as opinions, characteristics, or abilities, to increase confidence in value judgments; however, those dissimilar in related attributes are preferred when validating one's beliefs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GEISA** GEISA: GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) is a computer-accessible spectroscopic database, designed to facilitate accurate forward radiative-transfer calculations using a line-by-line and layer-by-layer approach. It was started in 1974 at the Laboratoire de Météorologie Dynamique (LMD) in France. GEISA is maintained by the ARA group at LMD (Ecole Polytechnique) for its scientific part, and by the ETHER group (CNRS, Centre National de la Recherche Scientifique, France) at IPSL (Institut Pierre Simon Laplace) for its technical part. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI (the Infrared Atmospheric Sounding Interferometer on board the METOP European satellite) through the GEISA/IASI database derived from GEISA.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ulysses (text editor)** Ulysses (text editor): Ulysses is a text editor for Apple macOS, iPad, and iPhone. It is targeted at creative writers who do not want to worry about text layout, formatting, or other distractions, and who want to focus on their words. It supports Markdown for basic formatting. History: Ulysses was named after the novel Ulysses by James Joyce. The software was originally released for Mac OS; support for iPhone and iPad was added in version 2.5. Since version 11, Ulysses has been licensed under a subscription model (SaaS).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hydroxylamine reductase (NADH)** Hydroxylamine reductase (NADH): In enzymology, a hydroxylamine reductase (NADH) (EC 1.7.1.10) is an enzyme that catalyzes the chemical reaction: NH3 + NAD+ + H2O ⇌ hydroxylamine + NADH + H+. The three substrates of this enzyme are NH3, NAD+, and H2O, whereas its three products are hydroxylamine, NADH, and H+. Hydroxylamine reductase (NADH): This enzyme belongs to the family of oxidoreductases, specifically those acting on other nitrogenous compounds as donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is ammonium:NAD+ oxidoreductase. Other names in common use include hydroxylamine reductase, ammonium dehydrogenase, NADH-hydroxylamine reductase, N-hydroxy amine reductase, hydroxylamine reductase (NADH2), and NADH2:hydroxylamine oxidoreductase. This enzyme participates in nitrogen metabolism.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zooming (filmmaking)** Zooming (filmmaking): In filmmaking and television production, zooming is the technique of changing the focal length of a zoom lens (and hence the angle of view) during a shot – this technique is also called a zoom. The technique allows a change from close-up to wide shot (or vice versa) during a shot, giving a cinematographic degree of freedom. But unlike changes in camera position, zooming does not change the perspective (the relative sizes of near and far objects); it only magnifies or reduces the size of the entire image as a whole. Zooming (filmmaking): Zooming can be performed either towards longer focal lengths, giving a "zoom in" effect, in which the filmed object increases in apparent size and fewer objects are visible on film, or towards shorter focal lengths, giving a "zoom out" effect, in which the filmed object shrinks in apparent size and more objects come into view. Zooming (filmmaking): The speed of the zoom allows for a further degree of cinematographic freedom. Combined with a dolly camera move, it is possible to create the dolly zoom effect (sketched below). Noticeable cinematographic examples of the use of slow zooms include the 1975 film Barry Lyndon by Stanley Kubrick, the 1979 film Stalker by Andrei Tarkovsky, and the 1994 film Sátántangó by Béla Tarr.
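A rough sketch of the geometry behind the dolly zoom: under a pinhole/thin-lens approximation, the subject's image size scales with focal length divided by subject distance, so holding that ratio constant keeps the framing fixed while the perspective changes. The function and values below are illustrative assumptions:

```python
def dolly_zoom_focal_length(f0_mm, d0_m, d_m):
    """Focal length needed to hold subject framing during a dolly move.

    Image size is proportional to f/d in the pinhole approximation, so
    keeping f/d constant keeps the subject the same apparent size while
    the background perspective shifts (the dolly zoom).
    f0_mm: initial focal length; d0_m: initial subject distance;
    d_m: current subject distance.
    """
    return f0_mm * (d_m / d0_m)

# Start at 24 mm, 2 m from the subject; dolly back to 4 m:
print(dolly_zoom_focal_length(24.0, 2.0, 4.0))  # 48.0 mm keeps the framing
```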
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shiba Kumar Rai** Shiba Kumar Rai: Shiba Kumar Rai is a Professor of medical microbiology and a member of the National Planning Commission (NPC) of the Government of Nepal. He is also founding chairman of Shi-Gan Health Foundation, Shi-Gan Int’l College of Science & Technology (SICOST), Nat’l Institute of Tropical Medicine & Public Health Research, DASHIMURA Foundation & DEVIS Multipurpose (P) Ltd. He has been awarded the “Mahendra Vidhya Bhshan ‘Ga’, ‘Kha’ & ‘Ka’ gold medals” (all three classes) by the Government of Nepal (from the then king of Nepal) for securing the ‘first class first’ position in his Bachelor and Master degrees and for doing a PhD (in Medicine), respectively. Books/Chapter in books: Prof. Rai has authored/coauthored 25 health/medical science related books and/or chapters in books. They are:
1. Practical Hematology, TU Institute of Medicine 1979 (in Nepali)
2. Practical Biochemistry, TU Institute of Medicine 1979 (in Nepali)
3. Laboratory Accidents, First Aid & Prevention, TU Institute of Medicine Family Health Project, 1983 (in Nepali)
4. Human Parasitology. Tribhuvan Univ Inst of Med Family Health Project / UNFPA, Kathmandu, Nepal 1985 (in Nepali).
5. Notes on Medical Microbiology; Leela & Munnu, Ktm, Nepal 1987.
6. (Editors: Nakanishi M, Shrestha HG & Rai SK): Text Book of Medical Laboratory Technology, JICA Med Edu Project, Tribhuvan Univ Inst of Med, Ktm, Nepal, 1996.
7. Parasitology (Chapter-IX); In: Text Book of Medical Laboratory Technology, Edit: Nakanishi M, Shrestha HG & Rai SK, JICA Med Edu Project, Tribhuvan Univ Inst of Med, Ktm, Nepal, 1996: 445~599.
8. Serology and Immunology (Chapter-X). In: Text Book of Medical Laboratory Technology, Edit: Nakanishi M, Shrestha HG & Rai SK, JICA Med Edu Project, Tribhuvan Univ Inst of Med, Ktm, Nepal, 1996: 601~33.
9. Rai SK, Uga S, Kataoka N & Matsumura T: Atlas of Medical Parasitology; Kobe Univ School of Med, Kobe, Japan 1996.
10. Matsumura T, Rahman MS, Rai SK, Saito A & Uga S: Biological Contamination of Environment and Its Control Measures: Present Situation of Sand Pit Contamination by Dogs and Cats. In: All About Antimicrobials; Edit: Yuge O, Yokoyama H & Sakagami Y; Sen-I Sha, Osaka, Japan 1997: 316~24 (Text in Japanese).
11. Rai SK (Editor): Manual of Basic Laboratory Techniques for Peripheral Health Care Centers in Developing Countries. Laligurans Kai, Japan, 2001. Also contributed the following chapters in the manual: • Rai SK, Rai G & Ishiyama S: Parasitology (Chapter-V); Page: 93~118, and • Rai SK, Rai G & Ono K: Semen Analysis (Chapter-VII); Page: 135~8.
12. Rai SK, Thapa M & Sharma BK. Practical Microbiology for Undergraduate Medical Students. SMB, Ktm, Nepal 2003.
13. Kurokawa M, Ono K & Rai SK. Viruses Causing Pediatric Diarrhea. In: Current Trends in Pediatrics Vol. 1 (Chapter 24); Edit: Mathur GP & Mathur S. Academa Publishers, Delhi, India 2005: 227~37.
14. Rai SK. Parasitic Diseases in Nepal. In: Asian Parasitology, Vol. 1: Food-borne Helminthiasis in Asia; Editor-in-chief: Yano A; Vol. Edit: Arizono N, Chai JY, Nawa Y & Takahashi Y. Federation of Asian Parasitologists, Japan 2005: 305~18.
15. Rai SK. Toxoplasma Infection in Nepal: An Overview. In: Asian Parasitology, Vol. 4: Toxoplasmosis and Babesiosis in Asia; Edit-in-chief: Yano A; Vol. Edit: Yano A, Nam H-W, Anuar AK, Shen J, Saito A & Igarashi I. Federation of Asian Parasitologists, Japan 2005: 82~96.
16. Rai SK, Kurokawa M & Ono K. Viruses Causing Respiratory Infections in Children. In: Current Trends in Pediatrics Vol. 2 (Chapter 15); Edit: Mathur GP & Mathur S. Academa Publishers, Delhi, India 2006: 99~110.
17. Kurokawa M, Ono K & Rai SK. Congenital Infection. In: Current Trends in Pediatrics, Vol. 3; Edit: Mathur GP & Mathur S. Academa Publishers, Delhi, India 2007: 97~119.
18. Ohno Y, Hirai K, Rai SK, Sherchand JB & Shrestha M. Food consumption patterns, nutrient intake, and serum components among middle-aged and elderly Nepalese. In: Nutrition for the Middle Aged and Elderly. Edit: Bernhardt NE & Kasko AM. Nova Sci Publishers, Inc., New York, USA 2008: 11~29.
19. Kimura K, Rai SK & Ono K. Environmental Health. In: The Global Health in Developing Countries - The Health Care in Nepal. Edit: Ono K & Yufune S. Fukuro Suppan, Okayama, Japan 2009: 17~36 (Text in Japanese).
20. Rai SK: Nepalese Health System: Community-Based Health Workers (CBHWs) and Female Community Health Volunteers (FCHVs), Chapter-2 in Global Health in Asia; Edit: Uga S, Uesugi Y, Osawa K, Ono K, Kido Y, Shintani M, Tanaka K & Horie O; Kobe Univ Graduate School of Health Sci, Kobe, Japan 2012: 19~42.
21. Rai SK. Zoonotic Parasitic Diseases in Nepal. In: Parasitic Zoonoses in Asian-Pacific Regions 2012; Edit: Tokoro M & Uga S (First Edition); Sankeisha Co, Nagoya, Japan 2013: 10-5.
22. Gupta BP, Rai SK & Sherchand JB. Text Book of Medical Virology. Bhundipuran Prakashan, Ktm, Nepal, Jan 2015.
23. Kurokawa M, Ono K & Rai SK. Viral Infections (Measles, Rubella, Varicella-Zoster, Roseola, Epstein-Barr Virus & Parvovirus B19). In: Text Book of Pediatrics. Edit: Mathur GP, Mathur S & Faridi MMA. CBS Publishers & Distributers Pvt. Ltd. (New Delhi, Bengaluru, Chennai, Kochi, Mumbai, Pune), India; Feb. 2015: 367~375.
24. Rai SK. Changing Trends of Infectious Diseases in Nepal. In: Infectious Disease & Nanomedicine III. Edit: Adhikari R & Thapa S. Advances in Experimental Medicine & Biology, Vol. 1052, Springer, Singapore; May 2018: 19~38.
25. Amatya R & Rai SK. Biofilm Producing Microorganisms and Clinical Implications in the Context of Developing Countries. Chapter in: Emerging Concepts in Bacterial Biofilms: Molecular Mechanisms and Control Strategies. Edit: Thomas S et al., The Cambridge Scholars Publishing Ltd, England, UK, 2019 (accepted for publication).
Research Papers: To date, Prof. Rai has contributed 186 research papers to peer-reviewed/indexed professional journals and appears in Google Scholar (“Shiba Kumar Rai, Nepal”). Research Papers: Prof. Rai received the “best research paper award” in 2007 (from the Prime Minister of Nepal) and subsequently in 2011 (from the President of Nepal). Recently (April 12, 2019) he was also awarded the “Health Research Lifetime Achievement Award-2018” by the Nepal Health Research Council (Government of Nepal) in recognition of his 179 research papers and 25 books and/or chapters in books (till 2018), the highest number among Nepalese researchers. Prof. Rai is the first Nepali to receive this award. Social Works: He founded the Hattigaunda Welfare Society (Sewa Samaj) in 1997 and established a sisterly relationship with Doshou Kai (Alumni Association) and the Extension Center of Kobe Tokiwa College in Kobe, Japan. Under this relationship, student and faculty exchanges have taken place every alternate year since the start of the relationship in 1997. For the purpose of health research, he has also founded the Nat'l Institute of Tropical Medicine and Public Health Research (NITMPHR). 
The Dashimura Foundation is a new social organization he has founded; it is currently working on the establishment of a "senior citizen care home", together with other social work focused on quality education (by establishing prizes/awards to be given to the best-performing students). He is also involved in Lions Club International.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thermal simulations for integrated circuits** Thermal simulations for integrated circuits: Miniaturizing components has always been a primary goal in the semiconductor industry because it cuts production cost and lets companies build smaller computers and other devices. Miniaturization, however, has increased dissipated power per unit area and made it a key limiting factor in integrated circuit performance. Temperature increase becomes relevant for wires of relatively small cross-section, where it may affect normal semiconductor behavior. Besides, since the generation of heat is proportional to the frequency of operation for switching circuits, fast computers have larger heat generation than slow ones, an undesired effect for chip manufacturers. This article summarizes physical concepts that describe the generation and conduction of heat in an integrated circuit, and presents numerical methods that model heat transfer from a macroscopic point of view. Generation and transfer of heat: Fourier's law At the macroscopic level, Fourier's law states a relation between the transmitted heat per unit time per unit area and the gradient of temperature: $q = -\kappa \nabla T$, where $\kappa$ is the thermal conductivity, in [W·m−1·K−1]. Generation and transfer of heat: Joule heating Electronic systems work based on current and voltage signals. Current is the flow of charged particles through the material, and these particles (electrons or holes) interact with the lattice of the crystal, losing energy, which is released in the form of heat. Joule heating is the predominant mechanism for heat generation in integrated circuits and is an undesired effect in most cases. For an ohmic material, it has the form $Q = j^2 \rho$, where $j$ is the current density in [A·m−2], $\rho$ is the specific electric resistivity in [Ω·m], and $Q$ is the generated heat per unit volume in [W·m−3]. Generation and transfer of heat: Heat-transfer equation The governing equation of the heat-transfer problem relates the flux of heat in space, its variation in time, and the generation of power by the following expression: $\nabla \cdot (\kappa(T)\,\nabla T) + g = \rho C \,\frac{\partial T}{\partial t}$, where $\kappa$ is the thermal conductivity, $\rho$ is the density of the medium, $C$ is the specific heat, $\alpha = \kappa/(\rho C)$ is the thermal diffusivity, and $g$ is the rate of heat generation per unit volume. Heat diffuses from the source following the above equation, and the solution in a homogeneous medium follows a Gaussian distribution. Techniques to solve heat equation: Kirchhoff transformation To get rid of the temperature dependence of $\kappa$, the Kirchhoff transformation can be performed: $\theta = T_s + \frac{1}{\kappa_s}\int_{T_s}^{T} \kappa(T')\,dT'$, where $\kappa_s = \kappa(T_s)$ and $T_s$ is the heat-sink temperature. When applying this transformation, the heat equation becomes $\alpha \nabla^2 \theta + \frac{\alpha}{\kappa_s}\, g = \frac{\partial \theta}{\partial t}$, where $\alpha = \kappa/(\rho C)$ is the diffusivity, which also depends on the temperature. To completely linearize the equation, a second transformation is employed, $\alpha_s \tau = \int_0^t \alpha(\theta)\,dt'$, yielding the expression $\nabla^2 \theta - \frac{1}{\alpha_s}\frac{\partial \theta}{\partial \tau} = -\frac{g}{\kappa_s}$. Simple, direct application of this equation requires approximation: additional terms arising in the transformed Laplacian are dropped, leaving the Laplacian in its conventional form.
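The Kirchhoff transformation above is a single one-dimensional integral over temperature, so it is straightforward to evaluate numerically. A minimal sketch, with an assumed illustrative conductivity law (not measured data):

```python
import numpy as np

def kirchhoff_theta(T, kappa, T_s, n=2000):
    """Kirchhoff transformation: a minimal numerical sketch.

    Maps a physical temperature T to the transformed variable
    theta = T_s + (1/kappa_s) * integral_{T_s}^{T} kappa(T') dT',
    evaluated with the trapezoidal rule. `kappa` is any callable
    conductivity model.
    """
    kappa_s = kappa(T_s)
    Tp = np.linspace(T_s, T, n)          # integration grid T_s .. T
    integral = np.trapz(kappa(Tp), Tp)   # integral of kappa(T') dT'
    return T_s + integral / kappa_s

# Illustrative power-law conductivity for a semiconductor-like material
# (assumed form and numbers, chosen only for demonstration):
kappa = lambda T: 100.0 * (300.0 / T) ** 1.25   # W m^-1 K^-1
print(kirchhoff_theta(400.0, kappa, 300.0))     # transformed temperature
```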
Techniques to solve heat equation: Analytical solutions Although analytical solutions can only be found for specific and simple cases, they give good insight for dealing with more complex situations. Analytical solutions for regular subsystems can also be combined to provide detailed descriptions of complex structures. In Prof. Batty's work, a Fourier series expansion of the temperature in the Laplace domain is introduced to find the solution to the linearized heat equation. Techniques to solve heat equation: Example This procedure can be applied to a simple but nontrivial case: a homogeneous cubic die made of GaAs, with L = 300 µm. The goal is to find the temperature distribution on the top surface. The top surface is discretized into smaller squares with index i = 1...N; one of them is considered to be the source. Techniques to solve heat equation: Taking the Laplace transform of the heat equation gives $\nabla^2 \bar{\Theta} - \frac{s}{\alpha_s}\,\bar{\Theta} = 0$, where $\bar{\Theta} = s\bar{\theta} - \theta(\tau = 0)$. The function $\bar{\Theta}$ is expanded in terms of cosine functions for the x and y variables, and in terms of hyperbolic cosines and sines for the z variable. Next, by applying adiabatic boundary conditions at the lateral walls and a fixed temperature at the bottom (the heat-sink temperature), the thermal impedance matrix equation is derived: $\Delta\theta_i = \sum_{j=1}^{N} R_{TH,ij}(t)\, P_j(t)$, where the index j runs over the power sources, while the index i refers to each small area. Techniques to solve heat equation: For more details about the derivation, please see Prof. Batty's paper. Techniques to solve heat equation: The figure below shows the steady-state temperature distribution of this analytical method for a cubic die with dimension 300 µm. A constant power source of 0.3 W is applied over a central surface of dimensions 0.1L × 0.1L. As expected, the distribution decays as it approaches the boundaries; its maximum is located at the center and almost reaches 400 K. Numerical solutions Numerical solutions use a mesh of the structure to perform the simulation. The most popular methods are the finite-difference time-domain (FDTD) method, the finite element method (FEM), and the method of moments (MoM). Techniques to solve heat equation: The finite-difference time-domain (FDTD) method is a robust and popular technique that consists of numerically solving the differential equations together with the boundary conditions defined by the problem. This is done by discretizing space and time and using finite-difference formulas, so that the partial differential equations that describe the physics of the problem can be solved numerically by computer programs.
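A minimal sketch of an explicit finite-difference update for the 2D version of the heat equation above, in the spirit of the FDTD/FDM approach just described. The grid, material values, and source are illustrative assumptions rather than the article's exact 3D GaAs example:

```python
import numpy as np

def fdm_heat_step(T, g, dx, dt, alpha, kappa):
    """One explicit finite-difference update of the 2D heat equation,

        dT/dt = alpha * laplacian(T) + g / (rho * C),

    on a uniform grid; since alpha = kappa/(rho*C), the source term
    g/(rho*C) is written as alpha*g/kappa. Boundary rows and columns are
    held fixed (an isothermal, heat-sink-like boundary, since g is zero
    there). Stability requires dt <= dx**2 / (4 * alpha).
    """
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1]
    ) / dx**2
    return T + dt * (alpha * lap + alpha * g / kappa)

# Toy setup: a 1 mm x 1 mm plate at 300 K with a central heat source.
n, dx = 101, 1e-5                  # grid points and spacing [m]
alpha, kappa = 3.1e-5, 55.0        # illustrative GaAs-like values
T = np.full((n, n), 300.0)
g = np.zeros((n, n))
g[45:55, 45:55] = 1e12             # W/m^3, assumed source density
dt = 0.2 * dx**2 / alpha           # safely below the stability limit
for _ in range(5000):
    T = fdm_heat_step(T, g, dx, dt, alpha, kappa)
print(T.max())                     # peak temperature after ~3 ms
```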
The FEM is also a numerical scheme employed to solve engineering and mathematical problems described by differential equations together with boundary conditions. It discretizes the space into smaller elements, and basis functions are assigned to their nodes or edges. The basis functions are linear or higher-order polynomials. Applying the differential equation and the boundary conditions of the problem to the basis functions, a system of equations is formulated using either the Ritz or the Galerkin method. Finally, a direct or iterative method is employed to solve the resulting system of linear equations. For the thermal case, the FEM method is more suitable due to the nonlinear nature of the thermal properties. Techniques to solve heat equation: Example The previous example can be solved with a numerical method. For this case, the cube can be discretized into rectangular elements, and the basis functions can be chosen as a first-order (linear) approximation: $N_i^e = \frac{1}{2}\xi(1 \mp \zeta)$ for $i = 1, 4$; $N_i^e = \frac{1}{2}\eta(1 \mp \zeta)$ for $i = 2, 5$; and $N_i^e = \frac{1}{2}(1 - \xi - \eta)(1 \mp \zeta)$ for $i = 3, 6$, where $\zeta = 2(z - z_c)/h_z$; if $z_c = 0$, then $\zeta = 2z/h_z$. Using these basis functions and applying Galerkin's method to the heat-transfer equation, a matrix equation is obtained: $[S]\{\theta\} + [R]\frac{d}{dt}\{\theta\} = \{B\}$, where $R_{ij} = \int_V N_j N_i \, dV$, $S_{ij} = k \int_V \nabla N_j \cdot \nabla N_i \, dV$, and $B_i = \frac{k}{\kappa_s}\int_{\Omega_1} N_i\, p(x, y)\, d\Omega + \frac{k}{\kappa_s}\int_V N_i\, g\, dV - k\, T_o \sum_{j=0}^{N_D} \int_V \nabla N_j^D \cdot \nabla N_i \, dV$. These expressions can be evaluated using a simple FEM code. The figure below shows the temperature distribution for the numerical solution. It shows very good agreement with the analytical case; its peak also reaches 390 K at the center. The apparent lack of smoothness of the distribution comes from the first-order approximation of the basis functions, and this can be solved by using higher-order basis functions. Also, better results might be obtained by employing a denser mesh of the structure; however, for very dense meshes the computation time increases greatly, making the simulation impractical. Techniques to solve heat equation: The next figure shows a comparison of the peak temperature as a function of time for both methods; the system reaches steady state in approximately 1 ms. Model order reduction Numerical methods such as FEM or FDM derive a matrix equation, as shown in the previous section. To solve this equation faster, a method called model order reduction (MOR) can be employed to find an approximation of lower order. This method is based on the fact that a high-dimensional state vector belongs to a low-dimensional subspace [1]. Techniques to solve heat equation: The figure below shows the concept of the MOR approximation: given a suitable matrix V, the dimension of the system can be reduced and a simplified system solved instead. Therefore, the original system of equations $C\{x\}' + K\{x\} = F\{u\}$ becomes $V^T C V \{z\}' + V^T K V \{z\} = V^T F \{u\}$, whose order is much lower than the original, making the computation much less expensive (see the projection sketch after the conclusion). Once the solution is obtained, the original vector is recovered by taking the product with V. Conclusion: The generation of heat is mainly produced by Joule heating; this undesired effect has limited the performance of integrated circuits. In the present article, heat conduction was described, and analytical and numerical methods to solve a heat-transfer problem were presented. Using these methods, the steady-state temperature distribution was computed, as well as the peak temperature as a function of time for a cubic die. For an input power of 0.3 W (or $3.333 \times 10^{8}$ W/m²) applied over a single surface source on the top of a cubic die, a peak temperature increase on the order of 100 K was computed. Such an increase in temperature can affect the behavior of surrounding semiconductor devices: important parameters like mobility change drastically. That is why heat dissipation is a relevant issue that must be considered in circuit design.
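The projection step described in the model-order-reduction section is a few lines of linear algebra. A minimal sketch, using a random orthonormal basis as a stand-in for a properly constructed V (e.g., built from Krylov or POD vectors):

```python
import numpy as np

def reduce_system(C, K, F, V):
    """Galerkin projection for model order reduction, as described above.

    Projects C x' + K x = F u onto the subspace spanned by the columns
    of V (n x r, with r << n), giving the reduced system
        (V^T C V) z' + (V^T K V) z = (V^T F) u,    with x ~= V z.
    How V is chosen is outside the scope of this sketch.
    """
    return V.T @ C @ V, V.T @ K @ V, V.T @ F

# Toy usage with random matrices and an orthonormal placeholder basis:
n, r = 200, 10
rng = np.random.default_rng(0)
C = np.eye(n)
K = rng.standard_normal((n, n))
F = rng.standard_normal((n, 1))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # placeholder basis
Cr, Kr, Fr = reduce_system(C, K, F, V)
print(Cr.shape, Kr.shape, Fr.shape)   # (10, 10) (10, 10) (10, 1)
```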
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Society for Integrative and Comparative Biology** Society for Integrative and Comparative Biology: The Society for Integrative and Comparative Biology is organized to integrate the many fields of specialization which occur in the broad field of biology. The society was formed in 1902 as the American Society of Zoologists, through the merger of two societies, the "Central Naturalists" and the "American Morphological Society" (founded in 1890). The Ecological Society of America split from it in 1915, and another society, of geneticists, split from it in 1930. In 1996 the name was changed to the Society for Integrative and Comparative Biology. The society publishes two scientific journals: the bimonthly journal Integrative and Comparative Biology (formerly the American Zoologist) and Evolution & Development. It is organized in a flexible structure with many lightweight divisions. As of 2014, it has approximately 3500 members.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Simics** Simics: Simics is a full-system simulator or virtual platform used to run unchanged production binaries of the target hardware. Simics was originally developed by the Swedish Institute of Computer Science (SICS) and then spun off to Virtutech for commercial development in 1998. Virtutech was acquired by Intel in 2010. Currently, Simics is provided by Intel in a public release and sold commercially by Wind River Systems, formerly a subsidiary of Intel. Simics: Simics contains both instruction set simulators and hardware models, and is or has been used to simulate systems such as Alpha, ARM (32- and 64-bit), IA-64, MIPS (32- and 64-bit), MSP430, PowerPC (32- and 64-bit), RISC-V (32- and 64-bit), SPARC-V8 and V9, and x86 and x86-64 CPUs. Many different operating systems have been run on the various simulated virtual platforms, including Linux, MS-DOS, Windows, VxWorks, OSE, Solaris, FreeBSD, QNX, RTEMS, UEFI, and Zephyr. The NetBSD AMD64 port was initially developed using Simics before the public release of the chip. The purpose of simulation in Simics is often to develop software for a particular type of hardware without requiring access to that precise hardware, using Simics as a virtual platform. This can be applied both to pre-release and pre-silicon software development for future hardware and to existing hardware. Intel uses Simics to provide its ecosystem with access to future platforms months or years ahead of the hardware launch. The current version of Simics is 6, which was released publicly in 2019. Simics runs on 64-bit x86-64 machines running Microsoft Windows and Linux (32-bit support was dropped with the release of Simics 5, since 64-bit provides significant performance advantages and is universally available on current hardware). The previous version, Simics 5, was released in 2015. Simics has the ability to execute a system in the forward and reverse directions. Reverse debugging can illuminate how an exceptional condition or bug occurred. When executing an OS such as Linux in reverse using Simics, previously deleted files reappear when the deletion point is passed in reverse, and scrolling and other graphical display and console updates occur backwards as well. Simics: Simics is built for high-performance execution of full-system models, and uses both binary translation and hardware-assisted virtualization to increase simulation speed. It is natively multithreaded and can simulate multiple target (or guest) processors and boards using multiple host threads. It has been used to run simulations containing hundreds of target processors.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Boundary cell** Boundary cell: Boundary cells (also known as border cells or boundary vector cells) are neurons found in the hippocampal formation that respond to the presence of an environmental boundary at a particular distance and direction from an animal. The existence of cells with these firing characteristics were first predicted on the basis of properties of place cells. Boundary cells were subsequently discovered in several regions of the hippocampal formation: the subiculum, presubiculum and entorhinal cortex. Boundary cell: O'Keefe and Burgess had noted that the firing fields of place cells, which characteristically respond only in a circumscribed area of an animal's environment, tended to fire in 'corresponding' locations when the shape and size of the environment was altered. For example, a place cell that fired in the northeastern corner of a rectangular environment might continue to fire in the northeastern corner when the size of the environment was doubled. To explain these observations, the Burgess and O'Keefe groups developed a computational model (Boundary Vector Cell - or BVC - model) of place cells that relied on inputs sensitive to the geometry of the environment to determine where a given place cell would fire in environments of different shapes and sizes. The hypothetical input cells (BVCs) responded to environmental boundaries at particular distances and allocentric directions from the rat. Boundary cell: Separate studies emerging from different research groups identified cells with these characteristics in the subiculum, entorhinal cortex and pre- and para-subiculum where they were described variously as "BVCs", "boundary cells" and "border cells". These terms are somewhat interchangeable; the critical defining functional characteristics of associated with the different labelling schemes are rather arbitrary and any functional differences in cells found in different anatomical regions are not yet fully clear. For example, neurons classified as "border cells" may include some that fire at short range to any environmental boundary (regardless of direction). Additionally, the BVC model predicted the existence of a small proportion of cells with longer range tunings (i.e., firing parallel to, but at some distance from boundaries) and few such cells have been described to date. In general, although the general predictions of the BVC model regarding the existence of geometric boundary sensitive inputs were confirmed by the empirical observations it prompted, the more detailed characteristics such as the distribution of distance and direction tunings remain to be determined. Boundary cell: In medial entorhinal cortex border/boundary cells comprise about 10% of local population, being intermingled with grid cells and head direction cells. During development MEC border cells (and HD cells but not grid cells) show adult-like firing fields as soon as rats are able to freely explore their environment at around 16-18 days old. This suggests HD and border cells, rather than grid cells, provide the first critical spatial input to hippocampal place cells.
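As an illustration of the BVC idea described above, here is a minimal sketch (Python) of one common formalization, in which a model cell's response is the product of a Gaussian tuning to boundary distance and a Gaussian tuning to allocentric direction; the tuning form and all parameter values here are illustrative assumptions, not figures from the original papers.

import numpy as np

def bvc_rate(d, phi, d_pref, phi_pref, sigma_d=0.1, sigma_phi=0.2):
    """Firing rate of a model boundary vector cell for a boundary segment at
    distance d (metres) and allocentric direction phi (radians) from the animal.
    Product of Gaussian tunings; all parameter values are illustrative."""
    ang = np.angle(np.exp(1j * (phi - phi_pref)))   # wrap angle difference to [-pi, pi]
    return (np.exp(-((d - d_pref) ** 2) / (2 * sigma_d ** 2))
            * np.exp(-(ang ** 2) / (2 * sigma_phi ** 2)))

# A hypothetical cell tuned to boundaries ~30 cm to the animal's east (phi = 0):
print(bvc_rate(d=0.30, phi=0.0, d_pref=0.30, phi_pref=0.0))        # peak response, 1.0
print(bvc_rate(d=0.60, phi=np.pi / 2, d_pref=0.30, phi_pref=0.0))  # weak response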
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OMX Tallinn** OMX Tallinn: The OMX Tallinn (OMXT) is the main stock market index in Estonia. It reflects changes in the prices of shares listed in the Main and Investor lists of the Estonian Stock Exchange, and the Tallinn Stock Exchange. It uses the Paasche Index Formula. The value of the index was calibrated to 100 on 3 June 1996. Before 2005 the index was known as TALSE.
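For background on the formula named above: a Paasche-type index weights current prices by current-period quantities, P = Σ(p_t·q_t) / Σ(p_0·q_t), scaled to the base value. A minimal Python sketch with made-up prices and share counts (not actual OMXT constituents):

def paasche_index(base_prices, curr_prices, curr_quantities, base_value=100.0):
    """Paasche formula: sum(p_t * q_t) / sum(p_0 * q_t), scaled to the base value."""
    num = sum(p * q for p, q in zip(curr_prices, curr_quantities))
    den = sum(p * q for p, q in zip(base_prices, curr_quantities))
    return base_value * num / den

# Hypothetical two-share market: base-date prices vs. prices today.
print(paasche_index(base_prices=[10.0, 2.5],
                    curr_prices=[13.0, 2.0],
                    curr_quantities=[1_000_000, 4_000_000]))   # -> 105.0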
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Taberer report** Taberer report: The Taberer report is a report published in 1907, composed by Henry Taberer and J. Glenn Leary. The work demonstrated evidence of male same-sex relationships in gold mines near Johannesburg, South Africa. Background: Taberer was born on a mission station and was a fluent speaker of the languages used by the local population: he claimed to speak them more fluently than he did English. He was able to use this talent effectively when he became manager of the South African government's Native Labour Bureau and adviser to the Native Recruiting Corporation for the Chamber of Mines at a time of increasing industrial unrest. Leary was another respected official, and he worked as a magistrate. A large disparity between the sexes existed within the Mozambican migrant worker community in South Africa: in 1886 there were 30,000 men but only 90 women of Mozambican descent in the Johannesburg region. Before the establishment of colonial criminal labour systems, homosexual relationships were not punished. Report: Taberer and Leary were tasked with researching "mine marriages" between male African miners. Local missionaries had complained about immoralities that happened in the gold mines, and the complaints resulted in the investigation. Taberer coauthored the report with Leary. The report was based on evidence collected during a nine-day period in January 1907. Testimonies were gathered from 54 African and European witnesses. The questions and answers were remarkably explicit about sexual activity and motivations. A Chopi miner working in the mines explained to Taberer that miners who engaged in homosexual acts with young men tried to avoid contracting a venereal disease. This view is supported by evidence that there were lower rates of venereal disease among Tsonga people compared to those Africans who visited female prostitutes. The report successfully dismissed claims by Reverend Baker that the homosexual relations were violent and took the form of formal marriages. Relationships between miners often included sex, but male "wives" also provided domestic services for their partners. Taberer and Leary proposed several solutions for curtailing homosexual relationships between miners, but they were rejected. For instance, they proposed that large numbers of female wives should be allowed to migrate with the men, or that large-scale prostitution should be allowed. Ultimately, only screens around beds were banned throughout all industrial compounds of South Africa. Report: Reliability: Taberer's neutrality can be questioned, and Taberer and Leary's approach to collecting data minimised the amount of anal sex that was recorded.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sodium arsenide** Sodium arsenide: Sodium arsenide, also known as trisodium arsenide, is the inorganic compound of sodium and arsenic with the formula Na3As. It is a dark colored solid that degrades upon contact with water or air. It is prepared by the reaction of the elements at 200–400 °C. The compound is mainly of interest as exhibiting an archetypal structure. The normal pressure "sodium arsenide" phase is adopted by many alkali metal pnictides. At 3.6 gigapascals, Na3As adopts the Li3Bi structure, which is another archetypal structure. Sodium arsenide is a crystalline solid used as a semiconductor and in photo optic applications. Its IUPAC name is disodioarsanylsodium.
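Written as a balanced equation, the preparation from the elements described above is as follows (the stoichiometry follows directly from the Na3As formula; the temperature range is the one given in the text):

$$3\,\mathrm{Na} + \mathrm{As} \xrightarrow{\ 200\text{--}400\,^\circ\mathrm{C}\ } \mathrm{Na_3As}$$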
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lagrange invariant** Lagrange invariant: In optics the Lagrange invariant is a measure of the light propagating through an optical system. It is defined by $H = n\bar{u}y - nu\bar{y}$, where y and u are the marginal ray height and angle respectively, ȳ and ū are the chief ray height and angle, and n is the ambient refractive index. In order to reduce confusion with other quantities, the symbol Ж may be used in place of H. Ж² is proportional to the throughput of the optical system (related to étendue). For a given optical system, the Lagrange invariant is a constant throughout all space; that is, it is invariant upon refraction and transfer. Lagrange invariant: The optical invariant is a generalization of the Lagrange invariant which is formed using the ray heights and angles of any two rays. For these rays, the optical invariant is a constant throughout all space.
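A quick numerical check of this invariance (a minimal Python sketch; the refraction and transfer rules used are the standard paraxial ones for a thin surface of power phi, and all numbers are illustrative):

def lagrange_invariant(n, u_bar, y, u, y_bar):
    """H = n*u_bar*y - n*u*y_bar, with marginal ray (y, u) and chief ray (y_bar, u_bar)."""
    return n * u_bar * y - n * u * y_bar

# Media indices, surface power, and transfer distance (illustrative values).
n1, n2, phi, d = 1.0, 1.5, 0.02, 5.0
y, u = 10.0, 0.0          # marginal ray: height 10, travelling parallel to the axis
y_bar, u_bar = 0.0, 0.1   # chief ray: crosses the axis at the surface

H0 = lagrange_invariant(n1, u_bar, y, u, y_bar)

# Paraxial refraction: n2*u' = n1*u - y*phi (ray heights unchanged at the surface)
u2 = (n1 * u - y * phi) / n2
u_bar2 = (n1 * u_bar - y_bar * phi) / n2
H1 = lagrange_invariant(n2, u_bar2, y, u2, y_bar)

# Transfer: heights advance by angle*distance, angles unchanged
H2 = lagrange_invariant(n2, u_bar2, y + u2 * d, u2, y_bar + u_bar2 * d)

print(H0, H1, H2)   # all three print 1.0: the invariant survives refraction and transfer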
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flufenamic acid** Flufenamic acid: Flufenamic acid (FFA) is a member of the anthranilic acid derivative (or fenamate) class of nonsteroidal anti-inflammatory drugs (NSAIDs).: 718  Like other members of the class, it is a cyclooxygenase (COX) inhibitor, preventing the formation of prostaglandins. FFA is known to bind to and reduce the activity of prostaglandin F synthase and to activate TRPC6. It is not widely used in humans as it has a high rate (30–60%) of gastrointestinal side effects.: 310  It is generally not available in the US. It is available in some Asian and European countries as a generic drug. Scientists led by Claude Winder from Parke-Davis invented FFA in 1963, along with fellow members of the class: mefenamic acid in 1961 and meclofenamic acid in 1964.: 718  Although flufenamic acid was at one time informally referred to as "Fluffy", this pet name could also refer to flufenoxine.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Allele age** Allele age: Allele age (or mutation age) is the amount of time elapsed since an allele first appeared due to mutation. Estimating the time at which a certain allele appeared allows researchers to infer patterns of human migration, disease, and natural selection. Allele age can be estimated based on (1) the frequency of the allele in a population and (2) the genetic variation that occurs within different copies of the allele, also known as intra-allelic variation. While either of these methods can be used to estimate allele age, the use of both increases the accuracy of the estimation and can sometimes offer additional information regarding the presence of selection. Allele age: Estimating allele age from the allele's frequency relies on the fact that alleles at high frequency are older than alleles at low frequency (assuming the absence of selection). Of course, many alleles of interest are under some type of selection. Because alleles that are under positive selection can rise to high frequency very quickly, it is important to understand the mechanisms that underlie allele frequency change, such as natural selection, gene flow, genetic drift, and mutation. Allele age: Estimating allele age based on intra-allelic variation rests on the fact that with every generation, linkage with other alleles (linkage disequilibrium) is disrupted by recombination, and new variation in linkage is created via new mutations. The analysis of intra-allelic variation to assess allele age depends on coalescent theory. There are two different approaches that can be used to analyze allele age based on intra-allelic variation. First, a phylogenetic approach extrapolates an allele's age by reconstructing a gene tree and dating the root of the tree. This approach is best when analyzing ancient, as opposed to recent, mutations. Second, a population genetics approach estimates allele age by using mutation, recombination, and demography models instead of a gene tree. This type of approach is best for analyzing recent mutations. Allele age: Recently, Albers and McVean (2018) proposed a non-parametric method to estimate the age of an allele, using probabilistic, coalescent-based models of mutation and recombination. Specifically, their method infers the time to the most recent common ancestor (TMRCA) between hundreds or thousands of chromosomal sequence (haplotype) pairs. This information is then combined using a composite likelihood approach to obtain an estimate of the time of mutation at a single locus. This methodology was applied to more than 16 million variants in the human genome, using data from the 1000 Genomes Project and the Simons Genome Diversity Project, to generate the atlas of variant age. History: Population geneticists Motoo Kimura and Tomoko Ohta were the first to analyze the association between an allele's frequency and its age, in the 1970s. They showed that the age of a neutral allele can be estimated (assuming a large, randomly mating population) by $t_1 = \frac{-2p}{1-p}\ln(p)$, where p represents the allele frequency and t₁ is the expected age, measured in units of 2N generations. More recent studies, however, have focused on the analysis of intra-allelic variation. In 1990, Jean-Louis Serre and his team were the first to assess allele age by analyzing intra-allelic variation. Using a sample of 240 French families, they surveyed two restriction fragment length polymorphism (RFLP) sites (E1 and E2) that are closely linked to an allele (ΔF508) at the cystic fibrosis locus (CFTR).
Recombination theory allows for the calculation of x(t), the expected frequency of E2 in association with the allele ΔF508 in generation t, and y, the frequency of E2 on chromosomes without the ΔF508 allele. The recombination rate, c, is assumed to be known, and so the allele age can be calculated as an estimate of t. History: $t = \ln\!\left(\frac{x(t) - y}{1 - y}\right) \Big/ \ln(1 - c)$. Although Serre et al. (1990) were the first to employ this method, it became increasingly popular after the Risch et al. study in 1995, which analyzed alleles in an Ashkenazi Jewish population. Examples of allele age estimations: Many intra-allelic variation studies suggest that disease-causing alleles arose rather recently in human history. Examples of allele age estimations: Cystic fibrosis The Serre et al. (1990) study estimated that an allele causing cystic fibrosis arose approximately 181.4 generations ago; accordingly, they estimated the allele age to be between 3,000 and 6,000 years. However, other studies have obtained drastically different estimates. Morral et al. (1994) suggested a minimum age of 52,000 years. A reanalysis of the Morral et al. (1994) data by Slatkin and Rannala (2000) estimated an allele age of approximately 3,000 years, which is consistent with the Serre et al. (1990) results. Examples of allele age estimations: AIDS-resistance allele (CCR5) A 32-base-pair deletion at the CCR5 locus results in resistance to infection by HIV, which causes AIDS. Individuals who are homozygous for the mutation experience complete resistance to the infection, while heterozygotes experience only partial resistance, resulting in a delayed onset of AIDS. A study by Stephens et al. in 1998 suggested that this allele originated approximately 27.5 generations, or 688 years, ago. These results were obtained using intra-allelic variation analysis. The same study also used the allele frequency and the Kimura-Ohta model to estimate allele age. This method provided very different results, suggesting that the allele appeared more than 100,000 years ago. Stephens et al. argue that the discrepancy between these age estimates strongly suggests recent positive selection for the CCR5 mutation. Because the CCR5 mutation also offers resistance to smallpox, these results are consistent with the idea that the CCR5 mutation first rose to higher frequency due to positive selection during smallpox outbreaks in European history, before being positively selected for due to its role in HIV resistance. Examples of allele age estimations: Lactase persistence Many adults are lactose intolerant because their bodies cease production of the enzyme lactase after childhood. However, mutations in the promoter region of the lactase gene (LCT) result in the continued production of lactase throughout adulthood in certain African populations, a condition known as lactase persistence. A study conducted by Sarah Tishkoff and her team shows that the mutation for lactase persistence has been under positive selection since its recent appearance approximately 3,000 to 7,000 years ago. These dates are consistent with the rise of cattle domestication and pastoralist lifestyles in these regions, making the lactase persistence mutation a strong example of gene-culture co-evolution.
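A minimal sketch (Python) evaluating both estimators exactly as reconstructed above; the input values are illustrative, not figures from the cited studies:

import math

def kimura_ohta_age(p, N):
    """Expected age in generations of a neutral allele at frequency p,
    using t1 = -2p ln(p) / (1 - p) in units of 2N generations (as above)."""
    return (-2.0 * p / (1.0 - p)) * math.log(p) * 2 * N

def intra_allelic_age(x_t, y, c):
    """Generations since the mutation, from decay of allelic association:
    t = ln((x(t) - y) / (1 - y)) / ln(1 - c), with x(t), y, c as defined above."""
    return math.log((x_t - y) / (1.0 - y)) / math.log(1.0 - c)

# Illustrative numbers only:
print(kimura_ohta_age(p=0.02, N=10_000))                 # ~3,200 generations
print(intra_allelic_age(x_t=0.60, y=0.25, c=0.005))      # ~150 generations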
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**EIA-608** EIA-608: EIA-608, also known as "line 21 captions" and "CEA-608", was once the standard for closed captioning for NTSC TV broadcasts in the United States, Canada and Mexico. It was developed by the Electronic Industries Alliance and required by law to be implemented in most television receivers made in the United States. It specifies an "Extended Data Service", which is a means for including a VCR control service with an electronic program guide for NTSC transmissions, operating on the even line 21 field, similar to the Teletext-based VPS that operates on line 16 in PAL countries. EIA-608: EIA-608 captions are transmitted on either the odd or even fields of line 21, each byte carrying an odd parity bit, in the non-visible active video data area of NTSC broadcasts, and are also sometimes present in the picture user data in ATSC transmissions. The format uses a fixed bandwidth of 480 bit/s per line 21 field, for a maximum of 32 characters per line per caption (maximum four captions) in a 30-frame/s broadcast. The odd-field captions relate to the primary audio track, and the even-field captions relate to the SAP or secondary audio track, which is generally a second-language translation of the primary audio, such as a French or Spanish translation of an English-speaking TV show. EIA-608: Raw EIA-608 caption byte pairs are becoming less prevalent as digital television replaces analog. ATSC broadcasts instead use the EIA-708 caption protocol to encapsulate the EIA-608 caption pairs as well as to add a native EIA-708 stream. EIA-608 has had revisions, with the addition of extended character sets to fully support the representation of Spanish, French, German, and a cross-section of other Western European languages. EIA-608 was also extended to support two-byte characters for the Korean and Japanese markets. The full version of EIA-708 has support for more character sets and better caption positioning options; however, because of existing EIA-608 hardware and revisions to the format, there has been little or no real-world use of the format besides simple 608-to-708 inline conversions. Channels: EIA-608 defines four channels of caption information, so that a program could, for example, have captions in four different languages. There are two channels, called 1 and 2 by the standard, in each of the two fields of a frame. The channels are often presented to users numbered simply as CC1-2 for the odd field and CC3-4 for the even field. Due to bandwidth limitations on either field, CC1 and CC3 are the only ones used, meaning that there has been little use for the second channel. Early Spanish SAP captioned broadcasts first used the second channel, CC2, because the original caption decoders only read the first (odd) field, but later switched to using CC3 for bandwidth reasons. For the same bandwidth reasons, XDS was never used by Spanish-speaking stations. Channels: Within each channel, there are two streams of information which might be considered sub-channels: one carries "captions" and the other "text." The latter is not in common use due to the lack of hardware support and available bandwidth. Text is signaled by the use of text commands and can be used for a formatted URL string with a 16-bit checksum that designates a web site the captions relate to, or for a local station communication channel.
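As a concrete illustration of the byte-pair framing described above, here is a minimal Python sketch that sets the odd parity bit and packs two basic characters into one line-21 pair (it assumes the characters map directly to their near-ASCII code points and ignores the handful of remapped ones in the real table):

def with_odd_parity(value: int) -> int:
    """Set bit 7 of a 7-bit value so the resulting byte has odd parity,
    as required for each byte of a line-21 pair."""
    ones = bin(value & 0x7F).count("1")
    return (value & 0x7F) | (0x80 if ones % 2 == 0 else 0x00)

def basic_char_pair(c1: str, c2: str) -> tuple:
    """Pack two basic characters into one EIA-608 byte pair. Assumes a direct
    (near-)ASCII mapping; the few remapped code points are ignored here."""
    return with_odd_parity(ord(c1)), with_odd_parity(ord(c2))

# Two characters per field line: e.g. "Hi" -> ('0xc8', '0xe9')
print([hex(b) for b in basic_char_pair("H", "i")])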
Channels: This layering is based on the OSI Protocol Reference Model. DVD GOP User Data Insertion: The user data structure that follows an H.262 GOP header is as follows (the same would apply after an ISO/IEC 14496-2 GOP header): bslbf: bit string, left bit first; uimsbf: unsigned integer, most significant bit first. Caption blocks are inserted after the sequence and GOP headers, so each block covers one second of video, which would end up being one or two long lines or three to four short lines of text. This also means that if the caption_block_count is greater than 30, then the block contains both interleaved caption fields, and one could derive the framing rate from the caption_block_count. However, since the data is grouped together, the framing rate will almost always be 30/1.001, unlike the ATSC method, which inserts one byte pair for each field after the picture header, making framing rates of 24/1.001 possible for HD content; then, when a decoder does a 3:2 pulldown for NTSC output, the captions remain in sync. DVB Transport Insertion: The packetized structure that is inserted before the H.222 video packet is as follows for a frame of associated video: bslbf: bit string, left bit first; uimsbf: unsigned integer, most significant bit first. This structure was designed for any digital VBI data and was optimized to carry three or more 43-byte Teletext packets, e.g. a page header and two associated lines. For Teletext subtitles, the data_unit_id is set to 3. In this form, captions have to be separated into byte pairs spread over the frames in one second of video, rather than grouped into one block as with the DVD structure. The same is true for Teletext subtitles with more than one line of text. SDI/MXF SMPTE 291M Insertion: The packetized structure that is inserted before the SMPTE 259M active video frame or MXF essence video packet is coded as follows for a frame of associated video: bslbf: bit string, left bit first; uimsbf: unsigned integer, most significant bit first. This structure was designed for any digital audio or metadata that is to be synchronized with a video frame. SDI transports every eight bits in a 10-bit aligned packet, unlike MXF, which is byte aligned and in which the ancillary flag bytes are replaced by a 128-bit header. Extended Data Service: The EIA-608 data stream format includes an Extended Data Service (XDS), a variety of information about the transmission. It is all optional: program name, offensiveness rating (violence, sex, etc.), program category (drama, game show, etc.). Characters: There are three sets of characters that the EIA-608 stream can direct the receiver to display: basic characters, special characters, and extended characters. A single two-byte EIA-608 command (represented by a single VBI line) can specify two basic characters, one special character, or one extended character. Extended characters are a later addition to the standard and their decoding is optional. EIA-608 provides controls for the color of the foreground and background of the text, underlining, blinking, and italics. The default color scheme is white characters on a black background, all opaque. The Transparent Space special character implies a transparent background even in the absence of any background control commands; as the foreground of this character is a blank space, it really means a gap in the closed caption text. In these examples P = odd parity bit. Non-Caption Data: This is used either to pad out the field line when no captions are sent or for the eXtended Data Service.
Characters:

              +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
null pad      |P|0|0|0|0|0|0|0| |P|0|0|0|0|0|0|0|   XDS metadata |P|0|0|0| CLASS | |P|0|0|0| TYPE |
              +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
               15             8  7             0                  15             8  7             0

Basic North American character set: A command with bits 13 or 14 on directs the receiver to display two basic characters at the current cursor position for the current mode (closed caption or text). Each character is a code point (identifying the character to display), as follows. Characters:

                      +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
modified 7-bit ASCII  |P| CHARACTER1 | |P| CHARACTER2 |
                      +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
                       15             8  7             0

The code is almost identical to ASCII; the exceptions are shown in red. In the table, SB represents a solid block. The apostrophe (code 27), which may originally have been intended to be a neutral apostrophe as in ASCII, is recommended to be rendered as a right single quotation mark (Unicode U+2019). For a neutral single quote/apostrophe, the plain single quote from the extended character set should be used. Special North American character set: The only typical use of this set in North America is the eighth-note character, used to denote changes from spoken dialogue to singing or music-only scenes. It is an acceptable broadcast engineering practice, when translating EIA-608 to Teletext for PAL-compatible countries, to substitute a number sign for this character because of its similarity to a sharp. A command to display a special character has a first byte of 0x11 or 0x19 (depending upon channel). The second byte is a code point in the range 0x30–0x3F, as follows. Characters:

   +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
   |P|0|0|1|C|0|0|1| |P|0|1|1| CHAR |
   +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
    15             8  7             0

   P = odd parity ; C = second channel toggle

TM is short for unregistered trademark and should be represented in superscript (™). TS in the table above represents a "transparent space", or non-breaking space. Finally, the eighth note (♪) is used to denote singing or background music in captions. Characters: Extended Western European character set:

   +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
   |P|0|0|1|C|0|1|S| |P|0|1|CHARACTER|
   +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
    15             8  7             0

   P = odd parity ; C = second channel toggle ; S = char set toggle

These extended character sets are rarely used, since most European countries use the BBC Ceefax-based Teletext system. The Ceefax system is more prone to character errors due to the greater number of data bits (337 versus 16) encoded per VBI field; these errors occur on noise-prone analog transmissions or connections. A command to display an extended Spanish/French or miscellaneous character has a first byte of 0x12 or 0x1A (depending upon channel). Characters: A command to display an extended Portuguese/German/Danish character has a first byte of 0x13 or 0x1B (depending upon channel). The second byte is a code point in the range 0x20-0x3F, as follows. SM is short for service mark and should be represented in superscript. The single quote mark is a curly left quote, and the double quote marks are curly left and right quotes. The plus signs refer to the top-left, top-right, lower-left and lower-right corners for box drawing. Non-Western Norpak Character Sets: When used, all standard and extended character sets are unused in favor of the following predefined sets; care must be taken not to emulate any control commands.
This is an extension submitted to the CEC by Norpak, who made a similar extension to the Teletext format for the Chinese market. The main use has been to provide double-byte code point captioning to the Japanese, Taiwanese and South Korean markets. A command to switch character sets has a first byte of 0x17 or 0x1F (depending upon channel). The second byte is a character set reference in the range 0x24-0x2A, as follows:

   +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
   |P|0|0|1|C|1|1|1| |P|0|1|0|CHARSET|
   +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
    15             8  7             0

Control commands: Bits 15 and 7 are always odd parity bits. Bit 11 is always the channel bit. Control commands: A preamble address code, with bits 15, 11 and 7 masked as already defined above, can be interpreted from the following table.

Row Preamble Standard Address and Style (default row 11 = 0, top rows 1-4 = 1-2, bottom rows 12-13 = 3)

                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
   preamble style    |P|0|0|1|C|0|ROW| |P|1|N|0|STYLE|U|   preamble address  |P|0|0|1|C|0|ROW| |P|1|N|1|CURSR|U|
                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
                      15             8  7             0                       15             8  7             0

Row Preamble Extended Address and Style (bottom rows 14-15 = 0, middle rows 5-10 = 1-3)

                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
   preamble style    |P|0|0|1|C|1|ROW| |P|1|N|0|STYLE|U|   preamble address  |P|0|0|1|C|1|ROW| |P|1|N|1|CURSR|U|
                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                     +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
                      15             8  7             0                       15             8  7             0

   P = odd parity ; C = second channel toggle
   U = underline toggle ; N = next row down toggle
   (if style or cursor position is not set, the defaults are regular white text on a black background at cursor = 0; cursor = multiple of 4)
   text style enumerations: {white=0,green,blue,cyan,red,yellow,magenta,italic white}

The row bits specify which of the fifteen screen rows should contain the caption text: row 11 (0000), 1 (0010), 2 (0011), 3, 4, 12, 13, 14, 15, 5, 6, 7, 8, 9, or 10 (1111). Control commands: The attribute bits allow 16 possibilities, which are: white (0000), green, blue, cyan, red, yellow, magenta, italics, indent 0, indent 4, indent 8, indent 12, indent 16, indent 20, indent 24, indent 28 (1111). For a midrow code these are as follows: bits 14, 13, 10, 9, 6 and 4 are always 0; bits 12, 8 and 5 are always 1. Bits 3, 2 and 1 form the color attribute 0001X10X (see the listing of attributes). Bit 0 indicates underline. Control commands: Mid Row Style Change (the style remains in effect until either the next change or the end of the row, signaled by a control or preamble code)

             +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                 +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
   bg color  |P|0|0|1|C|0|0|0| |P|0|1|0|COLOR|T|   midrow style  |P|0|0|1|C|0|0|1| |P|0|1|0|STYLE|U|
             +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                 +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
              15             8  7             0                   15             8  7             0

             +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                 +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
   no bg     |P|0|0|1|C|1|1|1| |P|0|1|0|1|1|0|1|   black text    |P|0|0|1|C|1|1|1| |P|0|1|0|1|1|1|U|
             +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+                 +-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+
              15             8  7             0                   15             8  7             0

   P = odd parity ; C = second channel toggle
   T = partially transparent ; U = underline toggle
   bg color enumerations: {white=0,green,blue,cyan,red,yellow,magenta,black}

For other control codes these are as follows: bits 14, 13, 9, 6 and 4 are always 0; bits 12, 10 and 5 are always 1. Bit 8 chooses between line 21 and line 284. Bits 3, 2, 1 and 0 identify the particular action.
Control commands: The command bits allow 16 possibilities, which are: resume caption loading (0000), backspace (0001), delete to end of row (0100), roll-up captions 2 rows, roll-up captions 3 rows, roll-up captions 4 rows, flash on (0.25 seconds once per second), resume direct captioning, text restart, resume text display, erase displayed memory, carriage return, erase nondisplayed memory, and end of caption (1111). For tabs these are as follows: bits 14, 13, 6, 4, 3, 2 are always 0; bits 12, 10, 9, 8, 5 are always 1. Bits 1 and 0 determine the number of tab offsets. With the parity bit already ignored, the hex values of the two-byte commands are as follows:
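To tie the byte-pair layouts above together, here is a minimal, illustrative classifier for parity-stripped pairs (Python). The first-byte values follow the assignments quoted in this article; the 0x14 control prefix is derived from the stated always-one bits 12, 10 and 5, and the example second byte 0x2F (end of caption, command 1111 with bit 5 set) is likewise derived, not quoted.

def classify_pair(b1: int, b2: int) -> str:
    """Rough classification of a parity-stripped EIA-608 byte pair,
    following the first-byte assignments given in the text above."""
    base = b1 & ~0x08                  # fold channel 2 (bit 11 of the pair) onto channel 1
    if b1 == 0x00 and b2 == 0x00:
        return "null pad"
    if 0x20 <= b1 <= 0x7F:
        return "two basic characters"
    if base == 0x11 and 0x30 <= b2 <= 0x3F:
        return "special North American character"
    if base == 0x12 and 0x20 <= b2 <= 0x3F:
        return "extended Spanish/French/miscellaneous character"
    if base == 0x13 and 0x20 <= b2 <= 0x3F:
        return "extended Portuguese/German/Danish character"
    if base == 0x17 and 0x24 <= b2 <= 0x2A:
        return "Norpak character-set switch"
    if base == 0x14 and 0x20 <= b2 <= 0x2F:
        return "control command"
    return "other (preamble/midrow/XDS/unhandled)"

print(classify_pair(0x14, 0x2F))   # control command (end of caption, channel 1)
print(classify_pair(0x19, 0x37))   # special North American character, channel 2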
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rapid update cycle** Rapid update cycle: The Rapid Update Cycle (RUC) was an American atmospheric prediction system that consisted primarily of a numerical forecast model and an analysis system to initialize the model. Rapid update cycle: The RUC was designed to provide accurate short-range (0- to 12-hr, expanded to 18-hr in 2010) numerical forecast guidance for weather-sensitive users, such as those in the aviation community. Significant weather forecasting problems that occur in the 0- to 12-hr range include severe weather in all seasons (for example, tornadoes, thunderstorms, snow, and ice storms) and hazards to aviation (for example, clear-air turbulence, icing, and downbursts). The RUC ran at the highest frequency of any forecast model at the National Centers for Environmental Prediction (NCEP), assimilating recent observations to provide very frequent updates of current conditions and short-range forecasts. This update frequency was only once an hour (the standard interval for ASOS observation reporting), and with computational limitations and the time required to assimilate all of the data, there was approximately a one-hour delay in producing the forecasts. Because of this, it was common practice to use a one-hour forecast from the RUC as a current analysis, as the one-hour forecast would come out only a few minutes before the time it was forecasting for. There is also little room for error in a one-hour forecast, meaning that the RUC's one-hour forecast would not usually vary greatly from the actual state of the atmosphere at that particular point in time. Rapid update cycle: The RUC was decommissioned on May 1, 2012; it was replaced by the Rapid Refresh (RR or RAP) model, based on the WRF. Like the RUC, the Rapid Refresh model also runs hourly out to 18 hours on a 13 km (8.1 mi) grid spacing, but it covers a wider area. An experimental High Resolution Rapid Refresh (HRRR), run by the Earth System Research Laboratory (ESRL), offers 3 km (1.9 mi) resolution at 15-minute intervals. A backup version of the RUC continued to run until that too was stopped on May 15, 2013, formally bringing the model to an end.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bidirectional cell** Bidirectional cell: Bidirectional cells are a subset of neurons found in mammalian brains in region MT. They are characterised by a peak response to visual motion in two opposing directions. They were discovered in 1984 by Albright et al.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Coal mine bump** Coal mine bump: A coal mine bump (a bump, a mine bump, a mountain bump, or a rock burst) is a seismic jolt occurring within an underground mine due to the explosive collapse of one or more support pillars. In room and pillar mining, tunnels are advanced in a rectangular pattern resembling city streets (tunnels), leaving behind blocks (pillars) of coal. To a miner, a partially completed tunnel resembles a room dug into the coal seam. As mining proceeds, the weight of the rock overburden previously supported by coal mined from rooms is redistributed to the pillars. If that weight exceeds the strength of a pillar, the pillar can fail by crushing or exploding. An explosive failure is called a "bump." In the eastern United States' coalfields, bumps are more likely when the overburden is at least 500 feet (150 m) thick; where a strong overlying stratum, such as sandstone, occurs near the coalbed; and where the floor is strong and inflexible. In the United States, the number of deaths from bumps has dropped off dramatically since the early 1990s, but fatalities are more common in the West, where mines often run deeper. Bumps are three times more likely in room-and-pillar mines, and are even more common in mines that do retreat mining, in which the pillars are removed as the miners retreat towards the mine entrance, with the intent of allowing an orderly collapse of the mine. Incidents: The Springhill Mining Disaster was a bump that occurred in Springhill, Nova Scotia, Canada on October 23, 1958. Incidents: Debate over the cause of the August 6, 2007, Crandall Canyon Mine disaster, which took place 1,800 feet beneath the surface, raised public awareness of coal mine bumps. Seismologists at the University of Utah and the University of California, Berkeley concluded that an associated 3.9-magnitude temblor was likely caused not by an earthquake but by the collapse itself. The mine's owner, Robert E. Murray, adamantly disagreed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Egg pie** Egg pie: Egg pie is a sweet Filipino pie with an egg custard filling and a characteristic toasty brown top made from egg whites. It is made with flour, sugar, milk, butter, and eggs. Calamansi juice or zest may also be added. It is a type of custard pie. Egg pies are commonly sold in bakeries in the Philippines.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded