**Star Wars trading card**

Star Wars trading card: A Star Wars trading card is a non-sport card themed after a Star Wars movie or television show. However, the term is often used colloquially to cover stickers, wrappers, and caps (pogs) produced along the same themes. Usually produced as promotional or collectible memorabilia relating to Star Wars, the cards can depict anything from screen-still imagery to original art. In addition, various companies have issued promotional Star Wars trading cards that include references to or information about the corresponding company.

An avid collecting and trading community for these cards and sets exists worldwide. New cards released commercially are available through most major retailers and wholesalers; however, some cards are issued as exclusives and are only available through a specific source. A thriving secondary market also exists on eBay in various categories. Star Wars trading cards are distinct from the various Star Wars collectible card game cards. A few of the most valuable sets in the Star Wars trading card market are the 1977 Star Wars Series I, the Star Wars Galaxy series, Star Wars MasterWorks, and the Star Wars 3D Widevision sets.

History: Star Wars trading cards were first produced and released by Topps in 1977 to coincide with the first Star Wars movie, and Topps has remained the official producer of Star Wars trading cards in the United States ever since. Various manufacturers handle the property around the rest of the world.

In 1977, a photograph appeared on a Topps Star Wars trading card in which C-3PO appeared to have a prominent phallus. In 2007, the official Star Wars website hypothesized that this was caused by a part of the suit that had fallen into place just as the photograph was taken. However, in 2019 Anthony Daniels, the actor who played C-3PO, clarified that the costume had become compromised during C-3PO's oil bath in the film; the warm liquid had caused the costume to separate, leading to "an over-exposure of plastic in that region". Topps editor Gary Gerani, who wrote and photo-selected all the Star Wars card sets and pencil-designed the distinctive, often-reused 1977 front design, says he has been asked about this particular card more than any other.

In 2015, Topps created the Star Wars Card Trader app for iPhone, iPad and Android. This app allows users to open packs, collect digital cards, and trade them with other users right in the app. In 2019, Topps began to commission original art for the app. It has produced original art card sets by Derek Laufman, Darrin Pepe, Kevin-John, Robert Jimenez, Uzuri Art and others. Harry N. Abrams published three books collecting the art of Star Wars trading cards and stickers; the first volume, featuring the original movie's cards, was published in 2015, followed by The Empire Strikes Back and Return of the Jedi volumes in 2016.

Card series: Topps, Inc. All series are shown, but not necessarily all the cards in each series. For example, not all promos and mail-away cards are listed.

Vintage era
- Star Wars Series 1 (1977) - Blue border with white stars. 66 cards and 11 stickers (space with green, yellow or red interior border) (Cards 1-66, Stickers 1-11).
- Star Wars Series 1 (1977) (Scanlens-Australia) - Blue border with white stars. 72-card set.
- Star Wars Series 1 (1977) (Topps UK/Ireland) - Blue border with white stars. 66-card set. No stickers.
- Star Wars Series 1 (1977) (Argentina) - Blue border with white stars. 66 cards and 11 stickers. 16 puzzle pieces.
- Star Wars Series 2 (1977) - Red border. 66 cards and 11 stickers (black or space with red interior border) (Cards 67-132, Stickers 12-22).
- Star Wars Series 2 (1977) (Topps UK/Ireland) - Red border. 66 cards (Cards 1A-66A). No stickers.
- Star Wars Series 3 (1977) - Yellow border. 66 cards and 11 stickers (black film cell border) (Cards 133-198, Stickers 23-33).
- Star Wars Series 4 (1977) - Green border. 66 cards and 11 stickers (red film cell border) (Cards 199-264, Stickers 34-44).
- Star Wars Series 5 (1977) - Orange border. 66 cards and 11 stickers (orange film cell border) (Cards 265-330, Stickers 45-55).
- Star Wars Sugar-Free Gum Wrappers (1977/78) - 56 wrappers.
- Star Wars The Empire Strikes Back, Series 1 (1980) - Red border. 132 cards and 33 stickers.
- Star Wars The Empire Strikes Back, Series 2 (1980) - Blue border. 132 cards and 33 stickers.
- Star Wars The Empire Strikes Back, Series 3 (1980) - Yellow border. 88 cards and 22 stickers.
- Star Wars The Empire Strikes Back Giant Photo Cards (1980) - 30 5x7 cards.
- Star Wars Return of the Jedi, Series 1 (1983) - Red border. 132 cards and 33 stickers.
- Star Wars Return of the Jedi, Series 2 (1983) - Blue border. 88 cards and 22 stickers.

O-Pee-Chee
- Star Wars, Series 1 (1977) - Blue border with white stars. 66 cards. 11 stickers.
- Star Wars, Series 2 (1977) - Red border. Card Numbers: 67-132 - Sticker Numbers: 12-22.
- Star Wars, Series 3 (1977) - Orange border. Card Numbers: 133-264 - Sticker Numbers: 34-55.
- Star Wars, Return of the Jedi (1983) - Red border. Card Numbers: 1-132.

Modern era
- Star Wars Galaxy Series I (1993) - 140 base and 6 etched foil cards, plus separate 140 silver-stamped base, 6 refractor foil, and one holographic card from the Millennium Falcon Factory set.
- Star Wars Galaxy Series II (1994) - 135 base and 6 etched foil cards, plus separate 135 silver-stamped base, 6 refractor foil, and one holographic card from the Factory Tin.
- Star Wars Galaxy Series III (1995) - 90 base, 90 first-day, 12 Lucas art, 6 etched foil, and 6 clearzone cards.
- Star Wars Widevision (1995) - 120 base and 10 finest cards.
- Star Wars Widevision Metal (1995) - 6 steel cards.
- Star Wars The Empire Strikes Back Widevision (1995) - 144 base, 10 chromium, and 6 poster cards.
- Star Wars The Empire Strikes Back Widevision Metal (1995) - 6 steel cards.
- Star Wars Return of the Jedi Widevision (1996) - 144 base, 10 finest, 6 poster cards, plus 1 3D case-topper card.
- Star Wars Caps (1995) - 70 base, 10 galaxy, and 24 slammer caps.
- Star Wars Master Visions (1995) - 36 oversized cards.
- Star Wars Finest (1996) - 90 base, 4 matrix, 6 embossed, and 90 refractor cards.
- Star Wars 3Di Widevision (1996) - 63 base and 1 motion card, all widevision.
- Star Wars Shadows of the Empire (1996) - 80 base, 6 foil, and 4 embossed cards.
- Star Wars Vehicles (1997) - 72 base, 4 cut-away, and 2 3D cards.
- Star Wars Trilogy: The Complete Story, Retail (1997) - 72 base and 6 laser cards, all widevision.
- Star Wars Trilogy Special Edition, Hobby (1997) - 72 base, 6 laser, 2 hologram, and 1 3Di cards, all widevision.
- Star Wars Chrome Archives (1999) - 90 base, 9 chrome, and 4 clear cards.

Prequel era
- Star Wars Episode I: The Phantom Menace Widevision, Series I (1999) - 80 base, 40 expansion, 8 chrome, 10 foil cards, and 16 stickers. Also 5 oversized foil cards exist from tin sets.
- Star Wars Episode I: The Phantom Menace Widevision, Series II (1999) - 80 base, 6 embossed (retail), 6 embossed (hobby), 4 chromium (retail), 4 chromium (hobby), and 3 box-topper cards.
- Star Wars Episode I 3D Widevision (2000) - 46 base and 2 multi-motion cards, all widevision. 1 promotional card.
- Star Wars Evolution (2001) - 93 base, 12 A, 8 B, and autograph cards. 4 promotional cards.
- Star Wars Episode II: Attack of the Clones (2002) - 100 base, 10 silver foil, 8 prismatic, and 5 panoramic cards. 5 promotional cards.
- Star Wars Episode II: Attack of the Clones Widevision (2002) - 80 base and 26 autograph cards. 7 promotional cards.
- Star Wars Clone Wars (2004) - 90 base, 10 battle motion, and 14 artists' sketch cards, and 10 stickers. 3 promotional cards.
- Star Wars Heritage (2004) - 120 base, 12 etched foil, autograph, and sketch cards, and 30 stickers. 6 promotional cards.
- Star Wars Revenge of the Sith (2005) - 90 base, 6 etched foil (hobby/retail), 4 morphing (hobby/retail), 3 holograms, 10 tattoo, 10 embossed foil, 10 stickers, and 1 morphing case-topper. 5 promotional cards.
- Star Wars Revenge of the Sith Widevision (2005) - 80 base, 10 chrome (retail), 10 chrome (hobby), 10 flix-pix, and 5 autograph cards. 2 promotional cards.

Post-Saga era
- Star Wars Evolution Update (2006) - 90 base, 20 A, 15 B, sequentially numbered C (100 each of 1C and 2C), 1D redemption card, 10 Galaxy Crystal (retail), 6 foil, autograph cards. Two promotional cards.
- Star Wars 30th Anniversary (August 2007) - 120 base, 120 red/120 blue/120 gold parallel (gold parallels numbered to 30), 27 triptych puzzle, autograph, sketch, 330 different foil-stamped box loaders, 9 animation cel, 9 magnet cards, and 3 retail bonus cards. Six promotional cards.
- Star Wars Clone Wars (July 2008) - 90 base, 90 gold-stamped foil parallel, 10 animation cel, 10 foil, 5 motion, 5 Target red animation cel, 5 Wal-Mart blue animation cel cards. Two promotional cards.
- Star Wars Clone Wars Trading Card Stickers (October 2008) - 90 base, 10 foil stickers, 10 die-cut pop-ups, 10 temporary tattoos, ~10 magnet cards.
- Star Wars Holiday Special (2008) - 11 cards.
- Star Wars Galaxy Series IV (2009) - 120 base. Only 11,000 boxes produced.
- Star Wars Clone Wars Widevision, Season I (November 2009) - 80 base, 8 Series 2 preview cards, 80 silver foil-stamped parallel cards (500 sequentially numbered of each card), 80 gold foil-stamped 1/1 parallel cards, 20 foil character cards, 10 animation clear cel cards, 5 flex-pix cards, sketch cards, animator sketch cards, 13 autograph cards.
- Star Wars Galaxy Series V (2010) - 120 base.
- Star Wars The Empire Strikes Back 3D Widevision (2010) - 48 base cards. One promotional card. Sketch cards. 8 autograph cards.
- Star Wars Clone Wars Rise of the Bounty Hunters, Season II (2010)
- Star Wars Clone Wars Dog Tags Trading Cards (November 2010) - 24 cards and matching dog tags.
- Star Wars Galaxy Series VI (2011) - 120 base. (One of the most scarce and valuable Star Wars trading card sets of all time.)
- Star Wars Dog Tags Trading Cards (August 2011) - 24 cards and matching dog tags.
- Star Wars Galaxy Series VII (2012) - 110 base.
- Star Wars Galactic Files (2012)
- Star Wars Jedi Legacy (2013)
- Star Wars Galactic Files, Series II (2013)
- Star Wars Illustrated (2013)
- Star Wars Return of the Jedi 3D Widevision (2014) - 44 base cards. Limited to 2000 sets offered directly by Topps.
- Star Wars Revenge of the Sith 3D Widevision (2015) - 44 base cards. Limited to 2000 sets offered directly by Topps.
- Star Wars Masterwork (2015)
- Star Wars Attack of the Clones 3D Widevision (2016) - 44 base cards. Limited to 2500 sets offered directly by Topps.
- Star Wars Masterwork (2016)
- Star Wars The Force Awakens 3D Widevision (2017) - 44 base cards. Limited to 2000 sets offered directly by Topps.
- Star Wars Masterwork (2017)
- Star Wars Chrome Perspectives: Jedi vs. Sith - 100 base cards and many others.
- Star Wars Galaxy Series 2018 (2018) - 100 base.
- Star Wars Masterwork (2018)
- Star Wars Masterwork (2019)

Other manufacturers
- Star Wars Wonder Bread (1977) - 16-card set.
- Star Wars (Panini European, 1977) - 256 stickers.
- Star Wars (Tokyo Queen, 1977) - 32-card set.
- Star Wars (ADPAC Stickers for General Mills Breakfast Cereals, 1977) - 16-card set.
- Star Wars (General Mills, 1978) - 16-card set.
- Star Wars Empire Strikes Back (Burger King & Coca-Cola, 1981) - 36 cards.
- Star Wars Return of the Jedi (Panini, 1983) - 180 2 1/8" x 3" album stickers.
- Star Wars (Metallic Images, 1994) - 20 metal cards.
- Star Wars Art of Ralph McQuarrie (Metallic Images, 1996) - 20 metal cards.
- Star Wars Bounty Hunters (Metallic Images, 1998) - 6 metal cards.
- Star Wars Dark Empire (Metallic Images, 1995-96) - two series of 6 metal cards each.
- Star Wars Jedi Knights (Metallic Images, 1998) - 6 metal cards.
- Star Wars Shadows of the Empire (Metallic Images, 1997) - 6 metal cards.
- Star Wars The Empire Strikes Back (Metallic Images, 1995) - 20 metal cards.
- Star Wars Return of the Jedi (Metallic Images, 1995) - 20 metal cards.
- Star Wars (Panini European, 1997) - 216 stickers.
- Star Wars (Panini American, 1997) - 66 stickers.
- Star Wars, Special Edition, Trilogy (Merlin, 1997) - 125 cards.
**Liquidity preference**

Liquidity preference: In macroeconomic theory, liquidity preference is the demand for money, considered as liquidity. The concept was first developed by John Maynard Keynes in his book The General Theory of Employment, Interest and Money (1936) to explain the determination of the interest rate by the supply and demand for money. The demand for money as an asset was theorized to depend on the interest foregone by not holding bonds (here, the term "bonds" can be understood to also represent stocks and other less liquid assets in general, as well as government bonds). Interest rates, he argues, cannot be a reward for saving as such because, if a person hoards his savings in cash, keeping it under his mattress say, he will receive no interest, although he has nevertheless refrained from consuming all his current income. Instead of a reward for saving, interest, in the Keynesian analysis, is a reward for parting with liquidity. According to Keynes, money is the most liquid asset. Liquidity is an attribute of an asset: the more quickly an asset can be converted into money, the more liquid it is said to be.

According to Keynes, demand for liquidity is determined by three motives:
- The transactions motive: people prefer to have liquidity to assure basic transactions, for their income is not constantly available. The amount of liquidity demanded is determined by the level of income: the higher the income, the more money demanded for carrying out increased spending.
- The precautionary motive: people prefer to have liquidity in case of unexpected problems that require unusual outlays. The amount of money demanded for this purpose increases as income increases.
- The speculative motive: people retain liquidity to speculate that bond prices will fall. When the interest rate decreases, people demand more money to hold until the interest rate increases, which would drive down the price of an existing bond to keep its yield in line with the interest rate. Thus, the lower the interest rate, the more money demanded (and vice versa).

The liquidity-preference relation can be represented graphically as a schedule of the money demanded at each different interest rate. The supply of money together with the liquidity-preference curve in theory interact to determine the interest rate at which the quantity of money demanded equals the quantity of money supplied (see the IS/LM model; a standard textbook formalization appears at the end of this section).

Alternatives: A major rival to the liquidity preference theory of interest is the time preference theory, to which liquidity preference was actually a response. Because liquidity is effectively the ease with which assets can be converted into currency, liquidity can be considered a more complex term for the amount of time committed in order to convert an asset. Thus, in some ways, it is extremely similar to time preference.

Criticisms: In Man, Economy, and State (1962), Murray Rothbard argues that the liquidity preference theory of interest suffers from a fallacy of mutual determination. Keynes alleges that the rate of interest is determined by liquidity preference. In practice, however, Keynes treats the rate of interest as determining liquidity preference.
Rothbard states, "The Keynesians therefore treat the rate of interest, not as they believe they do—as determined by liquidity preference—but rather as some sort of mysterious and unexplained force imposing itself on the other elements of the economic system." Criticism also emanates from post-Keynesian economists, such as the circuitist Alain Parguez, professor of economics at the University of Besançon, who "reject[s] the keynesian liquidity preference theory ... but only because it lacks sensible empirical foundations in a true monetary economy".
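For reference, the liquidity-preference schedule described above is commonly summarized in textbook notation (the symbols below are a standard formalization added here for illustration, not Keynes's own): money demand combines the income-driven transactions and precautionary motives with the interest-driven speculative motive, and the interest rate adjusts so that a fixed money supply is willingly held.

```latex
% Money demand: L_1 rises with income Y (transactions, precautionary);
% L_2 falls with the interest rate r (speculative motive).
M^{d} = L(Y, r) = L_{1}(Y) + L_{2}(r), \qquad
\frac{\partial L}{\partial Y} > 0, \qquad
\frac{\partial L}{\partial r} < 0
% Equilibrium: a given money supply M^s pins down r for a given income Y.
M^{s} = L(Y, r^{*})
```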
**Pole building framing**

Pole building framing: Pole framing or post-frame construction (pole building framing, pole building, pole barn) is a simplified building technique that is an alternative to the labor-intensive traditional timber framing technique. It uses large poles or posts buried in the ground or set on a foundation to provide the vertical structural support, along with girts to provide horizontal support. The method was developed and matured during the 1930s as agricultural practices changed, including the shift toward engine-powered farm equipment and the demand for cheaper, larger barns and storage areas.

History: Pole building design was pioneered in the 1930s in the United States, originally using utility poles for horse barns and agricultural buildings. The depressed value of agricultural products in the 1920s and 1930s and the emergence of large, corporate farming in the 1930s created a demand for larger, cheaper agricultural buildings. As the practice took hold, purpose-made materials such as pole barn nails were developed for this type of construction in place of repurposed utility-pole stock, making the process more affordable and reliable. Today, almost any low-rise structure can be quickly built using the post-frame construction method.

Pole barn construction was a quick and economical method of adding outbuildings on a farm as agriculture shifted to equipment-dependent and capital-intensive practices—necessitating shelter for tractors, harvesters, wagons and the like in much greater quantities and sizes. Around North America, many pole-built structures are still readily seen in rural and industrial areas.

Construction: Poles, from which these buildings get their name, are naturally shaped or round wooden timbers 4 to 12 inches (102 to 305 mm) in diameter. The structural frame of a pole building is made of tree trunks, utility poles, engineered lumber or chemically pressure-treated squared timbers, which may be buried in the ground or anchored to a concrete slab. Generally the posts are evenly spaced 8 to 12 feet (2.44 to 3.66 m) apart, except where doors must be accommodated. Buried posts have the benefit of providing lateral stability, so no braces are needed. Buried posts may be driven into the ground or set in holes that are then filled with soil, crushed stone, or concrete.

Pole buildings do not require walls: they may be open shelters, such as for farm animals or equipment, or for use as picnic shelters.

Enclosed pole buildings have exterior curtain walls formed by girts fastened to the exterior of the posts at intervals of about 2 feet (0.61 m) on center, which carry the siding and any interior load. The walls may be designed as shear walls to provide structural stability. Other girt systems include framing in between the posts rather than on the outer side of the posts. Siding materials for a pole building are most commonly rolled-rib 29-gauge enameled steel cut to length in 32-or-36-inch (813 or 914 mm) widths, attached using color-matched screws with rubber washers to seal the holes. However, any standard siding can be used, including T1-11, vinyl, lap siding, cedar and even brick. Using sidings other than metal may require first installing sheathing, such as plywood, oriented strand board or boards.

On two walls, usually the long walls, the dimensional-lumber girts at the top of the walls are doubled, one on the inside and one on the outside of the posts, and usually through-bolted with large carriage bolts to support the roof load.
The roof structure is frequently a truss roof supporting purlins or laths, or is built using common rafters. Wide buildings with common rafters need interior rows of posts. Sometimes rafters may be attached directly to the poles. The roof pitch of pole buildings is usually low, and the roof form is usually gable or lean-to. Metal roofing is commonly used as the roofing and siding material on pole buildings.

The floor may be soil, gravel, a concrete slab, or framed of wood.

Modern developments: In modern developments, the pole barns of the 1930s have become pole buildings for use as housing, commercial space, churches, picnic shelters or storage buildings. In the process, more often than not, the poles have become posts of squared-off, pressure-treated timbers. These structures have the potential to replicate the functionality of other buildings, but they may be more affordable and require less time to construct. The most common use for pole buildings remains storage, as it was on the farm, but today they may store automobiles, boats, and RVs along with many other household items that would normally be found in a residential garage, or serve commercially as the surroundings for a light industry or small corporate offices with attached shops.
**Lunar Receiving Laboratory**

Lunar Receiving Laboratory: The Lunar Receiving Laboratory (LRL) was a facility at NASA's Lyndon B. Johnson Space Center (Building 37) that was constructed to quarantine astronauts and material brought back from the Moon during the Apollo program, to reduce the risk of back-contamination. After recovery at sea, the crews of Apollo 11, Apollo 12, and Apollo 14 walked from their helicopter to the Mobile Quarantine Facility on the deck of an aircraft carrier and were brought to the LRL for quarantine. Samples of rock and regolith that the astronauts collected and brought back were flown directly to the LRL and initially analyzed in glovebox vacuum chambers.

The quarantine requirement was dropped for Apollo 15 and later missions. The LRL was used for study, distribution, and safe storage of the lunar samples. Between 1969 and 1972, six Apollo space flight missions brought back 382 kilograms (842 pounds) of lunar rocks, core samples, pebbles, sand, and dust from the lunar surface—in all, 2,200 samples from six exploration sites. Other lunar samples were returned to Earth by three automated Soviet spacecraft, Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976, which returned samples totaling 300 grams (about 3/4 pound).

In 1976, some of the samples were moved to Brooks Air Force Base in San Antonio, Texas, for second-site storage. In 1979, a Lunar Sample Laboratory Facility was built to serve as the chief repository for the Apollo samples: permanent storage in a physically secure and non-contaminating environment. The facility includes vaults for the samples and records, and laboratories for sample preparation and study. The Lunar Receiving Laboratory building was later occupied by NASA's Life Sciences division, contained biomedical and environment labs, and was used for experiments involving human adaptation to microgravity. In September 2019, NASA announced that the Lunar Receiving Laboratory had not been used for two years and would be demolished.
**Even–Paz protocol**

Even–Paz protocol: The Even–Paz algorithm is a computationally efficient algorithm for fair cake-cutting. It involves a heterogeneous and divisible resource, such as a birthday cake, and n partners with different preferences over different parts of the cake. It allows the n people to achieve a proportional division.

History: The first published algorithm for proportional division of a cake was the last diminisher algorithm, published in 1948. Its run-time complexity was O(n^2). In 1984, Shimon Even and Azaria Paz published their improved algorithm, whose run-time complexity is only O(n log n).

Description: The algorithm uses a divide-and-conquer strategy, which makes it possible to achieve a proportional division in time O(n log n). Each partner is asked to draw a line dividing the cake into a left and a right part such that he believes the ratio of their values is ⌊n/2⌋:⌈n/2⌉. The cuts are required to be non-intersecting; a simple way to guarantee this is to allow only horizontal lines or only vertical lines. The algorithm sorts the n lines in increasing order and cuts the cake at the ⌊n/2⌋-th of the sorted lines. E.g., if there are 5 partners who draw lines at x=1, x=3, x=5, x=8 and x=9, then the algorithm cuts the cake vertically at x=3 (the ⌊5/2⌋ = 2nd line).

The algorithm assigns to each of the two parts the partners whose lines fall in that part: the partners who drew the first ⌊n/2⌋ lines are assigned to the left part, the others to the right part. E.g., the partners who drew lines at x=1 and x=3 are assigned to the left part, and the other three partners to the right part. Each part is then divided recursively among the partners assigned to it. (A runnable sketch of this recursion appears at the end of this section.) It is possible to prove by induction that every partner playing by the rules is guaranteed a piece with a value of at least 1/n, regardless of what the other partners do.

Thanks to the divide-and-conquer strategy, the number of iterations is only O(log n), in contrast to O(n) in the last diminisher procedure. In each iteration, each partner is required to make a single mark. Hence, the total number of marks required is O(n log n).

Optimality: Several years after the publication of the Even–Paz algorithm, it was proved that every deterministic or randomized proportional division procedure assigning each person a contiguous piece must use Ω(n log n) actions. Moreover, every deterministic proportional division procedure must use Ω(n log n) actions, even if the procedure is allowed to assign to each partner a non-contiguous piece, and even if the procedure is allowed to only guarantee approximate fairness. These hardness results imply that the Even–Paz algorithm is the fastest possible algorithm for achieving full proportionality with contiguous pieces, and it is the fastest possible deterministic algorithm for achieving even partial proportionality and even with disconnected pieces. The only case in which it can be improved is with randomized algorithms guaranteeing partial proportionality with disconnected pieces; see the Edmonds–Pruhs algorithm.

Randomized version: It is possible to use randomization in order to reduce the number of marks. The following randomized version of the recursive halving procedure achieves a proportional division using only O(n) mark queries on average. The idea is that, in each iteration, instead of asking all partners to make a half-value mark, only some partners are asked to make such marks, while the other partners only choose which half they prefer.
The partners are sent either to the west or to the east according to their preferences, until the number of partners on each side is n/2. Then a cut is made, and each group of n/2 partners divides its half recursively. In the worst case we still need n-1 marks per iteration, so the worst-case number of marks required is O(n log n). However, on average only O(log n) marks are required per iteration; by solving a recurrence formula it is possible to show that the average number of marks required is O(n). Note that the total number of queries is still O(n log n), since each partner has to select a half in each iteration.
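Below is a minimal, runnable sketch of the deterministic algorithm described above, under simplifying assumptions: the cake is the interval [0, 1], each partner's preferences are given by a nonnegative valuation density, and the value and mark queries are answered numerically (midpoint-rule integration plus bisection). The agent densities and helper names are illustrative, not from the original papers.

```python
from typing import Callable, Dict, List, Tuple

def make_agent(density: Callable[[float], float], steps: int = 2000):
    """Build (value, mark) query functions for one agent from a
    nonnegative valuation density over [0, 1]."""
    def value(a: float, b: float) -> float:
        # Midpoint-rule estimate of the agent's value of the piece [a, b].
        h = (b - a) / steps
        return sum(density(a + (j + 0.5) * h) for j in range(steps)) * h

    def mark(a: float, b: float, target: float) -> float:
        # Bisection: find x in [a, b] with value(a, x) ~= target.
        lo, hi = a, b
        for _ in range(50):
            mid = (lo + hi) / 2
            if value(a, mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    return value, mark

def even_paz(agents: List[int], a: float, b: float,
             queries: Dict[int, tuple]) -> Dict[int, Tuple[float, float]]:
    """Even-Paz recursive halving on the sub-cake [a, b].

    Each agent marks the point splitting [a, b] in ratio
    floor(n/2) : ceil(n/2) by its own measure; the cake is cut at the
    floor(n/2)-th smallest mark, and each half is divided recursively
    among the agents whose marks fell on that side."""
    n = len(agents)
    if n == 1:
        return {agents[0]: (a, b)}
    k = n // 2
    marks = sorted(
        (queries[i][1](a, b, queries[i][0](a, b) * k / n), i) for i in agents
    )
    cut = marks[k - 1][0]                 # the floor(n/2)-th smallest mark
    left = [i for _, i in marks[:k]]
    right = [i for _, i in marks[k:]]
    pieces = even_paz(left, a, cut, queries)
    pieces.update(even_paz(right, cut, b, queries))
    return pieces

if __name__ == "__main__":
    # Three agents with different (hypothetical) valuation densities.
    densities = [lambda x: 1.0, lambda x: 2.0 * x, lambda x: 3.0 * x * x]
    queries = {i: make_agent(d) for i, d in enumerate(densities)}
    for i, (lo, hi) in sorted(even_paz([0, 1, 2], 0.0, 1.0, queries).items()):
        share = queries[i][0](lo, hi) / queries[i][0](0.0, 1.0)
        print(f"agent {i}: piece [{lo:.3f}, {hi:.3f}], worth {share:.3f} to them")
```

In this run every agent ends up with a contiguous piece it values at 1/3 of the whole cake or more, illustrating the proportionality guarantee proved above.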
**Laminectomy**

Laminectomy: A laminectomy is a surgical procedure that removes a portion of a vertebra called the lamina, which is the roof of the spinal canal. It is a major spine operation with residual scar tissue and may result in postlaminectomy syndrome. Depending on the problem, more conservative treatments (e.g., small endoscopic procedures, without bone removal) may be viable.

Method: The lamina is a posterior arch of the vertebral bone lying between the spinous process (which juts out in the middle) and the more lateral pedicles and transverse processes of each vertebra. The pair of laminae, along with the spinous process, make up the posterior wall of the bony spinal canal. Although the literal meaning of laminectomy is 'excision of the lamina', a conventional laminectomy in neurosurgery and orthopedics involves excision of the supraspinous ligament and some or all of the spinous process. Removal of these structures with an open technique requires disconnecting the many muscles of the back attached to them. A laminectomy performed as a minimal spinal surgery procedure is a tissue-preserving surgery that leaves more of the muscle intact and spares the spinous process. Another procedure, called a laminotomy, removes a mid-portion of one lamina and may be done either with a conventional open technique or in a minimalistic fashion with the use of tubular retractors and endoscopes.

The reason for lamina removal is rarely, if ever, that the lamina itself is diseased; rather, it is removed to break the continuity of the rigid ring of the spinal canal, to allow the soft tissues within the canal to: 1) expand (decompress); 2) change the contour of the vertebral column; or 3) permit access to deeper tissue inside the spinal canal. A laminectomy is also the name of a spinal operation that conventionally includes the removal of one or both laminae, as well as other posterior supporting structures of the vertebral column, including ligaments and additional bone. The actual bone removal may be carried out with a variety of surgical tools, including drills, rongeurs and lasers.

The success rate of a laminectomy depends on the specific reason for the operation, as well as proper patient selection and the surgeon's technical ability. The first laminectomy was performed in 1887 by Victor Alexander Haden Horsley, a professor of surgery at University College London. A laminectomy can treat severe spinal stenosis by relieving pressure on the spinal cord or nerve roots, provide access to a tumor or other mass lying in or around the spinal cord, or help in tailoring the contour of the vertebral column to correct a spinal deformity such as kyphosis. A common type of laminectomy is performed to permit the removal or reshaping of a spinal disc as part of a lumbar discectomy; this is a treatment for a herniated, bulging, or degenerated disc.

The recovery period after a laminectomy depends on the specific operative technique, with minimally invasive procedures having significantly shorter recovery periods than open surgery. Removal of substantial amounts of bone and tissue may require additional procedures such as spinal fusion to stabilize the spine, and these generally require a much longer recovery period than a simple laminectomy. With spinal fusion, the recovery time may be longer; in some cases after laminectomy and spinal fusion, it may take several months to return to normal activities.
Potential complications include bleeding, infection, blood clots, nerve injury, and spinal fluid leak.

For spinal stenosis: Most commonly, a laminectomy is performed to treat spinal stenosis. Spinal stenosis is the single most common diagnosis that leads to spinal surgery, of which a laminectomy represents one component. The lamina of the vertebra is removed or trimmed to widen the spinal canal and create more space for the spinal nerves and the thecal sac. Surgical treatment that includes a laminectomy is the most effective remedy for severe spinal stenosis; however, most cases of spinal stenosis are not severe enough to require surgery. When the disabling symptoms of spinal stenosis are primarily neurogenic claudication and the laminectomy is done without spinal fusion, there is generally a more rapid recovery with less blood loss. However, if the spinal column is unstable and fusion is required, the recovery period can last from several months to more than a year, and symptom relief is far less likely.

Results: In most known cases of lumbar and thoracic laminectomies, patients tend to recover slowly, with recurring pain or spinal stenosis persisting for up to 18 months after the procedure. According to a World Health Organization census in 2001, most patients who had undergone a lumbar laminectomy recovered normal function within one year of their operation.

Back surgery can relieve pressure on the spine, but it is not a cure-all for spinal stenosis. There may be considerable pain immediately after the operation, and pain may persist on a longer-term basis. For some people, recovery can take weeks or months and may require long-term occupational and physical therapy. Surgery does not stop the degenerative process, and symptoms may reappear within several years.
**Ōnusa**

Ōnusa: An ōnusa (大幣), or simply nusa (幣) or taima (大幣), is a wooden wand traditionally used in Shinto purification rituals. Ōnusa are decorated with a number of shide (paper streamers). When the shide are attached to a hexagonal or octagonal staff, the wand is also known as a haraegushi (祓串). The word taima also refers to cannabis in the Japanese language, and nusa is an old word for cannabis. The Jingū Taima (神宮大麻) is a type of ōnusa, although they are often used in different ways than a normal ōnusa, usually being kept in envelopes.

The most common type of nusa today consists of a sakaki branch or a white wooden stick with shide or nusa ramie attached to the end. In the Board of Ceremonies' "Jinja Matsuri Shiki" (1875), a branch of sakaki is used for the nusa, and in Yatsuka Seinan's "Jinja Yushoku Kijitsu" (1951), the nusa is described as a sakaki branch with only ramie or, in addition, shide attached, while the konusa is made of wooden sticks, thin wood or bamboo. At Ise Jingu Shrine, mikisakaki, a sakaki branch with its leaves and branches still attached, is also used with nusa attached to it, and a sakaki branch is attached to a cord of hemp as a yu (cotton). In some cases, such as at Kamogoso Shrine (Shimogamo Shrine), a branch of a peach tree is used, following the myth in the Kojiki.

Nusa are also used in different ways. In the present day, the nusa is shaken noisily as if to purify dust, but in ancient ceremonies, such as at Kasuga Taisha, it is stroked. The same is true at Ise Shrine, where noisy purification is forbidden. Today, the nusa is used by waving it left, right, and left toward the person or object to be purified, which is believed to transfer impurities to the nusa. In the past, the order was left, right, and center. A gohei is an ōnusa with only two shide.
**Nanolithography**

Nanolithography: Nanolithography (NL) is a growing field of techniques within nanotechnology dealing with the engineering (patterning, e.g. etching, depositing, writing, printing, etc.) of nanometer-scale structures on various materials. The modern term refers to the design of structures built in the range of 10^-9 to 10^-6 meters, i.e. the nanometer scale. Essentially, the field is a derivative of lithography, only covering very small structures. All NL methods can be categorized into four groups: photolithography, scanning lithography, soft lithography and other miscellaneous techniques.

History: NL has evolved from the need to increase the number of sub-micrometer features (e.g. transistors, capacitors, etc.) in an integrated circuit in order to keep up with Moore's law. While lithographic techniques have been around since the late 18th century, none were applied to nanoscale structures until the mid-1950s. With the evolution of the semiconductor industry, demand for techniques capable of producing micro- and nano-scale structures skyrocketed. Photolithography was applied to these structures for the first time in 1958, beginning the age of nanolithography.

Since then, photolithography has become the most commercially successful technique, capable of producing sub-100 nm patterns. There are several techniques associated with the field, each designed to serve its many uses in the medical and semiconductor industries. Breakthroughs in this field contribute significantly to the advancement of nanotechnology, and are increasingly important today as demand for smaller and smaller computer chips increases. Further areas of research deal with the physical limitations of the field, energy harvesting, and photonics.

Etymology: From Greek, the word nanolithography can be broken up into three parts: "nano" meaning dwarf, "lith" meaning stone, and "graphy" meaning to write, i.e. "tiny writing onto stone."

Photo lithography: As of 2021, photolithography is the most heavily used technique in the mass production of microelectronics and semiconductor devices. It is characterized by both high production throughput and small-sized pattern features.

Optical lithography: Optical lithography (or photolithography) is one of the most important and prevalent sets of techniques in the nanolithography field. Optical lithography contains several important derivative techniques, all of which use very short light wavelengths to change the solubility of certain molecules, causing them to wash away in solution and leave behind the desired structure. Several optical lithography techniques require the use of liquid immersion and a host of resolution enhancement technologies such as phase-shift masks (PSM) and optical proximity correction (OPC). Techniques in this set include multiphoton lithography, X-ray lithography, light coupling nanolithography (LCM), and extreme ultraviolet lithography (EUVL). The last of these is considered the most important next-generation lithography (NGL) technique due to its ability to accurately produce structures below 30 nanometers at high throughput rates, which makes it a viable option for commercial purposes.

Quantum optical lithography: Quantum optical lithography (QOL) is a diffraction-unlimited method able to write at 1 nm resolution by optical means, using a red laser diode (λ = 650 nm). Complex patterns such as geometrical figures and letters were obtained at 3 nm resolution on resist substrate.
The method was applied to nanopattern graphene at 20 nm resolution.

Electron-beam lithography: Electron beam lithography (EBL), or electron-beam direct-write lithography (EBDW), scans a focused beam of electrons across a surface covered with an electron-sensitive film or resist (e.g. PMMA or HSQ) to draw custom shapes. By changing the solubility of the resist and subsequently selectively removing material by immersion in a solvent, sub-10 nm resolutions have been achieved. This form of direct-write, maskless lithography has high resolution and low throughput, limiting single-column e-beams to photomask fabrication, low-volume production of semiconductor devices, and research and development. Multiple-electron-beam approaches aim to increase throughput for semiconductor mass production. EBL can also be utilized for selective protein nanopatterning on a solid substrate, aimed at ultrasensitive sensing.

Scanning probe lithography: Scanning probe lithography (SPL) is another set of techniques for patterning at the nanometer scale, down to individual atoms, using scanning probes, either by etching away unwanted material or by directly writing new material onto a substrate. Some of the important techniques in this category include dip-pen nanolithography, thermochemical nanolithography, thermal scanning probe lithography, and local oxidation nanolithography. Dip-pen nanolithography is the most widely used of these techniques.

Proton beam writing: This technique uses a focused beam of high-energy (MeV) protons to pattern resist material at nanodimensions, and has been shown to be capable of producing high-resolution patterning well below the 100 nm mark.

Charged-particle lithography: This set of techniques includes ion- and electron-projection lithographies. Ion beam lithography uses a focused or broad beam of energetic lightweight ions (such as He+) to transfer a pattern to a surface. Using ion beam proximity lithography (IBL), nano-scale features can be transferred onto non-planar surfaces.

Soft lithography: Soft lithography uses elastomer materials made from chemical compounds such as polydimethylsiloxane. The elastomers are used to make a stamp, mold, or mask (akin to a photomask), which in turn is used to generate micropatterns and microstructures. The techniques in this group are limited to a single stage: subsequent patterning on the same surface is difficult due to misalignment problems. Soft lithography is not suitable for the production of semiconductor-based devices, as it is not compatible with metal deposition and etching; the methods are commonly used for chemical patterning. Variants include PDMS lithography, microcontact printing, and multilayer soft lithography.

Nanoimprint lithography: Nanoimprint lithography (NIL) and its variants, such as step-and-flash imprint lithography and laser-assisted direct imprint (LADI), are promising nanopattern replication technologies in which patterns are created by mechanical deformation of imprint resists, typically monomer or polymer formulations that are cured by heat or UV light during imprinting. This technique can be combined with contact printing and cold welding. Nanoimprint lithography is capable of producing patterns at sub-10 nm levels.

Magnetolithography: Magnetolithography (ML) is based on applying a magnetic field to the substrate using paramagnetic metal masks called "magnetic masks".
The magnetic mask, which is analogous to a photomask, defines the spatial distribution and shape of the applied magnetic field. The second component is ferromagnetic nanoparticles (analogous to the photoresist) that are assembled onto the substrate according to the field induced by the magnetic mask.

Nanofountain drawing: A nanofountain probe is a micro-fluidic device, similar in concept to a fountain pen, which deposits a narrow track of chemical from a reservoir onto the substrate according to a programmed movement pattern.

Nanosphere lithography: Nanosphere lithography uses self-assembled monolayers of spheres (typically made of polystyrene) as evaporation masks. This method has been used to fabricate arrays of gold nanodots with precisely controlled spacings.

Neutral particle lithography: Neutral particle lithography (NPL) uses a broad beam of energetic neutral particles for pattern transfer onto a surface.

Plasmonic lithography: Plasmonic lithography uses surface plasmon excitations to generate beyond-diffraction-limit patterns, benefiting from the subwavelength field-confinement properties of surface plasmon polaritons.

Stencil lithography: Stencil lithography is a resist-less and parallel method of fabricating nanometer-scale patterns using nanometer-size apertures as shadow masks.
**Dyslalia**

Dyslalia: Dyslalia means difficulty in talking due to structural defects in the speech organs, such as sigmatism (defective pronunciation of sibilant sounds, for example "S" pronounced as "TH") and rhotacism, in which the letter "R" is pronounced as "I" or "Y". It does not include speech impairment due to neurological or other factors.
**11β-Hydroxyprogesterone**

11β-Hydroxyprogesterone: 11β-Hydroxyprogesterone (11β-OHP), also known as 21-deoxycorticosterone, as well as 11β-hydroxypregn-4-ene-3,20-dione, is a naturally occurring, endogenous steroid and derivative of progesterone. It is a potent mineralocorticoid. Synthesis of 11β-OHP from progesterone is catalyzed by the steroid 11β-hydroxylase (CYP11B1) enzyme and, to a lesser extent, by the aldosterone synthase enzyme (CYP11B2).

Function: Along with its epimer 11α-hydroxyprogesterone (11α-OHP), 11β-OHP has been identified as a very potent competitive inhibitor of both isoforms (1 and 2) of 11β-hydroxysteroid dehydrogenase (11β-HSD).

Outcome of 21-hydroxylase deficiency: It has been known since 1987 that increased levels of 11β-OHP occur in 21-hydroxylase deficiency. A study in 2017 showed that in subjects with 21-hydroxylase deficiency, serum 11β-OHP concentrations ranged from 0.012 to 3.37 ng/mL, while in the control group they were below the detection limit of 0.012 ng/mL. 21-Hydroxylase is an enzyme that is also involved in progesterone metabolism, producing 11-deoxycorticosterone. Under normal conditions, 21-hydroxylase has higher activity on progesterone than the steroid 11β-hydroxylase (CYP11B1) and aldosterone synthase (CYP11B2) enzymes that convert progesterone to 11β-OHP. Therefore, in 21-hydroxylase deficiency, given the normal function of the CYP11B enzymes, progesterone is directed towards the 11β-OHP pathway rather than the 11-deoxycorticosterone pathway, which is usually also accompanied by an increase in progesterone levels. In the normal route to aldosterone and cortisol, progesterone and 17α-hydroxyprogesterone are first hydroxylated at position 21 and then hydroxylated at other positions. In 21-hydroxylase deficiency, progesterone and 17α-hydroxyprogesterone accumulate and become the substrates of steroid 11β-hydroxylase, leading to 11β-OHP and 21-deoxycortisol, respectively. In the 2017 study mentioned above, serum progesterone concentrations in boys (10 days to 18 years old) with 21-hydroxylase deficiency reached levels similar to female luteal values (up to 10.14 ng/mL, depending on severity and treatment), while in the control group of boys progesterone averaged 0.07 ng/mL (0.22 nmol/L), ranging from 0.05 to 0.40 ng/mL. While studies suggest that 11β-OHP, also known as 21-deoxycorticosterone, can be used as a marker for adrenal 21-hydroxylase deficiency, another 21-carbon steroid, 21-deoxycortisol (produced from 17α-hydroxyprogesterone), has gained acceptance for this purpose.
**2,3-dihydroxy-2,3-dihydro-p-cumate dehydrogenase**

2,3-dihydroxy-2,3-dihydro-p-cumate dehydrogenase: In enzymology, a 2,3-dihydroxy-2,3-dihydro-p-cumate dehydrogenase (EC 1.3.1.58) is an enzyme that catalyzes the chemical reaction

cis-5,6-dihydroxy-4-isopropylcyclohexa-1,3-dienecarboxylate + NAD+ ⇌ 2,3-dihydroxy-p-cumate + NADH + H+

Thus, the two substrates of this enzyme are cis-5,6-dihydroxy-4-isopropylcyclohexa-1,3-dienecarboxylate and NAD+, whereas its three products are 2,3-dihydroxy-p-cumate, NADH, and H+. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-CH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is cis-2,3-dihydroxy-2,3-dihydro-p-cumate:NAD+ oxidoreductase. This enzyme participates in biphenyl degradation.
**Deferiprone**

Deferiprone: Deferiprone, sold under the brand name Ferriprox among others, is a medication that chelates iron and is used to treat iron overload in thalassaemia major. It was first approved and indicated for use in treating thalassaemia major in 1994 and had been licensed for use in the European Union for many years while awaiting approval in Canada and in the United States. On October 14, 2011, it was approved for use in the US under the FDA's accelerated approval program.

The most common side effects include red-brown urine (showing that iron is being removed through the urine), nausea (feeling sick), abdominal pain (stomach ache) and vomiting. Less common but more serious side effects are agranulocytosis (very low levels of granulocytes, a type of white blood cell) and neutropenia (low levels of neutrophils, a type of white blood cell that fights infections).

Medical uses: Deferiprone monotherapy is indicated in the European Union for the treatment of iron overload in those with thalassaemia major when current chelation therapy is contraindicated or inadequate. Deferiprone in combination with another chelator is indicated in the European Union in those with thalassaemia major when monotherapy with any iron chelator is ineffective, or when prevention or treatment of life-threatening consequences of iron overload (mainly cardiac overload) justifies rapid or intensive correction.

Researchers studying HIV found that the oral drug deferiprone reactivates the "altruistic suicide response" of an HIV-infected cell, killing the HIV DNA it carries. Effective suppression of HIV-1 generation and induction of apoptosis both require deferiprone at a concentration of around 150 μM in infected T-cell lines. Since a 0.5 log10 decrement in HIV-1 RNA corresponds to an additional 2 years of AIDS-free survival and a 0.3 log10 decrement reduces the annual risk of progression to AIDS-related death by 25%, the measurements suggested biological significance (see the conversion note at the end of this section).

Controversy: Deferiprone was at the center of a protracted struggle between Nancy Olivieri, a Canadian haematologist and researcher, and the Hospital for Sick Children and the pharmaceutical company Apotex, which started in 1996 and delayed approval of the drug in North America. Olivieri's data suggested that deferiprone leads to progressive hepatic fibrosis.

History: Deferiprone was approved for medical use in the European Union in August 1999. It was approved for medical use in the United States in October 2011, and generic versions were approved in August 2019. The safety and effectiveness of deferiprone are based on an analysis of data from twelve clinical studies in 236 participants who had not responded to prior iron chelation therapy. Deferiprone was considered a successful treatment for participants who experienced at least a 20 percent decrease in serum ferritin, a protein that stores iron in the body for later use. Half of the participants in the study experienced at least a 20 percent decrease in ferritin levels.
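As a point of reference for the log10 figures above (an illustrative conversion added here, not from the source), a decrement on the log10 scale corresponds to a fold-reduction in viral RNA:

```latex
% A drop of d on the log10 scale is a 10^d-fold reduction in HIV-1 RNA.
\Delta\log_{10} = 0.5 \;\Rightarrow\; 10^{0.5} \approx 3.2\text{-fold reduction}
\qquad
\Delta\log_{10} = 0.3 \;\Rightarrow\; 10^{0.3} \approx 2.0\text{-fold reduction}
```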
**Semiconductor memory**

Semiconductor memory: Semiconductor memory is a digital electronic semiconductor device used for digital data storage, such as computer memory. It typically refers to devices in which data is stored within metal–oxide–semiconductor (MOS) memory cells on a silicon integrated circuit memory chip. There are numerous different types using different semiconductor technologies. The two main types of random-access memory (RAM) are static RAM (SRAM), which uses several transistors per memory cell, and dynamic RAM (DRAM), which uses a transistor and a MOS capacitor per cell. Non-volatile memory (such as EPROM, EEPROM and flash memory) uses floating-gate memory cells, which consist of a single floating-gate transistor per cell.

Most types of semiconductor memory have the property of random access, which means that it takes the same amount of time to access any memory location, so data can be efficiently accessed in any random order. This contrasts with data storage media such as hard disks and CDs, which read and write data consecutively, so that data can only be accessed in the same sequence it was written. Semiconductor memory also has much faster access times than other types of data storage; a byte of data can be written to or read from semiconductor memory within a few nanoseconds, while the access time for rotating storage such as hard disks is in the range of milliseconds. For these reasons it is used for primary storage, to hold the program and data the computer is currently working on, among other uses.

As of 2017, semiconductor memory chips sell $124 billion annually, accounting for 30% of the semiconductor industry. Shift registers, processor registers, data buffers and other small digital registers that have no memory address decoding mechanism are typically not referred to as memory, although they also store digital data.

Description: In a semiconductor memory chip, each bit of binary data is stored in a tiny circuit called a memory cell consisting of one to several transistors. The memory cells are laid out in rectangular arrays on the surface of the chip. The 1-bit memory cells are grouped in small units called words, which are accessed together at a single memory address. Memory is manufactured in word lengths that are usually a power of two, typically N = 1, 2, 4 or 8 bits.

Data is accessed by means of a binary number called a memory address applied to the chip's address pins, which specifies which word in the chip is to be accessed. If the memory address consists of M bits, the number of addresses on the chip is 2^M, each containing an N-bit word, so the amount of data stored in each chip is N·2^M bits (see the short example below). The number of words for M address lines, 2^M, is a power of two: 2, 4, 8, 16, 32, 64, 128, 256, 512 and so on, with capacities measured in kilobits, megabits, gigabits or terabits, etc. As of 2014, the largest semiconductor memory chips hold a few gigabits of data, but higher-capacity memory is constantly being developed. By combining several integrated circuits, memory can be arranged into a larger word length and/or address space than what is offered by each chip, often but not necessarily a power of two.

The two basic operations performed by a memory chip are "read", in which the data contents of a memory word are read out (nondestructively), and "write", in which data is stored in a memory word, replacing any data that was previously stored there.
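A minimal sketch of the address arithmetic just described; the function name and the 20-line, 8-bit chip are illustrative assumptions, not a specific part:

```python
def memory_capacity_bits(m_address_bits: int, n_word_bits: int) -> int:
    """A chip with M address pins and N-bit words stores N * 2^M bits."""
    return n_word_bits * (2 ** m_address_bits)

# Hypothetical chip: M = 20 address lines, N = 8-bit words.
words = 2 ** 20                       # 1,048,576 addressable words
bits = memory_capacity_bits(20, 8)    # 8,388,608 bits
print(words, bits, bits // 8)         # -> 1048576 8388608 1048576 (1 MiB)
```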
To increase the data rate, in some of the latest types of memory chips, such as DDR SDRAM, multiple words are accessed with each read or write operation. In addition to standalone memory chips, blocks of semiconductor memory are integral parts of many computer and data processing integrated circuits. For example, the microprocessor chips that run computers contain cache memory to store instructions awaiting execution.

Types: Volatile memory loses its stored data when the power to the memory chip is turned off. However, it can be faster and less expensive than non-volatile memory. This type is used for the main memory in most computers, since data is stored on the hard disk while the computer is off. Major types are:

- RAM (Random-access memory) – This has become a generic term for any semiconductor memory that can be written to, as well as read from, in contrast to ROM (below), which can only be read. All semiconductor memory, not just RAM, has the property of random access.
- DRAM (Dynamic random-access memory) – This uses memory cells consisting of one MOSFET (MOS field-effect transistor) and one MOS capacitor to store each bit. This type of RAM is the cheapest and highest in density, so it is used for the main memory in computers. However, the electric charge that stores the data in the memory cells slowly leaks out, so the memory cells must be periodically refreshed (rewritten), which requires additional circuitry. The refresh process is handled internally by the computer and is transparent to its user.
  - FPM DRAM (Fast page mode DRAM) – An older type of asynchronous DRAM that improved on previous types by allowing repeated accesses to a single "page" of memory to occur at a faster rate. Used in the mid-1990s.
  - EDO DRAM (Extended data out DRAM) – An older type of asynchronous DRAM which had faster access times than earlier types by being able to initiate a new memory access while data from the previous access was still being transferred. Used in the later part of the 1990s.
  - VRAM (Video random access memory) – An older type of dual-ported memory once used for the frame buffers of video adapters (video cards).
  - SDRAM (Synchronous dynamic random-access memory) – This added circuitry to the DRAM chip which synchronizes all operations with a clock signal added to the computer's memory bus. This allowed the chip to process multiple memory requests simultaneously using pipelining, to increase the speed. The data on the chip is also divided into banks which can each work on a memory operation simultaneously. This became the dominant type of computer memory by about the year 2000.
  - DDR SDRAM (Double data rate SDRAM) – This could transfer twice the data (two consecutive words) on each clock cycle by double pumping (transferring data on both the rising and falling edges of the clock pulse). Extensions of this idea are the current (2012) technique being used to increase memory access rate and throughput: since it is proving difficult to further increase the internal clock speed of memory chips, these chips increase the transfer rate by transferring more data words on each clock cycle.
    - DDR2 SDRAM – Transfers 4 consecutive words per internal clock cycle.
    - DDR3 SDRAM – Transfers 8 consecutive words per internal clock cycle.
    - DDR4 SDRAM – Transfers 16 consecutive words per internal clock cycle.
  - RDRAM (Rambus DRAM) – An alternate double data rate memory standard that was used on some Intel systems but ultimately lost out to DDR SDRAM.
  - XDR DRAM (Extreme data rate DRAM)
  - SGRAM (Synchronous graphics RAM) – A specialized type of SDRAM made for graphics adaptors (video cards). It can perform graphics-related operations such as bit masking and block write, and can open two pages of memory at once.
  - GDDR SDRAM (Graphics DDR SDRAM) – Including GDDR2, GDDR3 SDRAM, GDDR4 SDRAM, GDDR5 SDRAM and GDDR6 SDRAM.
  - HBM (High Bandwidth Memory) – A development of SDRAM used in graphics cards that can transfer data at a faster rate. It consists of multiple memory chips stacked on top of one another, with a wider data bus.
  - PSRAM (Pseudostatic RAM) – This is DRAM which has circuitry to perform memory refresh on the chip, so that it acts like SRAM, allowing the external memory controller to be shut down to save energy. It is used in a few game consoles such as the Wii.
- SRAM (Static random-access memory) – This stores each bit of data in a circuit called a flip-flop, made of 4 to 6 transistors. SRAM is less dense and more expensive per bit than DRAM, but faster and does not require memory refresh. It is used for smaller cache memories in computers.
- CAM (Content-addressable memory) – This is a specialized type in which, instead of accessing data using an address, a data word is applied and the memory returns the location if the word is stored in the memory. It is mostly incorporated in other chips such as microprocessors, where it is used for cache memory.

Non-volatile memory (NVM) preserves the data stored in it during periods when the power to the chip is turned off. Therefore, it is used for the memory in portable devices, which don't have disks, and for removable memory cards, among other uses. Major types are:

- ROM (Read-only memory) – This is designed to hold permanent data, and in normal operation is only read from, not written to. Although many types can be written to, the writing process is slow and usually all the data in the chip must be rewritten at once. It is usually used to store system software which must be immediately accessible to the computer, such as the BIOS program which starts the computer, and the software (microcode) for portable devices and embedded computers such as microcontrollers.
  - MROM (Mask-programmed ROM or Mask ROM) – In this type the data is programmed into the chip when the chip is manufactured, so it is only used for large production runs. It cannot be rewritten with new data.
  - PROM (Programmable read-only memory) – In this type the data is written into an existing PROM chip before it is installed in the circuit, but it can only be written once. The data is written by plugging the chip into a device called a PROM programmer.
  - EPROM (Erasable programmable read-only memory or UVEPROM) – In this type the data can be rewritten by removing the chip from the circuit board, exposing it to ultraviolet light to erase the existing data, and plugging it into a PROM programmer. The IC package has a small transparent "window" in the top to admit the UV light. It is often used for prototypes and small production-run devices, where the program in it may have to be changed at the factory.
  - EEPROM (Electrically erasable programmable read-only memory) – In this type the data can be rewritten electrically, while the chip is on the circuit board, but the writing process is slow. This type is used to hold firmware, the low-level microcode which runs hardware devices, such as the BIOS program in most computers, so that it can be updated.
Types: NVRAM (Non-volatile random-access memory). FRAM (Ferroelectric RAM) – One type of nonvolatile RAM. Types: Flash memory – In this type the writing process is intermediate in speed between EEPROMs and RAM; it can be written to, but not fast enough to serve as main memory. It is often used as a semiconductor version of a hard disk, to store files. It is used in portable devices such as PDAs, USB flash drives, and removable memory cards used in digital cameras and cellphones. History: Early computer memory consisted of magnetic-core memory, as early solid-state electronic devices, including transistors such as the bipolar junction transistor (BJT), were impractical for use as digital storage elements (memory cells). The earliest semiconductor memory dates back to the early 1960s, with bipolar memory, which used bipolar transistors. Bipolar semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. The same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor. The first single-chip memory IC was the BJT 16-bit IBM SP95, fabricated in December 1965 and engineered by Paul Castrucci. While bipolar memory offered improved performance over magnetic-core memory, it could not compete with the lower price of magnetic-core memory, which remained dominant up until the late 1960s. Bipolar memory failed to replace magnetic-core memory because bipolar flip-flop circuits were too large and expensive. History: MOS memory The advent of the metal–oxide–semiconductor field-effect transistor (MOSFET), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements, a function previously served by magnetic cores in computer memory. MOS memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher performance, MOS memory was cheaper and consumed less power than magnetic-core memory. This led to MOSFETs eventually replacing magnetic cores as the standard storage elements in computer memory. In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. NMOS memory was commercialized by IBM in the early 1970s. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s. The term "memory" when used with reference to computers most often refers to volatile random-access memory (RAM). The two main types of volatile RAM are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963, followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for the System/360 Model 95. Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965.
While it offered improved performance over magnetic-core memory, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory. MOS technology is the basis for modern DRAM. In 1966, Dr. Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM memory cell, based on MOS technology. This led to the first commercial DRAM IC chip, the Intel 1103, in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992. The term "memory" is also often used to refer to non-volatile memory, specifically flash memory. It has origins in read-only memory (ROM). Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation. In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable read-only memory (ROM), which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971. EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at Japan's Ministry of International Trade and Industry (MITI) Electrotechnical Laboratory in 1972. Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash in 1987. Toshiba commercialized NAND flash memory in 1987.
**1888 Boston Beaneaters season** 1888 Boston Beaneaters season: The 1888 Boston Beaneaters season was the 18th season of the franchise. Regular season: Season standings Record vs. opponents Roster Player stats: Batting Starters by position Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in Other batters Note: G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in Pitching Starting pitchers Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts
**Third umpire** Third umpire: The third umpire (or TV Umpire) is an off-field umpire used in some cricket matches, particularly international matches. Their role is to make the final decision in questions referred to them by the two on-field umpires or the players. The third umpire is also there to act as an emergency on-field umpire if required. History: The third umpire was conceptualized by former Sri Lankan domestic cricketer, and current cricket writer, Mahinda Wijesinghe. History: It debuted in Test cricket in November 1992 at Kingsmead, Durban for the South Africa vs. India series. Karl Liebenberg was the third umpire, with Cyril Mitchley the on-field umpire referring the run-out decision in this match. Sachin Tendulkar became the first batsman to be dismissed (run out) by use of television replays, on the second day of the Test, having scored 11. Appointment: The third umpire is appointed from the Elite Panel of ICC Umpires or the International Panel of ICC Umpires for Test matches, ODIs, and T20Is. For all Test matches, and for ODIs where DRS is used, the third umpire is appointed by the ICC, and is of a different nationality to the two sides. For ODIs where DRS is not used, and for all T20Is, the third umpire is appointed by the home side's governing body. Functions: Decision requests An on-field umpire can, at his own discretion, use a radio link to refer particular types of close decision to the third umpire; this is called an Umpire Review. When the full Umpire Decision Review System is not in use, the third umpire uses television replays (only) to assist him in coming to a decision. When the full DRS is in use, players can also initiate reviews of particular decisions by the on-field umpires; this is called a Player Review. These are judged by the third umpire, and the third umpire has the full range of technology available beyond simple replays, for both Umpire Reviews and Player Reviews. Functions: Emergency on-field umpire In the case of injury or illness to one of the on-field umpires, the third umpire will take his place. The third umpire's duties will then be taken on by the fourth umpire. For example, during the 4th ODI between Australia and India at Canberra in 2015–16, umpire Richard Kettleborough was injured during Australia's innings and was replaced by third umpire Paul Wilson.
**Gates** Gates: Gates is the plural of gate, a point of entry to a space which is enclosed by walls. It may also refer to: People: Gates (surname), various people with the last name Gates Brown (1939-2013), American Major League Baseball player Gates McFadden (born 1949), American actress and choreographer Gates P. Thruston (1835-1912), American Civil War veteran, lawyer and businessman Places: Canada Gates, British Columbia, Canada, a rural community Gates River, a river in British Columbia Gates Valley, a valley in British Columbia Gates Lake, at the head of the Gates River United States Gates, Nebraska, an unincorporated community Gates, New York, a town Gates (CDP), New York, census-designated place Gates, Oregon, a city Gates, Tennessee, a town Gates County, North Carolina, United States Gates, North Carolina, an unincorporated community in the county Gates Pass, Arizona, a mountain pass Art and entertainment: Gates (character), a fictional character, an insectoid member of the Legion of Super-Heroes Gates (band), a post rock band from New Jersey Gates (TV series), a 2012 UK situation comedy TV series The Gates (TV series), a 2010 American supernatural crime drama television series on ABC The Gates, an art installation by Christo and Jeanne-Claude in Central Park in New York City Other uses: Gates Airport (disambiguation) Gates Bar-B-Q, a restaurant in Kansas City, Missouri Gates Corporation, a manufacturer of power transmission belts and fluid power products
**LCFG** LCFG: LCFG stands for "Local ConFiGuration system". Developed at the University of Edinburgh beginning around 1993, it is "a system for automatically installing and managing the configuration of large numbers of computer systems. It is particularly suitable for sites with very diverse and rapidly changing configurations". System architecture: The configuration of the entire site is described in a set of source files, held on a central server. Note that one source file does not (necessarily) correspond to one machine; a source file typically describes one aspect of the overall configuration, such as "parameters specific to student machines", or "parameters specific to Scientific Linux machines". An individual parameter may be affected by more than one source file. System architecture: The source files are compiled into individual profiles. One profile corresponds to one machine, and the profile contains all the configuration parameters necessary to recreate the configuration of the target machine. The profiles are published on a web server. When a profile changes, the corresponding client is sent a simple UDP notification. The client retrieves the profile using HTTP, and caches the parameters in a DBM file. Clients normally poll periodically for new configurations in case the notification has been missed. Clients periodically send a simple UDP acknowledgement to the server. These are collated to generate a web page showing status information for all the clients. System architecture: Component scripts on the client are responsible for reading the configuration parameters and taking the appropriate actions necessary to implement the configuration; usually this involves generating configuration files from the parameters in the profile. Components are notified when a new configuration is received which involves a change to some parameter of that component. The component regenerates any necessary configuration files, and notifies any associated daemons.
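As a rough illustration of the notify-plus-poll pattern described above, the following Python sketch shows one way such a client loop could look. Every specific value in it (the URL, the UDP port, the poll interval, and the treatment of any datagram as a change notification) is an assumption made for the example; LCFG's actual wire protocol, message formats, and DBM caching are more involved.

```python
# Hedged sketch of a client: listen for a UDP "profile changed" nudge, but
# also poll periodically in case a notification was missed, then fetch the
# profile over HTTP. Values below are invented, not LCFG's real protocol.

import socket
import urllib.request

PROFILE_URL = "http://config-server.example.org/profiles/thishost.xml"  # assumed
NOTIFY_PORT = 5551      # assumed, not the real LCFG port
POLL_SECONDS = 600      # routine poll period, a fallback for lost notifications

def fetch_profile() -> bytes:
    with urllib.request.urlopen(PROFILE_URL) as resp:
        return resp.read()  # a real client would cache the parameters locally

def run_client() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", NOTIFY_PORT))
    sock.settimeout(POLL_SECONDS)
    while True:
        try:
            sock.recv(512)  # any datagram is treated as "profile changed"
        except socket.timeout:
            pass            # no notification: fall through to a routine poll
        profile = fetch_profile()
        print(f"retrieved {len(profile)} bytes; components would now rerun")

if __name__ == "__main__":
    run_client()
```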
**Dotless J** Dotless J: ȷ is a modified letter of the Latin alphabet, obtained by writing the lowercase letter j without a dot. Dotless j was formerly used in Karelian to mark palatalisation.
**Lithium succinate** Lithium succinate: Lithium succinate (C4H4Li2O4), the dilithium salt of succinic acid, is a drug used in the treatment of seborrhoeic dermatitis and proposed for the treatment of anogenital warts.
**Methylene bridge** Methylene bridge: In organic chemistry, a methylene bridge, methylene spacer, or methanediyl group is any part of a molecule with formula −CH2−; namely, a carbon atom bound to two hydrogen atoms and connected by single bonds to two other distinct atoms in the rest of the molecule. It is the repeating unit in the skeleton of the unbranched alkanes. Methylene bridge: A methylene bridge can also act as a bidentate ligand joining two metals in a coordination compound, such as titanium and aluminum in Tebbe's reagent. A methylene bridge is often called a methylene group or simply methylene, as in "methylene chloride" (dichloromethane CH2Cl2). As a bridge in other compounds, for example in cyclic compounds, it is given the name methano. However, the term methylene group (or "methylidene") properly applies to the CH2 group when it is connected to the rest of the molecule by a double bond (=CH2), giving it chemical properties very distinct from those of a bridging CH2 group. Reactions: Compounds possessing a methylene bridge located between two strong electron-withdrawing groups (such as nitro, carbonyl or nitrile groups) are sometimes called active methylene compounds. Treatment of these with strong bases can form enolates or carbanions, which are often used in organic synthesis. Examples include the Knoevenagel condensation and the malonic ester synthesis. Examples: Examples of compounds which contain methylene bridges include:
**Kazoo** Kazoo: The kazoo is an American musical instrument that adds a "buzzing" timbral quality to a player's voice when the player vocalizes into it. It is a type of mirliton (which itself is a membranophone), one of a class of instruments which modifies its player's voice by way of a vibrating membrane of goldbeater's skin or material with similar characteristics. Playing: A kazoo player hums, rather than blows, into the bigger and flattened side of the instrument. The oscillating air pressure of the hum makes the kazoo's membrane vibrate. The resulting sound varies in pitch and loudness with the player's humming. Players can produce different sounds by singing specific syllables such as doo, too, who, rrrrr or brrr into the kazoo. History: Simple membrane instruments played by vocalizing, such as the onion flute, have existed since at least the 16th century. It is claimed that Alabama Vest, an African-American in Macon, Georgia, invented the kazoo around 1840, although there is no documentation to support that claim. The story originated with the Kaminsky International Kazoo Quartet, a group of satirical kazoo players, which may cast doubt on the veracity of the story, as does the name "Alabama Vest" itself. History: In 1879, Simon Seller received a patent for a "Toy Trumpet" that worked on the same principle as a kazoo: "By blowing through the tube A, and at the same time humming a sort of a head sound, a musical vibration is given to the paper covering c over the aperture b, and a sound produced pleasing to the ear." Seller's "toy trumpet" was basically a hollow sheet-metal tube, with a rectangular aperture cut out along the length of the tube, with paper covering the aperture, and a funnel at the end, like the bell of a trumpet. The first documented appearance of a kazoo was that created by an American inventor, Warren Herbert Frost, who named his new musical instrument kazoo in his patent #270,543, issued on January 9, 1883. The patent states, "This instrument or toy, to which I propose to give the name 'kazoo'..." Frost's kazoo did not have the streamlined, submarine shape of modern kazoos, but it was similar in that the aperture was circular and elevated above the length of the tube. The modern kazoo, also the first one made of metal, was patented by George D. Smith of Buffalo, New York, May 27, 1902. In 1916, the Original American Kazoo Company in Eden, New York started manufacturing kazoos for the masses in a two-room shop and factory, utilizing a couple of dozen jack presses for cutting, bending and crimping metal sheets. These machines were used for many decades. By 1994, the company produced 1.5 million kazoos per year and was the only manufacturer of metal kazoos in North America. The factory, in nearly its original configuration, is now called The Kazoo Factory and Museum. It is still operating, and it is open to the public for tours. In 2010, The Kazoo Museum opened in Beaufort, South Carolina with exhibits on kazoo history. Professional usage: The kazoo is played professionally in jug bands and comedy music, and by amateurs everywhere. It is among the acoustic instruments developed in the United States, and one of the easiest melodic instruments to play, requiring only the ability to vocalize in tune. In North East England and South Wales, kazoos play an important role in juvenile jazz bands. During Carnival, players use kazoos in the Carnival of Cádiz in Spain and in the corsos on the murgas in Uruguay.
Professional usage: In the Original Dixieland Jass Band 1921 recording of Crazy Blues, what the casual listener might mistake for a trombone solo is actually a kazoo solo by drummer Tony Sbarbaro. Professional usage: Red McKenzie played kazoo in a Mound City Blue Blowers 1929 film short. The Mound City Blue Blowers had a number of hit kazoo records in the early 1920s featuring Dick Slevin on metal kazoo and Red McKenzie on comb and tissue paper (although McKenzie also played metal kazoo). The vocaphone, a kind of kazoo with a trombone-like tone, was occasionally featured in Paul Whiteman's Orchestra. Trombonist-vocalist Jack Fulton played it on Whiteman's recording of Vilia (1931) and Frankie Trumbauer's Medley of Isham Jones Dance Hits (1932). The Mills Brothers vocal group originally started in vaudeville as a kazoo quartet, playing four-part harmony on kazoo with one brother accompanying them on guitar. The kazoo is rare in European classical music. It does appear in David Bedford's With 100 Kazoos, where, rather than having professionals play the instrument, kazoos are handed out to the audience, who accompany a professional instrumental ensemble. Leonard Bernstein included a segment for kazoo ensemble in the First Introit (Rondo) of his Mass. The kazoo was used in the 1990 Koch International and 2007 Naxos Records recordings of American classical composer Charles Ives' Yale-Princeton Football Game, where the kazoo chorus represents the football crowd's cheering. The brief passages have the kazoo chorus sliding up and down the scale as the "cheering" rises and falls. Professional usage: In Frank Loesser's score for the 1961 Broadway musical comedy How to Succeed in Business Without Really Trying, several kazoos produce the effect of electric razors used in the executive washroom during a dance reprise of the ballad I Believe in You. Professional usage: In 1961, Del Shannon's "So Long Baby", issued on Big Top Records, featured a kazoo on the instrumental break. In addition to the single release, it featured on the UK London American release of his album Hats Off To Del Shannon. Joanie Sommers' 1962 hit single "Johnny Get Angry" featured a kazoo ensemble in its instrumental bridge, as did Dion's hit of the same year, "Little Diane", and Ringo Starr's 1973 cover of "You're Sixteen". Professional usage: Jesse Fuller's 1962 recording of his song "San Francisco Bay Blues" features a kazoo solo, as does Eric Clapton's 1992 recording of the song on MTV's Unplugged television show and album. On the song "Alligator" on the Grateful Dead album Anthem of the Sun, three members of the band play kazoo together. Many Paolo Conte performances include kazoo passages. Professional usage: Short kazoo performances appear on many modern recordings, usually for comic effect. For example, on his first album, Freak Out!, Frank Zappa used the kazoo to add a comic feel to some songs, including one of his best known, "Hungry Freaks, Daddy". In the song "Crosstown Traffic" from the album Electric Ladyland, Jimi Hendrix used a comb-and-paper instrument to accompany the guitar and accentuate a blown-out speaker sound. The song "Lovely Rita", from the Beatles album Sgt. Pepper's Lonely Hearts Club Band, uses combs-and-paper instruments. Kazoo playing parodied the sound of a military brass band in the Pink Floyd song "Corporal Clegg". In the McGuinness Flint recording When I'm Dead and Gone, Benny Gallagher and Graham Lyle play kazoos in harmony during the instrumental break.
The New Seekers' live track (Ever Since You Told Me That You Loved Me) I'm A Nut features a kazoo solo by singer Eve Graham. British singer-songwriter Ray Dorset, the leader of pop-blues band Mungo Jerry, played the kazoo on many of his band's recordings, as did former member Paul King. Professional usage: One of the best-known kazooists of recent times is Barbara Stewart (1941–2011). Stewart, a classically trained singer, wrote a book on the kazoo, formed the "quartet" Kazoophony, and performed kazoo at Carnegie Hall and on the Late Night with Conan O'Brien television show. The steampunk band Steam Powered Giraffe has audience members play kazoos at some of their concerts. They also sell Kazookaphones, a standard kazoo with optional bugle horn and phonograph. Professional usage: The kazoo is used regularly on the BBC Radio 4 comedy panel game show I'm Sorry I Haven't a Clue, often paired with the swanee whistle in a musical round called "Swanee-Kazoo". The soundtrack of the film Chicken Run, released in 2000 and composed by John Powell and Harry Gregson-Williams, makes use of kazoos in several pieces. The theme songs of the children's cartoons Foster's Home for Imaginary Friends, Little Princess, and OOglies feature this instrument heavily. The video game Yoshi's New Island, released in 2014, has synthesized kazoos in several tracks of its soundtrack. The video game Plants vs. Zombies 2 has medieval-styled kazoos in Dark Ages levels. The American glam metal band Steel Panther released in December 2014 the Christmas track "The Stocking Song", which includes a kazoo hook from Deck the Halls. The Australian psychedelic rock band Tame Impala released a single in 2009, "Sundown Syndrome", which includes a rhythmic kazoo part. The Swedish rock band Ghost have performed live acoustic renditions of their song "Ghuleh/Zombie Queen", which features the kazoo in place of the recorded version's keyboards. Professional usage: The Ukrainian polka band Los Colorados released a cover of the Rammstein song "Du Hast", which features a kazoo. In November 2010, Sandra Boynton produced and released a full-length 300-kazoo plus orchestra performance of Maurice Ravel's Bolero, titled Boléro Completely Unraveled, performed by the Highly Irritating Orchestra. Boynton played solo kazoo on this recording, noting "I am at the perfect level of musical incompetence for this." Records: On March 14, 2011, the audience at BBC Radio 3's Red Nose Show at the Royal Albert Hall, along with a star-studded kazoo band, set a new Guinness World Record for Largest Kazoo Ensemble. The 3,910 kazooists played Wagner's Ride of the Valkyries and the Dambusters March. This surpassed the previous record of 3,861 players, set in Sydney, Australia, in 2009. The current record of 5,190 was set later the same night in a second attempt. On August 9, 2010, the San Francisco Giants hosted a Jerry Garcia tribute night, in which an ensemble of an estimated 9,000 kazooists played "Take Me Out to the Ball Game."
**Projective harmonic conjugate** Projective harmonic conjugate: In projective geometry, the harmonic conjugate point of a point on the real projective line with respect to two other points is defined by the following construction: Given three collinear points A, B, C, let L be a point not lying on their join and let any line through C meet LA, LB at M, N respectively. If AN and BM meet at K, and LK meets AB at D, then D is called the harmonic conjugate of C with respect to A and B. The point D does not depend on what point L is taken initially, nor upon what line through C is used to find M and N. This fact follows from Desargues' theorem. Projective harmonic conjugate: In real projective geometry, harmonic conjugacy can also be defined in terms of the cross-ratio as $(A, B; C, D) = -1$. Cross-ratio criterion: The four points are sometimes called a harmonic range (on the real projective line) as it is found that D always divides the segment AB internally in the same proportion as C divides AB externally. That is, $|AC| : |BC| = |AD| : |DB|$. If these segments are now endowed with the ordinary metric interpretation of real numbers they will be signed and form a double proportion known as the cross ratio (sometimes double ratio), $(A, B; C, D) = \frac{\overline{AC}}{\overline{BC}} \Big/ \frac{\overline{AD}}{\overline{BD}}$, for which a harmonic range is characterized by a value of −1. We therefore write $(A, B; C, D) = -1$. Cross-ratio criterion: The value of a cross ratio in general is not unique, as it depends on the order of selection of segments (and there are six such selections possible). But for a harmonic range in particular there are just three values of cross ratio: {−1, 1/2, 2}, since −1 is self-inverse – so exchanging the last two points merely reciprocates each of these values but produces no new value. This is known classically as the harmonic cross-ratio. Cross-ratio criterion: In terms of a double ratio, given points a and b on an affine line, the division ratio of a point x is $t(x) = \frac{x-a}{x-b}$. Note that when a < x < b, then t(x) is negative, and that it is positive outside of the interval. The cross-ratio $(c, d; a, b) = \frac{t(c)}{t(d)}$ is a ratio of division ratios, or a double ratio. Setting the double ratio to minus one means that when t(c) + t(d) = 0, then c and d are harmonic conjugates with respect to a and b. So the division ratio criterion is that they be additive inverses. Harmonic division of a line segment is a special case of Apollonius' definition of the circle. In some school studies the configuration of a harmonic range is called harmonic division. Of midpoint: When x is the midpoint of the segment from a to b, then $t(x) = \frac{x-a}{x-b} = -1$. By the cross-ratio criterion, the harmonic conjugate of x will be y when t(y) = 1. But there is no finite solution for y on the line through a and b. Nevertheless, $\lim_{y \to \infty} t(y) = 1$, thus motivating inclusion of a point at infinity in the projective line. This point at infinity serves as the harmonic conjugate of the midpoint x. From complete quadrangle: Another approach to the harmonic conjugate is through the concept of a complete quadrangle such as KLMN in the above diagram. Based on four points, the complete quadrangle has pairs of opposite sides and diagonals. In the expression of harmonic conjugates by H. S. M.
Coxeter, the diagonals are considered a pair of opposite sides: D is the harmonic conjugate of C with respect to A and B, which means that there is a quadrangle IJKL such that one pair of opposite sides intersect at A, and a second pair at B, while the third pair meet AB at C and D. It was Karl von Staudt who first used the harmonic conjugate as the basis for projective geometry independent of metric considerations: ...Staudt succeeded in freeing projective geometry from elementary geometry. In his Geometrie der Lage, Staudt introduced a harmonic quadruple of elements independently of the concept of the cross ratio following a purely projective route, using a complete quadrangle or quadrilateral. To see the complete quadrangle applied to obtaining the midpoint, consider the following passage from J. W. Young: If two arbitrary lines AQ, AS are drawn through A and lines BS, BQ are drawn through B parallel to AQ, AS respectively, the lines AQ, SB meet, by definition, in a point R at infinity, while AS, QB meet by definition in a point P at infinity. The complete quadrilateral PQRS then has two diagonal points at A and B, while the remaining pair of opposite sides pass through M and the point at infinity on AB. The point M is then by construction the harmonic conjugate of the point at infinity on AB with respect to A and B. On the other hand, that M is the midpoint of the segment AB follows from the familiar proposition that the diagonals of a parallelogram (PQRS) bisect each other. From complete quadrangle: Quaternary relations Four ordered points on a projective range are called harmonic points when there is a tetrastigm in the plane such that the first and third are codots and the other two points are on the connectors of the third codot. If p is a point not on a straight with harmonic points, the joins of p with the points are harmonic straights. Similarly, if the axis of a pencil of planes is skew to a straight with harmonic points, the planes on the points are harmonic planes. A set of four in such a relation has been called a harmonic quadruple. Projective conics: A conic in the projective plane is a curve C that has the following property: If P is a point not on C, and if a variable line through P meets C at points A and B, then the variable harmonic conjugate of P with respect to A and B traces out a line. The point P is called the pole of that line of harmonic conjugates, and this line is called the polar line of P with respect to the conic. See the article Pole and polar for more details. Projective conics: Inversive geometry In the case where the conic is a circle, on the extended diameters of the circle, harmonic conjugates with respect to the circle are inverses in a circle. This fact follows from one of Smogorzhevsky's theorems: If circles k and q are mutually orthogonal, then a straight line passing through the center of k and intersecting q does so at points symmetrical with respect to k. That is, if the line is an extended diameter of k, then the intersections with q are harmonic conjugates. Galois tetrads: In Galois geometry over a Galois field GF(q) a line has q + 1 points, where ∞ = (1,0). In this line four points form a harmonic tetrad when two harmonically separate the others. The condition $(a, b; c, d) = -1$, equivalently $2(cd + ab) = (c + d)(a + b)$, characterizes harmonic tetrads.
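As a quick worked instance of the criterion just stated, one can fix coordinates a = 0 and b = 1 on the line and solve the condition for the harmonic conjugate d of a point c; the computation below is our own illustration, not part of the original treatment.

```latex
% Harmonic conjugate of c with respect to a = 0, b = 1,
% using the condition 2(cd + ab) = (c + d)(a + b):
\[
2(cd + 0) = (c + d)(0 + 1)
\;\Longrightarrow\;
2cd = c + d
\;\Longrightarrow\;
d = \frac{c}{2c - 1}.
\]
% Consistency checks: c = 1/2 (the midpoint) sends d to infinity, matching
% the point-at-infinity discussion above, and the division ratios satisfy
% t(c) + t(d) = 0 with t(x) = (x - a)/(x - b).
```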
Attention to these tetrads led Jean Dieudonné to his delineation of some accidental isomorphisms of the projective linear groups PGL(2, q) for q = 5, 7, 9. If $q = 2^n$, and given A and B, then the harmonic conjugate of C is C itself. Iterated projective harmonic conjugates and the golden ratio: Let $P_0$, $P_1$, $P_2$ be three different points on the real projective line. Consider the infinite sequence of points $P_n$, where $P_n$ is the harmonic conjugate of $P_{n-3}$ with respect to $P_{n-1}$, $P_{n-2}$ for n > 2. This sequence is convergent. For a finite limit P we have $\lim_{n \to \infty} \frac{P_{n+1}P}{P_nP} = -\Phi^{-2} = -\frac{3-\sqrt{5}}{2}$, where $\Phi = \tfrac{1}{2}(1+\sqrt{5})$ is the golden ratio, i.e. $P_{n+1}P \approx -\Phi^{-2}\,P_nP$ for large n. Iterated projective harmonic conjugates and the golden ratio: For an infinite limit we have $\lim_{n \to \infty} \frac{P_{n+2}P_{n+1}}{P_{n+1}P_n} = -1-\Phi = -\Phi^2$. For a proof consider the projective isomorphism $f(z) = \frac{az+b}{cz+d}$ with $f((-1)^n \Phi^{2n}) = P_n$.
**Antineoplastic resistance** Antineoplastic resistance: Antineoplastic resistance, often used interchangeably with chemotherapy resistance, is the resistance of neoplastic (cancerous) cells to anti-cancer therapies, i.e. the ability of cancer cells to survive and grow despite them. In some cases, cancers can evolve resistance to multiple drugs, called multiple drug resistance. There are two general causes of antineoplastic therapy failure: inherent genetic characteristics, which give cancer cells their resistance from the start and are rooted in the concept of cancer cell heterogeneity, and resistance acquired after drug exposure. Characteristics of resistant cells include altered membrane transport, enhanced DNA repair, apoptotic pathway defects, alteration of target molecules, and protein and pathway mechanisms such as enzymatic deactivation. Since cancer is a genetic disease, two genomic events underlie acquired drug resistance: genome alterations (e.g. gene amplification and deletion) and epigenetic modifications. Cancer cells are constantly using a variety of tools, involving genes, proteins, and altered pathways, to ensure their survival against antineoplastic drugs. Definition: Antineoplastic resistance, synonymous with chemotherapy resistance, is the ability of cancer cells to survive and grow despite different anti-cancer therapies, i.e. their multiple drug resistance. There are two general causes of antineoplastic therapy failure: inherent resistance, such as genetic characteristics that give cancer cells their resistance from the beginning, which is rooted in the concept of cancer cell heterogeneity, and resistance acquired after drug exposure. Cancer cell heterogeneity: Cancer cell heterogeneity, or tumour heterogeneity, is the idea that tumours are made up of different populations of cancer cells that are morphologically, phenotypically and functionally different. Certain populations of cancer cells may possess inherent characteristics, such as genetic mutations and/or epigenetic changes, that confer drug resistance. Antineoplastic drugs kill susceptible sub-populations, and the tumour mass may shrink as an initial response to the drug; however, resistant cancer cells will survive treatment, be selected, and then propagate, eventually causing a cancer relapse. Cancer cell heterogeneity can cause disease progression when molecularly targeted therapy fails to kill those tumor cells which do not express the marker; these cells then divide and mutate further, creating a new heterogeneous tumour. In mouse models of breast cancer, the immune microenvironment affects susceptibility to neoadjuvant chemotherapy. In breast cancer, particularly in the triple-negative subtype, immune checkpoint blockade has been used successfully in metastatic cases and neoadjuvant therapy. Acquired resistance: Since cancer is a genetic disease, two genomic events underlie these mechanisms of acquired drug resistance: genome alterations (e.g. gene amplification and deletion) and epigenetic modifications. Genetic causes Genome alterations Chromosomal rearrangement due to genome instability can cause gene amplification and deletion. Acquired resistance: Gene amplification is the increase in copy number of a region of a chromosome.
Such amplifications occur frequently in solid tumors and can contribute to tumor evolution through altered gene expression. Hamster cell research in 1993 showed that amplifications in the DHFR gene, which is involved in DNA synthesis, began with a chromosome break below the gene, and that subsequent cycles of breakage-fusion-bridge formations resulted in large intrachromosomal repeats. The overamplification of oncogenes can occur in response to chemotherapy, and is thought to be the underlying mechanism in several classes of resistance. For example, DHFR amplification occurs in response to methotrexate, TYMS (involved in DNA synthesis) amplification occurs in response to 5-fluorouracil, and BCR-ABL amplification occurs in response to imatinib mesylate. Determining areas of gene amplification in cells from cancer patients has huge clinical implications. Acquired resistance: Gene deletion is the opposite of gene amplification; a region of a chromosome is lost, and drug resistance occurs by losing tumor suppressor genes such as TP53. Genomic instability can occur when the replication fork is disturbed or stalled in its migration. This can occur with replication fork barriers, proteins such as PTIP, CHD4 and PARP1, which are normally cleared by the cell's DNA damage sensors, surveyors, and responders BRCA1 and BRCA2. Acquired resistance: Epigenetic mechanisms Epigenetic modifications in antineoplastic drug resistance play a major role in cancer development and drug resistance as they contribute to the regulation of gene expression. Two main types of epigenetic control are DNA methylation and histone methylation/acetylation. DNA methylation is the process of adding methyl groups to DNA, usually in the upstream promoter regions, which stops DNA transcription at the region and effectively silences individual genes. Histone modifications, such as deacetylation, alter chromatin formation and silence large chromosomal regions. In cancer cells, where normal regulation of gene expression breaks down, oncogenes are activated via hypomethylation and tumor suppressors are silenced via hypermethylation. Similarly, in drug resistance development, it has been suggested that epigenetic modifications can result in the activation and overexpression of pro-drug resistance genes. Studies on cancer cell lines have shown that hypomethylation (loss of methylation) of the MDR1 gene promoter caused its overexpression and multidrug resistance. In methotrexate-resistant breast cancer cell lines lacking drug uptake and folate carrier expression, treatment with DAC, a DNA methylation inhibitor, improved drug uptake and folate carrier expression. Acquired resistance to the alkylating drug fotemustine in melanoma cells was associated with high MGMT activity related to the hypermethylation of the MGMT gene exons. In imatinib-resistant cell lines, silencing of the SOCS-3 gene via methylation has been shown to cause STAT3 protein activation, which caused uncontrolled proliferation. Acquired resistance: Cancer cell mechanisms Cancer cells can become resistant to multiple drugs by altered membrane transport, enhanced DNA repair, apoptotic pathway defects, alteration of target molecules, and protein and pathway mechanisms such as enzymatic deactivation. Acquired resistance: Altered membrane transport Many classes of antineoplastic drugs act on intracellular components and pathways, such as DNA and nuclear components, meaning that they need to enter the cancer cells.
The p-glycoprotein (P-gp), or the multiple drug resistance protein, is a phosphorylated and glycosylated membrane transporter that can shuttle drugs out of the cell, thereby decreasing or ablating drug efficacy. This transporter protein is encoded by the MDR1 gene and is also called the ATP-binding cassette (ABC) protein. MDR1 has promiscuous substrate specificity, allowing it to transport many structurally diverse compounds across the cell membrane, mainly hydrophobic compounds. Studies have found that the MDR1 gene can be activated and overexpressed in response to pharmaceutical drugs, thus forming the basis for resistance to many drugs. Overexpression of the MDR1 gene in cancer cells is used to keep intracellular levels of antineoplastic drugs below cell-killing levels. For example, the antibiotic rifampicin has been found to induce MDR1 expression. Experiments in different drug resistant cell lines and patient DNA revealed gene rearrangements which had initiated the activation or overexpression of MDR1. A C3435T polymorphism in exon 26 of MDR1 has also been strongly correlated with p-glycoprotein activities. MDR1 is activated through NF-κB, a protein complex which acts as a transcription factor. In the rat, an NF-κB binding site is adjacent to the mdr1b gene. NF-κB can be active in tumour cells because its gene is mutated, or because its inhibitory IκB gene has mutated under chemotherapy. In colorectal cancer cells, inhibition of NF-κB or MDR1 caused increased apoptosis in response to a chemotherapeutic agent. Acquired resistance: Enhanced DNA repair Enhanced DNA repair plays an important role in the ability of cancer cells to overcome drug-induced DNA damage. Acquired resistance: Platinum-based chemotherapies, such as cisplatin, target tumour cells by cross-linking their DNA strands, causing mutation and damage. Such damage will trigger programmed cell death (e.g. apoptosis) in cancer cells. Cisplatin resistance occurs when cancer cells develop an enhanced ability to reverse such damage by removing the cisplatin from DNA and repairing any damage done. The cisplatin-resistant cells upregulate expression of the excision repair cross-complementing (ERCC1) gene and protein. Some chemotherapies are alkylating agents, meaning they attach an alkyl group to DNA to stop it from being read. O6-methylguanine DNA methyltransferase (MGMT) is a DNA repair enzyme which removes alkyl groups from DNA. MGMT expression is upregulated in many cancer cells, which protects them from alkylating agents. Increased MGMT expression has been found in colon cancer, lung cancer, non-Hodgkin's lymphoma, breast cancer, gliomas, myeloma and pancreatic cancer. Acquired resistance: Apoptotic pathway defects TP53 is a tumor suppressor gene encoding the p53 protein, which responds to DNA damage either by DNA repair, cell cycle arrest, or apoptosis. Losing TP53 via gene deletion can allow cells to continuously replicate despite DNA damage. The tolerance of DNA damage can grant cancer cells a method of resistance to those drugs which normally induce apoptosis through DNA damage. Other genes involved in apoptotic pathway-related drug resistance include h-ras and bcl-2/bax. Oncogenic h-ras has been found to increase expression of ERCC1, resulting in enhanced DNA repair (see above). Inhibition of h-ras was found to increase cisplatin sensitivity in glioblastoma cells.
Upregulated expression of Bcl-2 in leukemic cells (non-Hodgkin's lymphoma) resulted in decreased levels of apoptosis in response to chemotherapeutic agents, as Bcl-2 is a pro-survival oncogene. Acquired resistance: Altered target molecules During targeted therapy, the target has oftentimes modified itself and decreased its expression to the point that therapy is no longer effective. One example of this is the loss of estrogen receptor (ER) and progesterone receptor (PR) upon anti-estrogen treatment of breast cancer. Tumors with loss of ER and PR no longer respond to tamoxifen or other anti-estrogen treatments, and while cancer cells remain somewhat responsive to estrogen synthesis inhibitors, they eventually become unresponsive to endocrine manipulation and no longer dependent on estrogen for growth. Another line of therapeutics used for treating breast cancer is targeting of kinases like human epidermal growth factor receptor 2 (HER2) from the EGFR family. Mutations often occur in the HER2 gene upon treatment with an inhibitor, with about 50% of patients with lung cancer found to have an EGFR-T790M gatekeeper mutation. Treatment of chronic myeloid leukemia (CML) involves imatinib, a tyrosine kinase inhibitor that targets the BCR/ABL fusion gene. In some people resistant to imatinib, the BCR/ABL gene is reactivated or amplified, or a single point mutation has occurred on the gene. These point mutations enhance autophosphorylation of the BCR-ABL protein, resulting in the stabilization of the ATP-binding site in its active form, which cannot be bound by imatinib. Topoisomerase is a lucrative target for cancer therapy due to its critical role as an enzyme in DNA replication, and many topoisomerase inhibitors have been made. Resistance can occur when topoisomerase levels are decreased, or when different isoforms of topoisomerase are differentially distributed within the cell. Mutant enzymes have also been reported in patient leukemic cells, as well as mutations in other cancers that confer resistance to topoisomerase inhibitors. Acquired resistance: Altered metabolism One of the mechanisms of antineoplastic resistance is over-expression of drug-metabolizing enzymes or carrier molecules. By increasing expression of metabolic enzymes, drugs are more rapidly converted to drug conjugates or inactive forms that can then be excreted. For example, increased expression of glutathione promotes drug resistance, as the nucleophilic properties of glutathione allow it to react with electrophilic cytotoxic agents, inactivating them. In some cases, decreased expression or loss of expression of drug-metabolising enzymes confers resistance, as the enzymes are needed to process a drug from an inactive form to an active form. Cytosine arabinoside, a commonly used chemotherapy for leukemia and lymphomas, is converted into cytosine arabinoside triphosphate by deoxycytidine kinase. Mutation of deoxycytidine kinase or loss of its expression results in resistance to cytosine arabinoside. This is a form of enzymatic deactivation. Growth factor expression levels can also promote resistance to antineoplastic therapies. In breast cancer, drug resistant cells were found to express high levels of IL-6, while sensitive cells did not express significant levels of the growth factor. IL-6 activates the CCAAT enhancer-binding protein transcription factors, which activate MDR1 gene expression (see Altered membrane transport above).
Genetic markers for drug sensitivity and resistance: Pharmacogenetics plays an increasingly important role in antineoplastic treatment. Rapid sequencing technologies can identify genetic markers for treatment sensitivity and potential resistance. Certain markers are more representative and more likely to be used clinically. When BRCA1 and BRCA2 are missing, as in 5 percent to 10 percent of all breast cancers, a stalled fork remains destabilized and its newly synthesized DNA is degraded. This genomic instability means the cancer cell is actually more sensitive to DNA-damaging chemotherapy drugs. Genetic approaches to overcome drug resistance: MDR proteins are encoded by known drug-resistance genes and are highly expressed in various cancers. Inhibition of the MDR genes could result in sensitization of cells to therapeutics and a decrease in antineoplastic resistance. Reversin 121 (R121) is a high-affinity peptide for MDR, and use of R121 as a treatment for pancreatic cancer cells results in increased chemosensitivity and decreased proliferation. Aberrant NF-κB expression is found in many cancers, and NF-κB has been found to be involved in resistance to platinum-based chemotherapies, such as cisplatin. NF-κB inhibition by genistein in various cancer cell lines (prostate, breast, lung and pancreas) showed increased growth inhibition and an increase in chemosensitivity, seen as an increase in apoptosis induced by therapeutic agents. However, targeting the NF-κB pathway can be difficult, as there can be many off-target and non-specific effects. Genetic approaches to overcome drug resistance: Expression of mutated TP53 causes defects in the apoptotic pathway, allowing cancerous cells to avoid death. Re-expression of the wild-type gene in cancer cells in vitro has been shown to inhibit cell proliferation, induce cell cycle arrest, and induce apoptosis. In ovarian cancer, the ATP7B gene encodes a copper efflux transporter, found to be upregulated in cisplatin-resistant cell lines and tumors. Development of antisense deoxynucleotides against ATP7B mRNA and treatment of an ovarian cancer cell line showed that inhibition of ATP7B increases sensitivity of the cells to cisplatin.
**Tetraethylammonium chloride** Tetraethylammonium chloride: Tetraethylammonium chloride (TEAC) is a quaternary ammonium compound with the chemical formula (C2H5)4N+Cl−, sometimes written as Et4N+Cl−. In appearance, it is a hygroscopic, colorless, crystalline solid. It has been used as the source of tetraethylammonium ions in pharmacological and physiological studies, but is also used in organic chemical synthesis. Preparation and structure: TEAC is produced by alkylation of triethylamine with ethyl chloride. TEAC exists as either of two stable hydrates, the monohydrate and tetrahydrate. The crystal structure of TEAC·H2O has been determined, as has that of the tetrahydrate, TEAC·4H2O. Details for the preparation of large, prismatic crystals of TEAC·H2O are given by Harmon and Gabriele, who carried out IR-spectroscopic studies on this and related compounds. These researchers have also pointed out that, although freshly-purified TEAC·H2O is free of triethylamine hydrochloride, small quantities of this compound form on heating of TEAC as the result of a Hofmann elimination: Cl− + H-CH2-CH2-N+Et3 → Cl-H + H2C=CH2 + Et3N Synthetic Applications: To a large extent, the synthetic applications of TEAC resemble those of tetraethylammonium bromide (TEAB) and tetraethylammonium iodide (TEAI), although one of the salts may be more efficacious than another in a particular reaction. For example, TEAC produces better yields than TEAB or TEAI as a co-catalyst in a reaction to prepare diarylureas from arylamines, nitroaromatics and carbon monoxide. In other examples, such as the following, TEAC is not as effective as TEAB or TEAI: 2-Hydroxyethylation (attachment of -CH2-CH2-OH) by ethylene carbonate of carboxylic acids and certain heterocycles bearing an acidic N-H. Synthetic Applications: Phase-transfer catalyst in geminal di-alkylation of fluorene, N,N-dialkylation of aniline and N-alkylation of carbazole using aqueous sodium hydroxide and alkyl halides. Biology: In common with tetraethylammonium bromide and tetraethylammonium iodide, TEAC has been used as a source of tetraethylammonium ions for numerous clinical and pharmacological studies, which are covered in more detail under the entry for Tetraethylammonium. Briefly, TEAC has been explored clinically for its ganglionic blocking properties, although it is now essentially obsolete as a drug, and it is still used in physiological research for its ability to block K+ channels in various tissues. Toxicity: The toxicity of TEAC is primarily due to the tetraethylammonium ion, which has been studied extensively. The acute toxicity of TEAC is comparable to that of tetraethylammonium bromide and tetraethylammonium iodide. These data are provided for comparative purposes; additional details may be found in the entry for Tetraethylammonium.
**Cheap talk** Cheap talk: In game theory, cheap talk is communication between players that does not directly affect the payoffs of the game. Providing and receiving information is free. This is in contrast to signaling, in which sending certain messages may be costly for the sender depending on the state of the world. The basic setting, established by Vincent Crawford and Joel Sobel, has given rise to a variety of variants. To give a formal definition, cheap talk is communication that is: costless to transmit and receive; non-binding (i.e. it does not limit strategic choices by either party); and unverifiable (i.e. it cannot be verified by a third party like a court). Therefore, an agent engaging in cheap talk could lie with impunity, but may choose in equilibrium not to do so. Applications: Game theory Cheap talk can, in general, be added to any game and has the potential to enhance the set of possible equilibrium outcomes. For example, one can add a round of cheap talk at the beginning of the Battle of the Sexes. Each player announces whether they intend to go to the football game or the opera. Because the Battle of the Sexes is a coordination game, this initial round of communication may enable the players to select among multiple equilibria, thereby achieving higher payoffs than in the uncoordinated case. The messages and strategies which yield this outcome are symmetric for each player. They are: 1) announce opera or football with even probability; 2) if a person announces opera (or football), then upon hearing this message the other person will say opera (or football) as well (Farrell and Rabin, 1996). If they both announce different options, then no coordination is achieved. In the case of only one player messaging, this could also give that player a first-mover advantage. Applications: It is not guaranteed, however, that cheap talk will have an effect on equilibrium payoffs. Another game, the Prisoner's Dilemma, is a game whose only equilibrium is in dominant strategies. Any pre-play cheap talk will be ignored and players will play their dominant strategies (Defect, Defect) regardless of the messages sent. Applications: Biological applications It has been commonly argued that cheap talk will have no effect on the underlying structure of the game. In biology authors have often argued that costly signalling best explains signalling between animals (see Handicap principle, Signalling theory). This general belief has been receiving some challenges (see work by Carl Bergstrom and Brian Skyrms 2002, 2004). In particular, several models using evolutionary game theory indicate that cheap talk can have effects on the evolutionary dynamics of particular games. Crawford and Sobel's definition: Setting In the basic form of the game, there are two players communicating, one sender S and one receiver R. Type Sender S gets knowledge of the state of the world or of his "type" t. Receiver R does not know t; he has only ex-ante beliefs about it, and relies on a message from S to possibly improve the accuracy of his beliefs. Message S decides to send message m. Message m may disclose full information, but it may also give limited, blurred information: it will typically say "The state of the world is between t1 and t2". It may give no information at all. The form of the message does not matter, as long as there is mutual understanding, common interpretation. It could be a general statement from a central bank's chairman, a political speech in any language, etc.
Whatever the form, it is eventually taken to mean "The state of the world is between t1 and t2". Action Receiver R receives message m. R updates his beliefs about the state of the world given new information that he might get, using Bayes's rule. R decides to take action a. This action impacts both his own utility and the sender's utility. Utility The decision of S regarding the content of m is based on maximizing his utility, given what he expects R to do. Utility is a way to quantify satisfaction or wishes. It can be financial profits, or non-financial satisfaction, for instance the extent to which the environment is protected. Crawford and Sobel's definition: → Quadratic utilities: The respective utilities of S and R can be specified by $U^S(a, t) = -(a - t - b)^2$ and $U^R(a, t) = -(a - t)^2$. The theory applies to more general forms of utility, but quadratic preferences make exposition easier. Thus S and R have different objectives if b ≠ 0. Parameter b is interpreted as conflict of interest between the two players, or alternatively as bias. $U^R$ is maximized when a = t, meaning that the receiver wants to take action that matches the state of the world, which he does not know in general. $U^S$ is maximized when a = t + b, meaning that S wants a slightly higher action to be taken, if b > 0. Since S does not control action, S must obtain the desired action by choosing what information to reveal. Each player's utility depends on the state of the world and on both players' decisions that eventually lead to action a. Crawford and Sobel's definition: Nash equilibrium We look for an equilibrium where each player decides optimally, assuming that the other player also decides optimally. Players are rational, although R has only limited information. Expectations get realized, and there is no incentive to deviate from this situation. Theorem Crawford and Sobel characterize possible Nash equilibria. There are typically multiple equilibria, but in a finite number. Separating, which means full information revelation, is not a Nash equilibrium. Crawford and Sobel's definition: Babbling, which means no information transmitted, is always an equilibrium outcome. When interests are aligned, then information is fully disclosed. When conflict of interest is very large, all information is kept hidden. These are extreme cases. The model allows for the more subtle case when interests are close, but different; in these cases optimal behavior leads to some, but not all, information being disclosed, leading to the various kinds of carefully worded sentences that we may observe. Crawford and Sobel's definition: More generally: There exists N* > 0 such that for all N with 1 ≤ N ≤ N*, there exists at least one equilibrium in which the set of induced actions has cardinality N; and moreover there is no equilibrium that induces more than N* actions. Messages While messages could ex-ante assume an infinite number of possible values $\mu(t)$ for the infinite number of possible states of the world t, actually they may take only a finite number of values $(m_1, m_2, \ldots, m_N)$. Thus an equilibrium may be characterized by a partition $(t_0(N), t_1(N), \ldots, t_N(N))$ of the set of types [0, 1], where $0 = t_0(N) < t_1(N) < \cdots < t_N(N) = 1$. This partition is shown on the top right segment of Figure 1. The $t_i(N)$'s are the bounds of intervals where the messages are constant: for $t_{i-1}(N) < t < t_i(N)$, $\mu(t) = m_i$. Actions Since actions are functions of messages, actions are also constant over these intervals: for $t_{i-1}(N) < t < t_i(N)$, $\alpha(t) = \alpha(m_i) = a_i$.
Crawford and Sobel's definition: The action function is now indirectly characterized by the fact that each value ai optimizes the return for the receiver R, knowing that t is between ti−1 and ti. Mathematically (assuming that t is uniformly distributed over [0, 1]),

$$a_i = \arg\max_a \int_{t_{i-1}}^{t_i} U^R(a,t)\,dt.$$

→ Quadratic utilities: Given that R knows that t is between ti−1 and ti, and in the special case of quadratic utility where R wants action a to be as close to t as possible, we can show that, quite intuitively, the optimal action is the middle of the interval:

$$a_i = \frac{t_{i-1}+t_i}{2}.$$

Indifference condition What happens at t = ti? The sender has to be indifferent between sending either message mi or mi+1. Crawford and Sobel's definition:

$$U^S(a_i, t_i) = U^S(a_{i+1}, t_i), \qquad 1 \le i \le N-1.$$

This gives information about N and the ti: with quadratic utilities it reduces to the difference equation $t_{i+1} - t_i = t_i - t_{i-1} + 4b$, so successive intervals lengthen by 4b. → Practically: We consider a partition of size N. Solving the difference equation with t0 = 0 and tN = 1 gives

$$t_i = \frac{i}{N} + 2b\,i(i-N).$$

Crawford and Sobel's definition: One can show that N must be small enough so that the numerator of $t_1 = \frac{1-2bN(N-1)}{N}$ is positive. This determines the maximum allowed value

$$N^* = \left\lceil -\frac{1}{2} + \frac{1}{2}\sqrt{1+\frac{2}{b}}\,\right\rceil,$$

where ⌈Z⌉ is the ceiling of Z, i.e. the smallest positive integer greater than or equal to Z. Example: We assume that b = 1/20. Then N* = 3. We now describe all the equilibria for N = 1, 2, or 3 (see Figure 2). Crawford and Sobel's definition: N = 1: This is the babbling equilibrium. t0 = 0, t1 = 1; a1 = 1/2 = 0.5. N = 2: t0 = 0, t1 = 2/5 = 0.4, t2 = 1; a1 = 1/5 = 0.2, a2 = 7/10 = 0.7. N = N* = 3: t0 = 0, t1 = 2/15, t2 = 7/15, t3 = 1; a1 = 1/15, a2 = 3/10 = 0.3, a3 = 11/15. With N = 1, we get the coarsest possible message, which does not give any information. So everything is red on the top left panel. With N = 3, the message is finer. However, it remains quite coarse compared to full revelation, which would be the 45° line, but which is not a Nash equilibrium. With a higher N, and hence a finer message, the blue area is larger. This implies higher utility. Disclosing more information benefits both parties.
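To make the construction concrete, here is a minimal Python sketch of the uniform-quadratic case, using the closed forms derived above (the partition $t_i = i/N + 2b\,i(i-N)$, interval midpoints as actions, and the bound N*). The function names are illustrative, not from any particular library.

```python
import math

def n_star(b: float) -> int:
    """Maximum number of induced actions: ceil(-1/2 + sqrt(1 + 2/b)/2)."""
    return math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2 / b))

def partition(b: float, N: int) -> list:
    """Boundaries t_0..t_N of an N-step equilibrium: t_i = i/N + 2b*i*(i-N)."""
    return [i / N + 2 * b * i * (i - N) for i in range(N + 1)]

b = 1 / 20
print("N* =", n_star(b))  # 3, as in the example above
for N in range(1, n_star(b) + 1):
    t = partition(b, N)
    # the receiver best-responds with the midpoint of each interval
    a = [(lo + hi) / 2 for lo, hi in zip(t, t[1:])]
    print(N, [round(x, 4) for x in t], [round(x, 4) for x in a])
```

Running it reproduces the example: for N = 2 the boundary is t1 = 0.4 with actions 0.2 and 0.7, and for N = 3 the boundaries are 2/15 and 7/15 with actions 1/15, 3/10, and 11/15.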
**1,2,4-Butanetriol trinitrate** 1,2,4-Butanetriol trinitrate: 1,2,4-Butanetriol trinitrate (BTTN), also called butanetriol trinitrate, is an important military propellant. It is a colorless to brown explosive liquid. BTTN is used as a propellant in virtually all single-stage missiles fielded by the United States, including the Hellfire. It is less volatile, less sensitive to shock, and more thermally stable than nitroglycerine, for which it is a promising replacement. As a propellant, BTTN is often used in a mixture with nitroglycerin; the mixture can be made by co-nitration of butanetriol and glycerol. BTTN is also used as a plasticizer in some nitrocellulose-based propellants. BTTN is manufactured by nitration of 1,2,4-butanetriol. Biotechnological manufacture of butanetriol is under intensive research.
**Diphenylsilanediol** Diphenylsilanediol: Diphenylsilanediol, Ph2Si(OH)2, is a silanol. The tetrahedral molecule forms hydrogen-bonded columns in the solid state. It can be prepared by hydrolysis of diphenyldichlorosilane, Ph2SiCl2. Diphenylsilanediol can act as an anticonvulsant, in a similar way to phenytoin. Although the compound is stable under normal conditions, the presence of basic impurities can accelerate the condensation of its silanol groups.
**Kip (artistic gymnastics)** Kip (artistic gymnastics): In artistic gymnastics, a kip is a technique that involves flexing or piking at the hips, and then rapidly extending the hip joints to impart momentum. It may be performed in some form on all apparatuses, but is most commonly performed on the women's uneven bars and on the men's rings, parallel bars, and horizontal bar. The kip is an important technique that is used as both a mount and an element or connecting technique in a bar routine. The kip allows the gymnast to swing below the bar to arrive in a front support on the bar. From the front support, the gymnast may then perform any number of skills. The glide kip is the most commonly used mount on the women's uneven bars. Kip (artistic gymnastics): The kip has been used since the early days of modern gymnastics. Currently, in the USA, the kip first appears in the women's USAG Level 4. Variations: Variations of the kip include the long hang kip, glide kip, drop kip, kip with stoop through, and kip with jump turn. Variations: The glide kip may be used as a mount or a connecting skill. The glide kip is performed by swinging in a piked hollow position, toes in front, under the bar to extend to a straight hollow position parallel to the floor. Once extended, the gymnast quickly pulls his or her feet towards the bar while simultaneously pushing down with straight arms to arrive in a straight hollow front support on the bar. Taller gymnasts may perform the glide kip with a straddle hollow rather than a piked hollow swing, joining the feet together at extension, before initiating the kipping action. Variations: The long hang kip utilizes the gymnast's ability to swing on the high bar to perform a kip. Unlike a glide kip, the long hang kip does not swing in a piked hollow position, nor does it swing all the way out to horizontal. A long hang kip begins with a straight hollow swing under the bar and performs the kipping action (the pulling of the feet to the bar while pushing with the arms) much earlier in the swing than does a glide kip. Variations: The drop kip is a kip often used in gymnastics conditioning, both to strengthen the muscles of those who are already able to perform a kip and to help new students learn one. This kip utilizes little swing; rather, it requires a great deal of strength on the part of the gymnast. The gymnast begins in a front support on the bar, thighs on the bar, chest in hollow. The gymnast then drops backwards with straight arms, "sliding" the bar down the legs to the feet. The shins or feet must stay close to the bar. After completing the small swing backwards, the gymnast begins the kipping action by pushing down with straight arms to arrive again in a hollow front support on the bar. Variations: The glide kip with stoop through is performed by executing a glide kip, but bringing the feet and straight legs between the bar and the arms to finish sitting on top of the bar. The gymnast typically will then use the propulsion of the kip to push off the low bar and catch the high bar. This skill may also be performed by straddling the legs over the bar rather than piking between the hands. Variations: The jump with 1/2 turn to kip and the jump with 1/1 turn to kip are kips performed with a jump half turn or jump full turn prior to catching the bar. When a jump half turn is performed, the gymnast does not swing but immediately kips from the air. When a jump full turn is performed, the gymnast completes the swing as usual before kipping.
This skill may be performed on the low or the high bar.
**Escape response** Escape response: Escape response, escape reaction, or escape behavior is a mechanism by which animals avoid potential predation. It consists of a rapid sequence of movements, or lack of movement, that positions the animal in such a way that allows it to hide, freeze, or flee from the supposed predator. Often, an animal's escape response is representative of an instinctual defensive mechanism, though there is evidence that these escape responses may be learned or influenced by experience. The classical escape response follows this generalized, conceptual timeline: threat detection, escape initiation, escape execution, and escape termination or conclusion. Threat detection alerts an animal to a potential predator or otherwise dangerous stimulus, which provokes escape initiation, through neural reflexes or more coordinated cognitive processes. Escape execution refers to the movement or series of movements that will hide the animal from the threat or will allow the animal to flee. Once the animal has effectively avoided the predator or threat, the escape response is terminated. Upon completion of the escape behavior or response, the animal may integrate the experience with its memory, allowing it to learn and adapt its escape response. Escape responses are anti-predator behaviour that can vary from species to species. The behaviors themselves differ depending upon the species, but may include camouflaging techniques, freezing, or some form of fleeing (jumping, flying, withdrawal, etc.). In fact, variation between individuals is linked to increased survival. In addition, it is not merely increased speed that contributes to the success of the escape response; other factors, including reaction time and the individual's context, can play a role. The individual escape response of a particular animal can vary based on an animal's previous experiences and its current state. Evolutionary importance: The ability to perform an effective escape maneuver directly affects the fitness of the animal, because the ability to evade predation enhances an animal's chance of survival. Those animals that learn to, or are simply able to, avoid predators have contributed to the wide variety of escape responses seen today. Animals that are able to adapt their responses in ways that differ from those typical of their species have displayed increased rates of survival. Because of this, it is common for the individual escape response of an animal to vary according to reaction time, environmental conditions, and/or past and present experience. Arjun et al. (2017) found that what matters is not necessarily the speed of the response itself, but the greater distance between the targeted individual and the predator when the response is executed. In addition, the escape response of an individual is directly related to the threat posed by the predator. Predators that pose the biggest risk to the population will evoke the greatest escape response. Therefore, it may be an adaptive trait selected for by natural selection. Evolutionary importance: Law & Blake (1996) argue that many morphological characteristics could contribute to an individual's efficient escape response, but the escape response has undoubtedly been molded by evolution. In their study, they compared more recent sticklebacks to their ancestral form, the Paxton Lake stickleback, and found that the performance of the ancestral form was significantly lower. Therefore, one may conclude that this response has been refined by evolution.
Neurobiology: How escape responses are initiated neurologically, and how the movements are coordinated, depends on the species. The behaviors alone vary widely, so, in a similar manner, the neurobiology of the response can be highly variable between species. 'Simple' escape responses are commonly reflex movements that will quickly move the animal away from the potential threat. These neural circuits operate quickly and effectively, rapidly taking in sensory stimuli and initiating the escape behavior through well-defined neuron systems. Complex escape responses often require a mixture of cognitive processes. This may stem from a difficult environment to escape from, or from the animal having multiple potential escape methods. Initially, the animal must recognize the threat of predation, but following the initial recognition the animal might have to quickly determine the best route of escape, based on prior experience. This means rapid integration of incoming information with prior knowledge, and then coordination of the motor movements deemed necessary. Complex escape responses generally require a more robust neural network. Researchers will often evoke an escape response to test the potency of hormones and/or medication and their relationship to stress. As such, the escape response is fundamental to anatomical and pharmacological research. Role of learning: Habituation A series of initially threatening encounters that do not lead to any true adverse outcomes for the animal can drive the development of habituation. Habituation is an adaptation strategy that refers to the diminishing response of an animal to a stimulus following repetitive exposures of the animal to that same stimulus. In other words, the animal learns to distinguish genuinely threatening situations and may choose not to go through with its escape response. This is a highly variable phenomenon, where the stimulus itself is highly specific, and the experience is highly context dependent. This suggests that there is no one mechanism by which a species will develop habituation to a stimulus; instead, habituation may arise from the integration of experiences. A number of cognitive processes may operate during one single threatening experience, but the levels at which these processes are integrated will determine how the individual animal will potentially respond next. Caenorhabditis elegans, a nematode worm, has been used as a model species for studies observing its characteristic "tap-withdrawal response". The tapping serves as the fear-provoking, mechanical stimulus which C. elegans worms will move away from. If the tapping stimulus continues without any direct effects on the worms, they will gradually stop responding to the stimulus. This response is modulated by a series of mechanosensory neurons (AVM, ALM, PVD, and PLM) which synapse with interneurons (AVD, AVA, AVB, and PVC) transmitting the signal to motor neurons that cause the back-and-forth movements. Habituation to the tapping reduces the activity of the initial mechanosensory neurons, seen as a decrease in calcium channel activity and neurotransmitter release. The primary force driving escape habituation is suspected to be energy conservation. If an animal learns that a certain threat will not actively cause harm to it, then the animal can choose to minimize its energy costs by not performing its escape response.
For example, zebra danios, also known as zebrafish, that are habituated to predators are slower to flee than those that were not habituated. However, habituation did not affect the fish's angle of escape from the predator. Role of learning: Learned helplessness If an animal cannot react via a startle or avoidance response, it will develop learned helplessness as a result of receiving or perceiving repeated threatening stimuli and believing the stimuli are unavoidable. The animal will submit and not react, even if the stimuli previously triggered instinctual responses or if the animal is provided an escape opportunity. In these situations, escape responses are not used because the animal has almost forgotten its innate response systems. Helplessness is learned through habituation, because the brain is programmed to believe control is not present. In essence, animals operate under the assumption that they have the free will to fight, flee, or freeze, as well as to engage in other behaviors. When escape responses fail, they develop helplessness. Role of learning: A common, theoretical example of learned helplessness is an elephant, trained by humans who condition the elephant to believe it cannot escape punishment. As a young elephant, it would be chained to a stake to keep it from leaving. As it grows, the elephant would have the ability to easily overpower the tiny stake. Development of learned helplessness keeps the elephant from doing so, believing that it is trapped and the effort is futile. Role of learning: In a more natural setting, learned helplessness would most often be displayed by animals that live in group settings. If food were scarce and one individual was always overpowered when it came time to get food, it would soon believe that no matter what it did, getting food would be impossible. It would have to find food on its own or submit to the idea that it will not eat. Startle response: The startle response is an unconscious response to sudden or threatening stimuli. In the wild, common examples would be sharp noises or quick movements. Because these stimuli are so harsh, they are associated with a negative effect. This reflex causes a change in body posture, emotional state, or a mental shift to prepare for a specific motor task. A common example would be cats: when startled, their arrector pili muscles contract, making the hair stand up and increasing their apparent size. Another example would be excessive blinking due to the contraction of the orbicularis oculi muscle when an object is rapidly moving toward an animal; this is often seen in humans. Startle response: Halichoerus grypus, or grey seals, respond to acoustic startle stimuli by fleeing from the noise. The acoustic startle reflex is only activated when the noise is over eighty decibels, which promotes stress and anxiety responses that encourage flight. Startle response: Flight zone Flight zone and flight distance are interchangeable terms and refer to the distance needed to keep an animal under the threshold that would trigger a startle response. A flight zone can be circumstantial, because a threat can vary in size (individually or in group number). Overall, this distance is a measure of an animal's willingness to take on risks. This differentiates a flight zone from the personal distance an animal prefers and from social distance (how close other species are willing to be). An applicable analogy would be a reactive dog.
When the flight zone is large, the dog will maintain an observant stance, but a startle response will not occur. As the threatening stimulus moves forward and shrinks the flight zone, the dog will exhibit behaviors that fall into a startle or avoidance response. Avoidance response: The avoidance response is a form of negative reinforcement which is learned through operant conditioning. This response is usually beneficial, as it reduces the risk of injury or death for animals; it is also an adaptive response that can change as the species evolves. Individuals are able to recognize certain species or environments that need to be avoided, which can allow them to increase the flight distance to ensure safety. Avoidance response: When scared, octopuses release ink to distract their predators long enough that they can burrow into a safe area. Another example of avoidance is the fast-start response in fish, in which rapid musculoskeletal control allows them to withdraw from the environment containing the threatening stimulus. It is believed that the underlying neural circuits have adapted over time to react more quickly to a stimulus. Interestingly, fish that keep to the same groups will be more reactive than those that do not. Examples: In birds Avian species also display unique escape responses. Birds are uniquely vulnerable to human interference in the form of aircraft, drones, cars, and other technology. There has been considerable interest in how these technologies affect the behaviors of terrestrial and aquatic birds. Examples: One study, Weston et al. (2020), observed how flight initiation changed according to the distance of a drone from the birds. It was found that as the drone approached, the tendency of birds to take flight to escape it increased dramatically. This tendency was positively affected by the altitude at which the birds were exposed to the drone. In another experiment, by Devault et al. (1989), brown-headed cowbirds (Molothrus ater) were exposed to a demonstration of traffic traveling at speeds between 60 and 360 km/h. When approached by a vehicle traveling at 120 km/h, the birds left themselves only 0.8 s to escape before a possible collision. This study showed that fast traffic speeds may not allow enough time for birds to initiate an escape response. Examples: In fish In fish and amphibians, the escape response appears to be elicited by Mauthner cells, two giant neurons located in rhombomere 4 of the hindbrain. Generally, when faced with a dangerous stimulus, fish will contract their axial muscles, resulting in a C-shaped contraction away from the stimulus. This response occurs in two separate stages: a muscle contraction that allows them to speed away from the stimulus (stage 1), and a sequential contralateral movement (stage 2). This escape is also known as a "fast-start response". The majority of fish respond to an external stimulus (pressure changes) within 5 to 15 milliseconds, while some will exhibit a slower response taking up to 80 milliseconds. While the escape response generally only propels the fish a small distance away, this distance is long enough to prevent predation. Many predators use water pressure to catch their prey, and this short distance prevents them from feeding on the fish via suction. Particularly in the case of fish, it has been hypothesized that the differences in escape response are due to the evolution of neural circuits over time.
This can be witnessed by observing the difference in the extent of stage 1 behaviour and the distinct muscle activity in stage 2 of the C-start or fast-start response. Larval zebrafish (Danio rerio) sense predators using their lateral line system. When larvae are positioned lateral to a predator, they will escape in a likewise lateral direction. According to game theory, zebrafish that are positioned lateral and ventral to the predator are more likely to survive than those adopting any alternative strategy. Finally, the faster (in cm/s) the predator is moving, the faster the fish will move downward to escape predation. Recent research in guppies has shown that familiarity can affect the reaction time involved in the escape response. Guppies that were placed in familiar groups were more likely to respond than guppies assigned to unfamiliar groups. Wolcott et al. (2017) suggest that familiar groups may lead to reduced inspection of, and aggression among, conspecifics. The theory of limited attention states that the brain has limited information-processing capacity, and the more tasks an individual is engaged in, the fewer resources it can provide to any one given task. As a result, fish in familiar groups have more attention that they can devote toward anti-predator behaviour. Examples: In insects When house flies (Musca domestica) encounter an aversive stimulus, they jump rapidly and fly away from the stimulus. Recent research suggests that the escape response in Musca domestica is controlled by a pair of compound eyes, rather than by the ocelli. When one of the compound eyes was covered, the minimum threshold to elicit an escape response increased. In short, the escape reaction of Musca domestica is evoked by the combination of both motion and light. Cockroaches are also well known for their escape response. When individuals sense a wind puff, they will turn and escape in the opposite direction. The sensory neurons in the paired caudal cerci (singular: cercus) at the rear of the animal send a message along the ventral nerve cord. Then, one of two responses is elicited: running (through the ventral giant interneurons) or flying/running (through the dorsal giant interneurons). Examples: In mammals Mammals can display a wide range of escape responses. Some of the most common escape responses include withdrawal reflexes, fleeing, and, in some instances where outright escape is too difficult, freezing behaviors. Examples: Higher-order mammals often display withdrawal reflexes. Exposure to danger, or a painful stimulus (in nociceptor-mediated loops), initiates a spinal reflex loop. Sensory receptors transmit the signal to the spine, where it is rapidly integrated by interneurons, and consequently an efferent signal is sent down motor neurons. The effect of the motor neurons is to contract the muscles necessary to pull the body, or body part, away from the stimulus. Some mammals, like squirrels and other rodents, have defensive neural networks present in the midbrain that allow for quick adaptation of their defense strategy. If these animals are caught in an area without refuge, they can quickly change their strategy from fleeing to freezing. Freezing behavior allows the animal to avoid detection by the predator. In one study, Stankowich & Coss (2007) studied the flight initiation distance of Columbian black-tailed deer. According to the authors, the flight initiation distance is the distance between prey and predator when the prey attempts an escape response.
They found that the angle, distance, and speed at which the deer escaped were related to the distance between the deer and its predator, a human male in this experiment. Examples: Other examples Squids have developed a multitude of anti-predator escape responses, including jet-driven escape, postural displays, inking, and camouflage. Inking and jet-driven escape are arguably the most salient responses, in which the individual squirts ink at the predator as it speeds away. These blobs of ink can vary in size and shape; larger blobs can distract the predator while smaller blobs can provide a cover under which the squid can disappear. Finally, the released ink also contains hormones such as L-dopa and dopamine that can warn other conspecifics of danger while blocking olfactory receptors in the targeted predator. Examples: Cuttlefish (Sepia officinalis) are also well known for their escape responses. Unlike squids, which may engage in more salient escape responses, the cuttlefish has few defences, so it relies on less conspicuous means: jet-driven escape and freezing behaviour. Indeed, it appears that the majority of cuttlefish use a freezing escape response when avoiding predation. When a cuttlefish freezes, it minimizes the voltage of its bioelectric field, making it less detectable to its predators, mainly sharks.
**Voting bloc** Voting bloc: A voting bloc is a group of voters that are strongly motivated by a specific common concern or group of concerns to the point that such specific concerns tend to dominate their voting patterns, causing them to vote together in elections. For example, Beliefnet identifies 12 main religious blocs in American politics, such as the "Religious Right", whose concerns are dominated by religious and sociocultural issues; and American Jews, who are identified as a "strong Democratic group" with liberal views on economics and social issues. The result is that each of these groups votes en bloc in elections.
**Nest algebra** Nest algebra: In functional analysis, a branch of mathematics, nest algebras are a class of operator algebras that generalise the upper-triangular matrix algebras to a Hilbert space context. They were introduced by Ringrose (1965) and have many interesting properties. They are non-selfadjoint algebras, are closed in the weak operator topology, and are reflexive. Nest algebra: Nest algebras are among the simplest examples of commutative subspace lattice algebras. Indeed, they are formally defined as the algebra of bounded operators leaving invariant each subspace contained in a subspace nest, that is, a set of subspaces which is totally ordered by inclusion and is also a complete lattice. Since the orthogonal projections corresponding to the subspaces in a nest commute, nests are commutative subspace lattices. Nest algebra: By way of an example, let us apply this definition to recover the finite-dimensional upper-triangular matrices. Let us work in the n-dimensional complex vector space Cn, and let e1, e2, …, en be the standard basis. For j = 0, 1, 2, …, n, let Sj be the j-dimensional subspace of Cn spanned by the first j basis vectors e1, …, ej. Let N = {(0) = S0, S1, S2, …, Sn−1, Sn = Cn}; then N is a subspace nest, and the corresponding nest algebra of n × n complex matrices M leaving each subspace in N invariant (that is, satisfying MS ⊆ S for each S in N) is precisely the set of upper-triangular matrices. Nest algebra: If we omit one or more of the subspaces Sj from N, then the corresponding nest algebra consists of block upper-triangular matrices. Properties: Nest algebras are hyperreflexive with distance constant 1.
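The finite-dimensional example above can be checked numerically. The following is a small sketch (assuming NumPy; the variable names are illustrative) verifying that an upper-triangular matrix leaves every subspace of the standard nest invariant:

```python
import numpy as np

n = 4
# A random upper-triangular matrix M, i.e. an element of the nest algebra
# of N = {S_0 ⊂ S_1 ⊂ ... ⊂ S_n}, where S_j = span(e_1, ..., e_j).
M = np.triu(np.random.rand(n, n))

for j in range(n + 1):
    basis = np.eye(n)[:, :j]        # columns span S_j
    image = M @ basis               # M applied to a basis of S_j
    # invariance M S_j ⊆ S_j means the image has no component below row j
    assert np.allclose(image[j:, :], 0)
print("M leaves every subspace of the nest invariant")
```

Omitting one of the Sj from the nest enlarges the algebra to block upper-triangular matrices, as noted above.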
**Table of Newtonian series** Table of Newtonian series: In mathematics, a Newtonian series, named after Isaac Newton, is a sum over a sequence an written in the form

$$f(s)=\sum_{n=0}^{\infty}(-1)^{n}\binom{s}{n}a_{n}=\sum_{n=0}^{\infty}\frac{(-1)^{n}(s)_{n}}{n!}a_{n}$$

where $\binom{s}{n}$ is the binomial coefficient and $(s)_n$ is the falling factorial. Newtonian series often appear in relations of the form seen in umbral calculus. List: The generalized binomial theorem gives

$$(1+z)^{s}=\sum_{n=0}^{\infty}\binom{s}{n}z^{n}=1+\binom{s}{1}z+\binom{s}{2}z^{2}+\cdots.$$

A proof of this identity can be obtained by showing that it satisfies the differential equation

$$(1+z)\frac{d(1+z)^{s}}{dz}=s(1+z)^{s}.$$

The digamma function:

$$\psi(s+1)=-\gamma-\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n}\binom{s}{n}.$$

The Stirling numbers of the second kind are given by the finite sum

$$\left\{{n\atop k}\right\}=\frac{1}{k!}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}j^{n}.$$

This formula is a special case of the kth forward difference of the monomial xn evaluated at x = 0:

$$\Delta^{k}x^{n}=\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}(x+j)^{n}.$$

A related identity forms the basis of the Nörlund–Rice integral:

$$\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{n-k}}{s-k}=\frac{n!}{s(s-1)(s-2)\cdots(s-n)}=\frac{\Gamma(n+1)\Gamma(s-n)}{\Gamma(s+1)}=B(n+1,s-n),\qquad s\notin\{0,\dots,n\}$$

where Γ(x) is the Gamma function and B(x, y) is the Beta function. List: The trigonometric functions have umbral identities:

$$\sum_{n=0}^{\infty}(-1)^{n}\binom{s}{2n}=2^{s/2}\cos\frac{\pi s}{4}$$

and

$$\sum_{n=0}^{\infty}(-1)^{n}\binom{s}{2n+1}=2^{s/2}\sin\frac{\pi s}{4}.$$

The umbral nature of these identities is a bit more clear by writing them in terms of the falling factorial $(s)_n$. The first few terms of the sin series are

$$s-\frac{(s)_{3}}{3!}+\frac{(s)_{5}}{5!}-\frac{(s)_{7}}{7!}+\cdots$$

which can be recognized as resembling the Taylor series for sin x, with $(s)_n$ standing in the place of xn. List: In analytic number theory it is of interest to sum

$$\sum_{k=0}^{\infty}B_{k}z^{k},$$

where B are the Bernoulli numbers. Employing the generating function, its Borel sum can be evaluated as

$$\sum_{k=0}^{\infty}B_{k}z^{k}=\int_{0}^{\infty}e^{-t}\frac{tz}{e^{tz}-1}\,dt=\sum_{k=1}^{\infty}\frac{z}{(kz+1)^{2}}.$$

The general relation gives the Newton series

$$\sum_{k=0}^{\infty}\frac{B_{k}(x)}{z^{k}}\frac{\binom{1-s}{k}}{s-1}=z^{s-1}\zeta(s,x+z),$$

where ζ is the Hurwitz zeta function and Bk(x) the Bernoulli polynomial. The series does not converge; the identity holds formally. Another identity is

$$\frac{1}{\Gamma(x)}=\sum_{k=0}^{\infty}\binom{x-a}{k}\sum_{j=0}^{k}\frac{(-1)^{k-j}}{\Gamma(a+j)}\binom{k}{j},$$

which converges for x > a. This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e. is convergent):

$$f(x)=\sum_{k=0}^{\infty}\binom{\frac{x-a}{h}}{k}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}f(a+jh).$$
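The forward-difference machinery behind these identities is easy to exercise numerically. The following sketch (plain Python; the helper names are illustrative) computes the kth forward difference and uses it both to evaluate a terminating Newton series and to recover a Stirling number of the second kind via the finite sum above:

```python
from math import comb, factorial

def forward_diff(f, k, x=0, h=1):
    """k-th forward difference: sum_j (-1)^(k-j) C(k,j) f(x + j*h)."""
    return sum((-1) ** (k - j) * comb(k, j) * f(x + j * h) for j in range(k + 1))

def newton_series(f, x, terms, a=0, h=1):
    """Partial sum of the Newton series of f at x with nodes a, a+h, ..."""
    total = 0.0
    for k in range(terms):
        c, t = 1.0, (x - a) / h
        for i in range(k):          # generalized binomial coefficient C(t, k)
            c *= (t - i) / (i + 1)
        total += c * forward_diff(f, k, a, h)
    return total

p = lambda t: t**3 - 2 * t + 1
print(newton_series(p, 2.5, terms=5), p(2.5))   # both 11.625: exact for polynomials

# Stirling number of the second kind {4, 2} from the k-th difference of t^n at 0:
print(forward_diff(lambda t: t**4, 2) // factorial(2))   # 7
```

For a polynomial of degree d the series terminates after d + 1 terms, so the partial sum is exact at every real x.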
**Atomic Age (design)** Atomic Age (design): Atomic Age in design refers to the period roughly corresponding to 1940–1963, when concerns about nuclear war dominated Western society during the Cold War. Architecture, industrial design, commercial design (including advertising), interior design, and fine arts were all influenced by the themes of atomic science, as well as the Space Age, which coincided with that period. Atomic Age design became popular and instantly recognizable, with a use of atomic motifs and space age symbols. Vital forms: Abstract organic forms were identified as a core motif in the 2001 exhibition of Atomic Age design at the Brooklyn Museum of Art, titled "Vital forms: American art and design in the atomic age, 1940–1960". Atomic power was a paradox during the era. It held great promise of technological solutions for the problems facing an increasingly complex world; at the same time, people were fearful of a nuclear armageddon, after the use of atomic weapons at the end of World War II. People were ever-aware of the potential good, and lurking menace, in technology. Science became more visible in the mainstream culture through Atomic Age design. Vital forms: Atomic particles themselves were reproduced in visual design, in areas ranging from architecture to barkcloth patterns. The geometric atomic patterns that were produced in textiles, industrial materials, melamine counter tops, dishware and wallpaper, and many other items, are emblematic of Atomic Age design. The Space Age interests of the public also began showing up in Atomic Age designs, with star and galaxy motifs appearing with the atomic graphics. Vital forms: Free-form biomorphic shapes also appear as a recurring theme in Atomic Age design. British designers at the Council of Industrial Design (CoID) produced fabrics in the early 1950s that showed "skeletal plant forms, drawn in a delicate, spidery graphic form", reflecting x-ray technology that was becoming more widespread and familiar in pop culture. These botanic designs influenced later Atomic Age patterns that included repeating organic shapes similar to cells and organisms viewed through a microscope. There are similarities between many Atomic Age designs and the mid-century modern trend of the same time. Elements of Atomic Age and Space Age design were dominant in the Googie design movement in commercial buildings in the United States. Some streamlined industrial designs also echoed the influence of futurism that had been seen much earlier in Art Deco design. Space Age design: Whereas Atomic Age motifs and structures leaned towards design fields such as architecture and industrial design, Space Age design spread into a broader range of consumer products, including furniture, clothing fashion, and even animation styles, as with the popular television show The Jetsons. Beginning with the dawn of the Space Age (commonly attributed to the launch of Sputnik in October 1957), Space Age design captured the optimism and faith in technology that was felt by much of society during the 1950s and 1960s, together with the design possibilities afforded by newly accessible materials like fibreglass that had become much more widely available since the Second World War. Space Age design also had a more vernacular character, appearing in accessible forms that quickly became familiar to mainstream consumers.
Since the end of the 1970s, Space Age design has become more closely associated with kitsch and with Googie architecture for popular commercial buildings such as diners, bowling alleys, and shops, though the finest examples of its kind have remained desirable and highly collectible. "Space Age design is closely tied to the pop movement [...] the fusion of popular culture, art, design, and fashion". Space Age design: Fashion Two of the most well-known fashion designers to use Space Age themes in their designs were Pierre Cardin and Paco Rabanne. Pierre Cardin established the futuristic trend of using synthetic and industrial materials in fashion, with "forward thinking" innovations in his early 1960s work. Cardin "popularized the use of everyday materials for fashion items, like vinyl and metal rings for dresses, carpentry nails for brooches, and common decorative effects such as geometric cut-outs, appliqués, large pockets, helmets and oversized buttons". In 1964, Cardin launched his "space age" line, and André Courrèges showed his "Moon Girl" collection, introducing the white go-go boot style and other icons of the 1960s. The Japanese designer Issey Miyake, from Hiroshima, worked in Paris and New York from 1964 to 1970, and used many atomic age forms and technologically produced materials in his work. In 1970 he moved to Tokyo to continue these innovations. Miyake cites his first encounter with design as being two bridges in his hometown, Hiroshima, at the hypocenter of the atomic bombing in WWII. Space Age design: Vernacular architecture The dingbat apartment house, ubiquitous in the Los Angeles, California area, was built from 1945 through the 1960s, and fused a purist style with Googie influence. The architect Francis Ventre coined the term "dingbat" for these quickly built, simple stucco-and-frame structures. These structures often had a single exterior ornament in the shape of a starburst, boomerang, or pattern of rectangles. Space Age design: Architecture The Chemosphere house, designed by John Lautner in 1960, has become an icon of the atomic age home. The octagonal house is cantilevered on a steep slope in the Hollywood Hills, California. At the time, Encyclopædia Britannica cited it as the "most modern home built in the world." Designers: Some of the leading designers who employed the Atomic Age style in their works include Charles Eames, Ray Eames, Pierre Koenig, Virgil Exner, Richard Neutra, Eero Saarinen, Frank Lloyd Wright, and Eero Aarnio.
**Direct repeat** Direct repeat: Direct repeats are a type of genetic sequence that consists of two or more repeats of a specific sequence. In other words, direct repeats are nucleotide sequences present in multiple copies in the genome. Generally, a direct repeat occurs when a sequence is repeated with the same pattern downstream. There is no inversion and no reverse complement associated with a direct repeat. It may or may not have intervening nucleotides. The nucleotide sequence written in bold characters signifies the repeated sequence.

5´ **TTACG**nnnnnn**TTACG** 3´
3´ **AATGC**nnnnnn**AATGC** 5´

Linguistically, a typical direct repeat is comparable to saying "bye-bye". Types: There are several types of repeated sequences: Interspersed (or dispersed) DNA repeats (interspersed repetitive sequences) are copies of transposable elements interspersed throughout the genome. Flanking (or terminal) repeats (terminal repeat sequences) are sequences that are repeated on both ends of a sequence, for example, the long terminal repeats (LTRs) on retroviruses. Direct terminal repeats are in the same direction and inverted terminal repeats are opposite to each other in direction. Tandem repeats (tandem repeat sequences) are repeated copies which lie adjacent to each other. These can also be direct or inverted repeats. The ribosomal RNA and transfer RNA genes belong to the class of middle repetitive DNA. Types: Microsatellite DNA A tract of repetitive DNA in which a motif of a few base pairs is tandemly repeated numerous times (e.g. 5 to 50 times) is referred to as microsatellite DNA. Thus direct repeat tandem sequences are a form of microsatellite DNA. The process of DNA mismatch repair plays a prominent role in the formation of direct trinucleotide repeat expansions. Such repeat expansions underlie several neurological and developmental disorders in humans.
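Because a direct repeat is simply the same substring recurring downstream in the same orientation, it can be located with a straightforward scan. Here is a minimal Python sketch (the function name and parameters are illustrative, not a standard bioinformatics API):

```python
def find_direct_repeats(seq: str, unit_len: int, max_gap: int):
    """Report (start1, start2, unit) where a unit of length unit_len
    recurs downstream on the same strand -- no inversion and no
    reverse complement -- within at most max_gap intervening bases."""
    hits = []
    for i in range(len(seq) - 2 * unit_len + 1):
        unit = seq[i : i + unit_len]
        window = seq[i + unit_len : i + 2 * unit_len + max_gap]
        j = window.find(unit)
        if j != -1:
            hits.append((i, i + unit_len + j, unit))
    return hits

# The example above: TTACG ... TTACG with six intervening nucleotides.
print(find_direct_repeats("TTACGAACTGGTTACG", unit_len=5, max_gap=6))
# [(0, 11, 'TTACG')]
```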
**Covalent bond** Covalent bond: A covalent bond is a chemical bond that involves the sharing of electrons to form electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs. The stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full valence shell, corresponding to a stable electronic configuration. In organic chemistry, covalent bonding is much more common than ionic bonding. Covalent bond: Covalent bonding also includes many kinds of interactions, including σ-bonding, π-bonding, metal-to-metal bonding, agostic interactions, bent bonds, three-center two-electron bonds and three-center four-electron bonds. The term covalent bond dates from 1939. The prefix co- means jointly, associated in action, partnered to a lesser degree, etc.; thus a "co-valent bond", in essence, means that the atoms share "valence", such as is discussed in valence bond theory. Covalent bond: In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same elements, only that they be of comparable electronegativity. Covalent bonding that entails the sharing of electrons over more than two atoms is said to be delocalized. History: The term covalence in regard to bonding was first used in 1919 by Irving Langmuir in a Journal of the American Chemical Society article entitled "The Arrangement of Electrons in Atoms and Molecules". Langmuir wrote that "we shall denote by the term covalence the number of pairs of electrons that a given atom shares with its neighbors." The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms. He introduced the Lewis notation or electron dot notation or Lewis dot structure, in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond-forming electron pairs represented as solid lines. Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In the diagram of methane shown here, the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) – its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the n = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the n = 1 shell, which can hold only two. While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927.
Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms. Types of covalent bonds: Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds. Covalent bonds are also affected by the electronegativity of the connected atoms, which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl. However, polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule. Covalent structures: There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but generally, there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO2, CO2, and CH4. In molecular structures, there are weak forces of attraction. Such covalent substances are low-boiling-temperature liquids (such as ethanol), and low-melting-temperature solids (such as iodine and solid CO2). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3-dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures. One- and three-electron bonds: Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the dihydrogen cation, H2+. One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron Li2+ than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects. The simplest example of three-electron bonding can be found in the helium dimer cation, He2+. It is considered a "half bond" because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO.
The oxygen molecule, O2, can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2. Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three-electron bonds. One- and three-electron bonds: Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities. Resonance: There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule and its resulting experimentally-determined properties, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different Lewis structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example, with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N–O interaction is (2 + 1 + 1)/3 = 4/3. Resonance: Aromaticity In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fits the formula 4n + 2 (where n is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons (n = 1, 4n + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene. In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behavior of aromatic ring bonds, which otherwise are equivalent. Resonance: Hypervalence Certain molecules such as xenon difluoride and sulfur hexafluoride have higher coordination numbers than would be possible due to strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model, which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and resonance of sigma bonds in valence bond theory. Resonance: Electron deficiency In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in boron hydrides such as diborane (B2H6), which are often described as electron deficient because there are not enough valence electrons to form localized (2-centre 2-electron) bonds joining all the atoms. However, the more modern description using 3c–2e bonds does provide enough bonding orbitals to connect all the atoms, so that the molecules can instead be classified as electron-precise. Resonance: Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (the nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated.
Quantum mechanical description: After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. A more recent quantum description is given in terms of atomic contributions to the electronic density of states. Quantum mechanical description: Comparison of VB and MO theories The two theories represent two ways to build up the electron configuration of the molecule. For valence bond theory, the atomic hybrid orbitals are filled with electrons first to produce a fully bonded valence configuration, followed by performing a linear combination of contributing structures (resonance) if there are several of them. In contrast, for molecular orbital theory a linear combination of atomic orbitals is performed first, followed by filling of the resulting molecular orbitals with electrons. The two approaches are regarded as complementary, and each provides its own insights into the problem of chemical bonding. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. As molecular orbital theory builds the molecular wavefunction out of delocalized orbitals, it is more suited for the calculation of ionization energies and the understanding of spectral absorption bands. At the qualitative level, both theories contain incorrect predictions. Simple (Heitler–London) valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple (Hartree–Fock) molecular orbital theory incorrectly predicts dissociation into a mixture of atoms and ions. On the other hand, simple molecular orbital theory correctly predicts Hückel's rule of aromaticity, while simple valence bond theory incorrectly predicts that cyclobutadiene has larger resonance energy than benzene. Although the wavefunctions generated by both theories at the qualitative level do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction. This is done by combining the valence bond covalent function with the functions describing all possible ionic structures or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach overestimates the weight of the ionic structures while the simple valence bond approach neglects them. This can also be described as saying that the simple molecular orbital approach neglects electron correlation while the simple valence bond approach overestimates it. Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. Molecular orbitals are orthogonal, which significantly increases the feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals. Quantum mechanical description: Covalency from atomic contribution to the electronic density of states In COOP, COHP and BCOOP, evaluation of bond covalency is dependent on the basis set. To overcome this issue, an alternative formulation of the bond covalency can be provided in this way.
Quantum mechanical description: The center mass $cm(n,l,m_l,m_s)$ of an atomic orbital $|n,l,m_l,m_s\rangle$, with quantum numbers n, l, ml, ms, for atom A is defined as

$$cm^{A}(n,l,m_{l},m_{s})=\frac{\int_{E_{0}}^{E_{1}}E\,g_{|n,l,m_{l},m_{s}\rangle}^{A}(E)\,dE}{\int_{E_{0}}^{E_{1}}g_{|n,l,m_{l},m_{s}\rangle}^{A}(E)\,dE}$$

where $g_{|n,l,m_{l},m_{s}\rangle}^{A}(E)$ is the contribution of the atomic orbital $|n,l,m_l,m_s\rangle$ of the atom A to the total electronic density of states g(E) of the solid,

$$g(E)=\sum_{A}\sum_{n,l}\sum_{m_{l},m_{s}}g_{|n,l,m_{l},m_{s}\rangle}^{A}(E),$$

where the outer sum runs over all atoms A of the unit cell. The energy window [E0, E1] is chosen in such a way that it encompasses all of the relevant bands participating in the bond. If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along the considered bond. Quantum mechanical description: The relative position $C_{n_{A}l_{A},n_{B}l_{B}}$ of the center mass of $|n_A,l_A\rangle$ levels of atom A with respect to the center mass of $|n_B,l_B\rangle$ levels of atom B is given as

$$C_{n_{A}l_{A},n_{B}l_{B}}=-\left|cm^{A}(n_{A},l_{A})-cm^{B}(n_{B},l_{B})\right|$$

where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is

$$C_{A,B}=-\left|cm^{A}-cm^{B}\right|$$

where, for simplicity, we may omit the dependence on the principal quantum number n in the notation referring to $C_{n_{A}l_{A},n_{B}l_{B}}$. Quantum mechanical description: In this formalism, the greater the value of $C_{A,B}$, the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent A−B bond. The quantity $C_{A,B}$ is denoted as the covalency of the A−B bond, which is specified in the same units as the energy E. Analogous effect in nuclear systems: An analogous effect to covalent binding is believed to occur in some nuclear systems, with the difference that the shared fermions are quarks rather than electrons. High energy proton-proton scattering cross-sections indicate that quark interchange of either u or d quarks is the dominant process of the nuclear force at short distance. In particular, it dominates over the Yukawa interaction, where a meson is exchanged. Therefore, covalent binding by quark interchange is expected to be the dominating mechanism of nuclear binding at small distance when the bound hadrons have covalence quarks in common.
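As a numerical illustration of the center-of-mass definition above, the sketch below (assuming NumPy; the Gaussian "bands" are synthetic stand-ins, not output of any real electronic-structure code) computes cm for two orbital-projected densities of states over a chosen energy window and forms the covalency C_A,B:

```python
import numpy as np

E = np.linspace(-10.0, 5.0, 2001)            # uniform energy grid (eV)
g_A = np.exp(-(E + 4.0) ** 2)                # synthetic projected DOS, atom A
g_B = np.exp(-(E + 3.2) ** 2)                # synthetic projected DOS, atom B

def center_of_mass(g, E, E0, E1):
    """cm = ∫ E g(E) dE / ∫ g(E) dE over [E0, E1]; on a uniform grid
    the dE factors cancel in the ratio, so plain sums suffice."""
    m = (E >= E0) & (E <= E1)
    return np.sum(E[m] * g[m]) / np.sum(g[m])

cm_A = center_of_mass(g_A, E, -8.0, 0.0)
cm_B = center_of_mass(g_B, E, -8.0, 0.0)
C_AB = -abs(cm_A - cm_B)   # closer to zero => better band overlap => more covalent
print(cm_A, cm_B, C_AB)
```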
**Spatial network analysis software** Spatial network analysis software: Spatial network analysis software packages are analytic software used to prepare graph-based analysis of spatial networks. They stem from research fields in transportation, architecture, and urban planning. The earliest examples of such software include the work of Garrison (1962), Kansky (1963), Levin (1964), Harary (1969), Rittel (1967), Tabor (1970) and others in the 1960s and 70s. Specific packages are tailored to domain-specific needs, including TransCAD for transportation, GIS for planning and geography, and Axman for space syntax researchers. Packages: Many packages are available. Many were developed in academia and are freely available, or free for academic research. In historical order:
Axman – The (near) original, developed by Nick Sheep Dalton of UCL to perform axial line analysis on computers running Mac OS; currently used in more than 50 countries. This spawned offshoots such as Pesh (for the analysis of convex space networks) and SpaceBox (for the analysis of 'all-line' axial maps).
Spatialist – Developed at Georgia Institute of Technology to implement theoretical innovations principally introduced by John Peponis. This software plugs into the MicroStation CAD package to analyse networks of automatically generated 'e-spaces' and 's-spaces'.
Axwoman 1 – Written by Bin Jiang while at UCL. It is a tool to perform axial analysis as a plug-in to ESRI products.
Axwoman 6.2+ – Evolved from Axwoman 1.0 and research by Bin Jiang and his team. Axwoman 6.2 is a free plug-in to ArcMap 10, combined with AxialGen in one installer. Featured functionality includes automatically generating natural streets and axial lines from OpenStreetMap data. The website also contains tutorials.
Depthmap – Developed by Alasdair Turner of UCL. This software initially generated isovists and performed visibility graph analysis of building systems on computers running Windows. It evolved to include automatic generation of axial line networks and analysis of axial line networks and road segment line networks at anything up to the level of the US or Europe.
OmniVista – Developed by Nick Sheep Dalton and Ruth Conroy Dalton. It performs a range of isovist measures on Mac OS computers.
Fathom – Commercial implementation of visibility graph analysis written by Intelligent Space Partnership.
Mindwalk – Developed by Lucas Figueiredo. This software performs spatial analysis over standard axial maps and new continuity maps. It is written in Java and runs on several platforms. It is also known as xSpace. Mindwalk has been used as a research and teaching tool since 2002. It is distributed worldwide for academic and non-commercial purposes.
Isovist Analyst – Written by Sanjay Rana while at UCL. This program creates isovists from building plans as a plug-in to ESRI products.
Ajanachara – Open source software developed by Gerald Franz at the Max Planck Institute for Biological Cybernetics. It performs visibility graph analysis of 3D Studio Max and VRML models.
Webmap – Developed by N. S. Dalton at UCL. The software is free to use (although it requires registration). It allows users to analyse axial maps through a web browser interface.
Confeego – Developed by Space Syntax Limited, but available free for academic use. It plugs directly into MapInfo Professional to analyse line axial networks.
AJAX – Written by Mike Batty of UCL.
It performs both traditional axial network analysis (Batty: primal analysis) and point-based visibility analysis introduced by Bin Jiang (Batty: dual analysis). In a recent paper, Batty showed the mathematical relationship between the two analyses. OverView – Plug-in to AutoCAD by Christian Derix for Aedas Architects in collaboration with the Center for Evolutionary Computing in Architecture (CECA). It allows architects to do quick visual integration mapping via isovist analysis, and can analyse non-planar environments to take volumes and hilly sites into account. AXess – Written by Jennifer Brisbane at the City University of New York. It offers a context menu tool for ArcGIS 9.x that calculates connectivity, control, mean depth, global integration, and local integration for all nodes in an axial line layer. Webmap-At-Home – Written by Nick Sheep Dalton at the Open University. It is a Java implementation of the original Axman program with extra features. It is a platform-neutral full application capable of reading DXF files and the original Axman binary format. AxialGen – Written by Bin Jiang and Xintao Liu at the University of Gävle, Sweden. It is a plugin to ArcGIS 9.2 that automatically generates axial lines for a complex polygon with holes. Layout-iQ – Used in healthcare, manufacturing, banking, retail, and office space. It evaluates the frequency of flow in a workspace and measures the total minimum travel distance for resources to navigate the workspace. The software integrates CAD drawings with a diagram of flow between points. Packages: Urban Network Analysis Toolbox for ArcGIS – A free, open-source package developed by the City Form Lab. It can be used to compute five types of graph centrality measures on spatial networks: Reach; Gravity; Betweenness; Closeness; and Straightness (a minimal code illustration of such centrality measures appears after this list of packages). The tools incorporate three important features that make them particularly suited for spatial analysis on urban street networks. First, they can account for both geometry and topology in input networks, using either metric distance (e.g. meters) or topological distance (e.g. turns) as impedance factors. Second, unlike previous software tools that operate with two network elements (nodes and edges), UNA tools include a third network element – buildings – which are used as the spatial units of analysis for all measures. Two neighboring buildings on the same street segment can therefore obtain different accessibility results. Third, UNA tools optionally allow buildings to be weighted according to their particular characteristics – e.g., volume or population. More important buildings can be specified to have a stronger effect on the analysis outcomes, yielding more accurate and reliable results. The toolbox is built for easy scaling – it is equally suited for small-scale, detailed network analysis of dense urban areas and for sparser large-scale regional networks. The toolbox requires ArcGIS 10 software with an ArcGIS Network Analyst Extension. Packages: Urban Network Analysis Plugin for Rhinoceros3D – Free for both academic and commercial use. It was developed by the City Form Lab. The Rhino UNA Toolbox can be used to compute five types of graph centrality measures on spatial networks: Reach; Gravity; Betweenness; Closeness; and Straightness. The toolbox includes other spatial analysis tools, such as Closest Facility, Alternative Routes finding, pedestrian flow modeling along shortest or redundant paths, facility patronage estimation, spatial distribution of origin weights along routes, etc.
The tool allows users to specify which origins and destinations to use in the analysis, and to weight the analysis with specific attributes of each origin and destination. The toolbox was developed to make spatial network analysis tools available to architects, designers and planners who do not have access to GIS and typically work on designs in Rhino. Having UNA metrics in Rhino allows one to analyze how a specific spatial network performs, and to incorporate the analysis into a fast and iterative design process, where networks can be designed, evaluated and redesigned in seamless cycles. The UNA Rhino toolbox is significantly faster than its GIS counterpart, which has been available as a plugin for ArcGIS since 2012. Users can rapidly create and edit networks from any Rhino curve objects, making network design and redesign simple and intuitive. The analytic options available to the user have expanded, providing more precise control over the analysis of each spatial network problem. The toolbox requires Rhinoceros 5 (SR 10 or later). Packages: SSA Plugin – Written by Burak Beyhan at Mersin University. The Space Syntax Analysis (SSA) Plugin is operational on free and open source software for GIS (FOSS4GIS) including OpenJUMP, gvSIG, OrbisGIS, QGIS, OpenEV, Thuban, MapWindow GIS, SAGA, and R Project. The SSA Plugin calculates the standard space syntax measures, including connectivity, total depth, mean depth, global integration, local depth, local integration and control values for each feature involved in a spatial configuration, and an intelligibility value for the whole of the configuration. Users can export an adjusted graph to an external file in a Social Network Analysis (SNA) file format for further analysis of the spatial configuration concerned in the respective software environment. Packages: depthmapX – Developed by the depthmapX development team. It is an open source, multi-platform spatial network analysis package based on the original Depthmap. It can generate isovists and perform visibility graph analysis of building systems on computers running Mac OS X, Windows and Linux. It includes automatic generation of axial line networks and analysis of axial line networks and road segment line networks. depthmapX is based on the Qt framework. Packages: Spatial Design Network Analysis (sDNA 3D) – Developed since 2011 by Alain Chiaradia, Crispin Cooper and Chris Webster at Cardiff University and the University of Hong Kong. It is a free tool that unifies the use of spatial network analysis in design and research. Plugins are provided for AutoCAD (for designers), ArcGIS and the open source QGIS (for analysts and designers), and a standalone Python version works on 2D or 3D shapefiles, enabling use in other GIS software and custom projects. sDNA standardizes on the network link as the unit of analysis, and computes a wide variety of closeness, betweenness, severance and efficiency measures. sDNA works with 3D topography and 3D multilevel environments. Analyses and radii can use Euclidean, angular, topological or custom distance metrics, and link, length or custom weightings. In angular analysis mode, sDNA computes cumulative angular change along each link and angular change at junctions, resulting in angular geodesics. Packages: CASOS ORA – Developed by Dr. Kathleen M. Carley and the joint Carnegie Mellon University and Netanomics team. It is a suite of tools that supports social and spatial network analysis.
It is interoperable with other spatial analysis tools such as ArcGIS. See also Dynamic Network Analysis. Isovist_2-3 – Free software for Mac and PC that provides advanced, real-time, high-definition isovist point, path and field analysis in architectural plan and section drawings. The software also includes several Space Syntax analysis tools for calculating high-definition Integration, Mean Metric Depth, and Mean Angular Depth fields.
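As referenced in the package list above, here is a minimal sketch of the kinds of measures these packages compute: metric-weighted closeness and betweenness centrality of the sort offered by the UNA toolboxes, plus unweighted connectivity and mean depth as in the space syntax packages. It uses the third-party networkx library; the toy graph and weights are hypothetical, not taken from any of the tools described:

```python
import networkx as nx

# Toy spatial network: nodes are junctions/axial lines, edge weights
# are metric lengths (e.g. meters).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120.0), ("B", "C", 80.0), ("C", "D", 150.0),
    ("B", "D", 200.0), ("D", "E", 60.0), ("A", "E", 300.0),
])

# Centrality with metric distance as the impedance factor.
closeness = nx.closeness_centrality(G, distance="weight")
betweenness = nx.betweenness_centrality(G, weight="weight")

# Space-syntax-style measures on the unweighted (topological) graph:
# connectivity is node degree; mean depth is the average number of
# steps from a node to every other node.
n = G.number_of_nodes()
for node in G.nodes:
    depths = nx.single_source_shortest_path_length(G, node)
    mean_depth = sum(depths.values()) / (n - 1)
    print(f"{node}: connectivity={G.degree[node]}, "
          f"mean depth={mean_depth:.2f}, "
          f"closeness={closeness[node]:.3f}, "
          f"betweenness={betweenness[node]:.3f}")
```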
**Recycleddisplays.com** Recycleddisplays.com: Recycleddisplays.com is the creation of Conn Burke, a consultant, display and mannequin supplier in the retail display business since 1971. In the midst of the retail recession of the 1980s, Burke observed that a number of the retailers closing their stores had very high quality store fixtures. He began to purchase and resell these pieces alongside new stock. After further investigation into the display practices of the larger chains, he discovered that a number of these companies were renovating their stores on a regular basis and that good quality store fixtures were ending up in landfills. Burke created recycleddisplays.com to rescue these fixtures from the landfill and make high quality and unique store fixtures available to smaller and start-up retailers. The layout, design and fixturing of a store can play a large role in the success or failure of new retailers, and the ability to obtain good store displays at a fair price can be essential to an effective launch. Burke also acknowledges the need for stores to constantly update their look, and provides fixture buy-back services as well as rentals. Additionally, the company is a favorite supplier of the local film and television industry, and many of his finds have found new lives as furnishings for lofts and condos. The company now carries a mixture of rescued, recycled store fixtures alongside new items. They sell and ship across North America and are starting to branch out into Europe. Awards: In 2010, the company was nominated for a Green Toronto Award.
**Tazos** Tazos: Tazos are disks that were distributed as promotional items with products of Frito-Lay and its subsidiaries around the world. The idea behind Tazos started out similar to Pogs, whereby each Tazo contained a score value, and a game was played to 'win' Tazos from other players. Tazos: Tazos have been released in several different formats, ranging from the original circular disks, to octagonal disks, and in later years, to resemble more of a collectible card. In addition to the Japanese Pog Battle game, some Tazo series feature small incisions around the outside, allowing players to fit them together and build objects. The Star Wars series also included additional pieces which allowed players to construct spaceships. Tazos: Tazos are commonly made from a plastic base, but some series have been produced from cardboard or aluminium (such as the Australian Yu-Gi-Oh! Metallix series). Tazos series: Tazos started out in 1994 with a set of 100 disks featuring the images of Looney Tunes characters and 124 Tiny Toons Tazos. The disks were added to the products of the Mexican snacks company Sabritas and were named after the expression taconazo (to kick with the heel), a reference to another popular school game in Mexico where children open bottles with their shoes, trying to launch the caps the furthest. Other sets from around the world include: Chiquito de la Calzada (1994, set of 10 in Spain, "chiqui tazos") Disney (1994, set of 91) Pocahontas (1994, set of 50) Los Caballeros del Zodiaco English: Saint Seiya (1994, set of 40) The Simpsons (1995, set of 100) Looney Tunes (1995, set of 100) Monster Munch (1996, set of 30 plus a subsequent set of 10) Sailor Moon (1996, set of 100) The Mask (1998) Pokémon (1999, set of 51) Dinosaurio (2000, set of 51) Pokémon 2 (2000, set of 100) Digimon (2000) Pokémon 3 (2001) Cubi-Tazos featuring Scooby-Doo (TV, 2001, set of 24) Medabots (2002, set of 70 classic and 70 metalix) Dragonball GT (2003, set of 60 metalix) Yu-Gi-Oh! (2004, set of 101) Mucha Lucha (2005, set of 150) Mucha Lucha 2: La Revancha (2005, set of 180) Dragon Ball Z (2005) Megaman NT Warrior (2005, set of 29) Robots (2005, set of 10) Los Simpsons English: The Simpsons (TV, 2006–2007, set of 144) Bob Esponja English: SpongeBob SquarePants (TV, 2007, set of 163 plus 5 in Cheetos) Pokémon 4 Generación Avanzada (2008, set of 235) El Tigre: Las Aventuras De Manny Rivera (2008, set of 200) WWE (TV, 2009, set of 176) Naruto (2009) Bakugan Battle Brawlers (TV, June–August 2009 in packages of Cheetos in India, set of 26) Nickelodeon (TV, 2010, set of 195) Shrek: Para Siempre English: Shrek Forever (film, 2010 in packages of Cheetos in Peru) Bakugan Battle Brawlers (TV, 2010 in Cheetos Sorpresa in Peru with the slogan "Descubre el Poder de los Tazos" English: Discover the Power of Tazos, numbered set of 120) Star Wars (original trilogy) Chester Cheetah Space Jam Australian Football League National Rugby League Beyblades Angry Birds (2011, including codes to unlock special branded levels) Jurassic World (2018) Bad Bunny (2019) Pac-Man (2020) Countries: Tazos have been released around the world, in bags of potato chips (crisps) and other snacks including Bollycao, Cheese Tris, Cheetos, Cheetos Sorpresa, Chizitos, Doritos, Fandangos, Lay's Potato Chips, Meridian Real Thai Chicken Sabritas, Piqueo Snax, Simba Chips, Smith's Potato Crisps, Thins, Twistees, Uncle Chipps, and Walkers.
Countries that have had Tazo releases include: Australian sets Below is a list of official basic Australian Tazos and the year they were released. The Yu-Gi-Oh! Metalix set contained a couple of bonus Tazos offered with the kids' magazine K-Zone and at Shell petrol stations.
**Eifel Formation** Eifel Formation: The Eifel Formation is a geologic formation in Germany. It preserves fossils dating back to the Paleogene period.
**TAS classification** TAS classification: The TAS classification can be used to assign names to many common types of volcanic rocks based upon the relationships between the combined alkali content and the silica content. These chemical parameters are useful because the relative proportions of alkalis and silica play an important role in determining actual mineralogy and normative mineralogy. The classification is simple to use for rocks that have been chemically analyzed. Except for the following quotation from Johannsen (1937), this entry is based upon Le Maitre and others (2002). Use of the TAS classification: TAS stands for Total Alkali Silica. Before using the TAS or any other classification, however, the following words of Johannsen (1937) should be kept in mind. Use of the TAS classification: Many and peculiar are the classifications that have been proposed for igneous rocks. Their variability depends in part upon the purpose for which each was intended, and in part upon the difficulties arising from the characters of the rocks themselves. The trouble is not with the classifications but with nature which did not make things right. … Rocks must be classified in order to compare them with others, previously described, of similar composition and appearance. If this cannot be done on a genetic basis, then an artificial system must answer in order to serve as a card index to rock descriptions. Although this may be an evil thing, it is, at least, the least of several evils. The subtitle of the classification chapter by Johannsen (1937) is "Chacun a son goût" (to each his own taste). Use of the TAS classification: Furthermore, as discussed in considerable detail by Le Maitre and others (2002), the classification cannot be applied to all volcanic rocks. Certain rocks cannot be named using the diagram. For others, additional chemical, mineralogic, or textural criteria must be used, as for lamprophyres. Use of the TAS classification: The TAS classification should be applied only to rocks for which the mineral mode cannot be determined (otherwise, use a scheme based on mineralogy, such as the QAPF diagram or one of the other diagrams presented in the entry for igneous rocks). Before classifying rocks using the TAS diagram, the chemical analyses must be recalculated to 100% excluding water and carbon dioxide. The TAS diagram: The names provided by Le Maitre et al. (2002) for fields in the TAS diagram are listed below. The TAS diagram: B (Basalt) (Use normative mineralogy to subdivide) O1 (Basaltic andesite) O2 (Andesite) O3 (Dacite) R (Rhyolite) T (Trachyte or Trachydacite) (Use normative mineralogy to decide) Ph (Phonolite) S1 (Trachybasalt) *Sodic and potassic variants are Hawaiite and Potassic Trachybasalt S2 (Basaltic trachyandesite) *Sodic and potassic variants are Mugearite and Shoshonite S3 (Trachyandesite) *Sodic and potassic variants are Benmoreite and Latite Pc (Picrobasalt) U1 (Basanite or Tephrite) (Use normative mineralogy to decide) U2 (Phonotephrite) U3 (Tephriphonolite) F (Foidite) (Name according to dominant feldspathoid when possible. Melilitites also plot in this area and can be distinguished by additional chemical criteria.) Sodic as used above means that (Na2O − 2) is greater than K2O, and potassic that (Na2O − 2) is less than K2O. Yet other names have been applied to rocks particularly rich in either sodium or potassium (as are ultrapotassic igneous rocks).
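A minimal sketch of the two preparatory rules stated above: recalculating an analysis to 100% on a volatile-free basis (excluding water and carbon dioxide) and applying the sodic/potassic criterion. The oxide names and sample values are hypothetical, and no attempt is made to assign the actual diagram field:

```python
def anhydrous_normalize(oxides):
    """Recalculate a chemical analysis (wt%) to 100%,
    excluding water and carbon dioxide."""
    volatile = {"H2O", "CO2"}
    total = sum(v for k, v in oxides.items() if k not in volatile)
    return {k: 100.0 * v / total
            for k, v in oxides.items() if k not in volatile}

def sodic_or_potassic(na2o, k2o):
    """The rule above: sodic if (Na2O - 2) > K2O,
    potassic if (Na2O - 2) < K2O."""
    if na2o - 2.0 > k2o:
        return "sodic"
    if na2o - 2.0 < k2o:
        return "potassic"
    return "neither (boundary case)"

# Hypothetical analysis (wt%):
sample = {"SiO2": 47.5, "Al2O3": 15.0, "Na2O": 4.8, "K2O": 1.5,
          "CaO": 9.0, "MgO": 8.0, "FeO": 10.0, "H2O": 2.5, "CO2": 0.7}
norm = anhydrous_normalize(sample)
print(f"SiO2 (normalized): {norm['SiO2']:.1f} wt%")
print("Variant:", sodic_or_potassic(norm["Na2O"], norm["K2O"]))
```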
**Glyceryl behenate** Glyceryl behenate: Glyceryl behenate is a fat used in cosmetics, foods, and oral pharmaceutical formulations. In cosmetics, it is mainly used as a viscosity-increasing agent in emulsions. Glyceryl behenate: In pharmaceutical formulations, glyceryl behenate is mainly used as a tablet and capsule lubricant and as a lipidic coating excipient. It has been investigated for the encapsulation of various drugs such as retinoids. It has also been investigated for use in the preparation of sustained-release tablets, as a matrix-forming agent for the controlled release of water-soluble drugs, and as a lubricant in oral solid dosage formulations. It can also be used as a hot-melt coating agent sprayed onto a powder. It is widely used in cosmetics as a non-comedogenic ingredient (one that does not cause pimples), as it does not clog the oil pores of facial skin. Glyceryl behenate: It is also widely used as an ingredient in the preparation of lipidic nanoparticles such as solid lipid nanoparticles (SLN) and nanostructured lipid carriers (NLC). Chemically, glyceryl behenate is a mixture of various esters of behenic acid and glycerol (glycerides). The mixture predominantly contains the diester glyceryl dibehenate.
**History of electronic engineering** History of electronic engineering: This article details the history of electronics engineering. Chambers Twentieth Century Dictionary (1972) defines electronics as "The science and technology of the conduction of electricity in a vacuum, a gas, or a semiconductor, and devices based thereon". Electronics engineering as a profession sprang from technological improvements in the telegraph industry during the late 19th century and in the radio and telephone industries during the early 20th century. People gravitated to radio, attracted by the technical fascination it inspired, first in receiving and then in transmitting. Many who went into broadcasting in the 1920s had become "amateurs" in the period before World War I. The modern discipline of electronics engineering was to a large extent born out of telephone-, radio-, and television-equipment development and the large amount of electronic-systems development during World War II of radar, sonar, communication systems, and advanced munitions and weapon systems. In the interwar years, the subject was known as radio engineering. The word electronics began to be used in the 1940s. In the late 1950s, the term electronics engineering started to emerge. History of electronic engineering: Electronic laboratories (Bell Labs, for instance) created and subsidized by large corporations in the industries of radio, television, and telephone equipment began churning out a series of electronic advances. The electronics industry was revolutionized by the inventions of the first transistor in 1947, the integrated circuit chip in 1959, and the silicon MOSFET (metal–oxide–semiconductor field-effect transistor) in 1959. In the UK, the subject of electronics engineering became distinct from electrical engineering as a university-degree subject around 1960. (Before this time, students of electronics and related subjects like radio and telecommunications had to enroll in the electrical engineering department of the university, as no university had departments of electronics. Electrical engineering was the nearest subject with which electronics engineering could be aligned, although the similarities in subjects covered (except mathematics and electromagnetism) lasted only for the first year of three-year courses.) Electronics engineering (even before it acquired the name) facilitated the development of many technologies including wireless telegraphy, radio, television, radar, computers, and microprocessors. Wireless telegraphy and radio: Some of the devices which would enable wireless telegraphy were invented before 1900. These include the spark-gap transmitter and the coherer, with early demonstrations and published findings by David Edward Hughes (1880) and Heinrich Rudolf Hertz (1887 to 1890) and further additions to the field by Édouard Branly, Nikola Tesla, Oliver Lodge, Jagadish Chandra Bose, and Ferdinand Braun. In 1896, Guglielmo Marconi went on to develop the first practical and widely used radio-wave-based communication system. Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901. In 1904, John Ambrose Fleming, the first professor of electrical engineering at University College London, invented the first radio tube, the diode.
Then, in 1906, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode. Electronics is often considered to have begun with the invention of the triode. Within 10 years, the device was used in radio transmitters and receivers as well as systems for long distance telephone calls. Wireless telegraphy and radio: The invention of the triode amplifier, generator, and detector made audio communication by radio practical. (Reginald Fessenden's 1906 transmissions used an electro-mechanical alternator.) In 1912, Edwin H. Armstrong invented the regenerative feedback amplifier and oscillator; he also invented the superheterodyne radio receiver and could be considered the father of modern radio. The first known radio news program was broadcast 31 August 1920 by station 8MK, the unlicensed predecessor of WWJ (AM) in Detroit, Michigan. Regular wireless broadcasts for entertainment commenced in 1922 from the Marconi Research Centre at Writtle near Chelmsford, England. The station was known as 2MT and was followed by 2LO broadcasting from Strand, London. Wireless telegraphy and radio: While some early radios used some form of amplification by electric current or battery, through the mid-1920s the most common type of receiver was the crystal set. In the 1920s, amplifying vacuum tubes revolutionized both radio receivers and transmitters. Wireless telegraphy and radio: Vacuum tubes remained the preferred amplifying device for 40 years, until researchers working for William Shockley at Bell Labs invented the transistor in 1947. In the following years, transistors made small portable radios, or transistor radios, possible, as well as allowing more powerful mainframe computers to be built. Transistors were smaller and required lower voltages than vacuum tubes to work. Wireless telegraphy and radio: Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by hand. These non-integrated circuits consumed much space and power and were prone to failure and limited in speed, although they are still common in simple applications. By contrast, integrated circuits packed a large number (often millions) of tiny electrical components, mainly transistors, into a small chip around the size of a coin. Television: In 1927, Philo Farnsworth made the first public demonstration of a purely electronic television. During the 1930s, several countries began broadcasting, and after World War II it spread to millions of receivers, eventually worldwide. Ever since then, electronics have been fully present in television devices. Modern televisions and video displays have evolved from bulky electron tube technology to use more compact devices, such as plasma and liquid-crystal displays. The trend is toward even lower-power devices such as organic light-emitting diode (OLED) displays, which are likely to replace LCD and plasma technologies. Radar and radio location: During World War II, many efforts were expended in the electronic location of enemy targets and aircraft. These included radio beam guidance of bombers, electronic countermeasures, early radar systems, etc. During this time, very little if any effort was expended on consumer electronics developments. Transistors and integrated circuits: The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain at the Bell Telephone Laboratories (BTL) in 1947.
William Shockley then invented the bipolar junction transistor at BTL in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices. Transistors and integrated circuits: The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958, and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959. Transistors and integrated circuits: The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment, and it made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has led to revolutionary changes in technology, economy, culture, and thinking. Computers: A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format. Computers: Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into small pocket devices, and can be powered by a small battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous. Computers: The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers ranging from a netbook to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity. Microprocessors: By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s.
The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. A single-chip microprocessor was conceived in 1969 by Marcian Hoff; his concept was part of an order by the Japanese company Busicom for a desktop programmable electronic calculator, which Hoff wanted to build as cheaply as possible. The first realization of the single-chip microprocessor was the Intel 4004, a 4-bit processor released on a single MOS LSI chip in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. This ignited the development of the personal computer. In 1974, the Intel 8080, an 8-bit processor, made possible the building of the first personal computer, the MITS Altair 8800. The first PC was announced to the general public on the cover of the January 1975 issue of Popular Electronics. Microprocessors: Many electronics engineers today specialize in the development and programming of microprocessor-based electronic systems, known as embedded systems. Hybrid specializations such as computer engineering have emerged due to the detailed knowledge of the hardware that is required for working on such systems. Software engineers typically do not study microprocessors, unlike computer and electronics engineers. Engineers who exclusively carry out the role of programming embedded systems or microprocessors are referred to as "embedded systems engineers" or "firmware engineers".
**Scalable Networking Pack** Scalable Networking Pack: Scalable Networking Pack (SNP) is a set of additions to Microsoft's Windows Server 2003 Service Pack 1 and later that provides architectural enhancements and APIs to support network acceleration and hardware-based offload technologies. Features: TCP chimney offload – provides seamlessly integrated support for network adapters with TCP offload engines (TOE) Receive-side scaling – dynamically load-balances inbound network connections across multiple processors or cores NetDMA – enables support for advanced direct memory access technologies, such as Intel I/O Acceleration Technology (Intel I/OAT)
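As a hedged illustration only: the SNP features listed above have commonly been toggled through TCP/IP registry parameters. The value names EnableTCPChimney, EnableRSS and EnableTCPA are an assumption based on Microsoft's published guidance for Server 2003, not something stated in this article. A minimal sketch reading them with Python's standard winreg module:

```python
import winreg

# Assumed registry values controlling SNP features on Windows Server 2003
# (EnableTCPChimney = TCP chimney offload, EnableRSS = receive-side
# scaling, EnableTCPA = NetDMA); 1 = enabled, 0 = disabled.
SNP_VALUES = ["EnableTCPChimney", "EnableRSS", "EnableTCPA"]
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for name in SNP_VALUES:
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(f"{name} = {value}")
        except FileNotFoundError:
            print(f"{name} not set")
```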
**Interactive fiction** Interactive fiction: Interactive fiction, often abbreviated IF, is software simulating environments in which players use text commands to control characters and influence the environment. Works in this form can be understood as literary narratives, either in the form of interactive narratives or interactive narrations. These works can also be understood as a form of video game, either in the form of an adventure game or role-playing game. In common usage, the term refers to text adventures, a type of adventure game where the entire interface can be "text-only"; however, graphic text adventures still fall under the text adventure category if the main way to interact with the game is by typing text. Some users of the term distinguish between "puzzle-free" interactive fiction, which focuses on narrative, and "text adventures", which focus on puzzles. Interactive fiction: Due to their text-only nature, text adventures sidestepped the problem of writing for widely divergent graphics architectures. This feature meant that interactive fiction games were easily ported across all the popular platforms at the time, including CP/M (not known for gaming or strong graphics capabilities). The number of interactive fiction works is increasing steadily as new ones are produced by an online community, using freely available development systems. Interactive fiction: The term can also be used to refer to analogue versions of literary works that are not read in a linear fashion, known as gamebooks, where the reader is instead given choices at different points in the text; these decisions determine the flow and outcome of the story. The most famous example of this form of printed fiction is the Choose Your Own Adventure book series, and the collaborative "addventure" format has also been described as a form of interactive fiction. The term "interactive fiction" is sometimes also used to refer to visual novels, a type of interactive narrative software popular in Japan. Medium: Text adventures are one of the oldest types of computer games and form a subset of the adventure genre. The player uses text input to control the game, and the game state is relayed to the player via text output. Interactive fiction usually relies on reading from a screen and on typing input, although text-to-speech synthesizers allow blind and visually impaired users to play interactive fiction titles as audio games. Input is usually provided by the player in the form of simple sentences such as "get key" or "go east", which are interpreted by a text parser. Parsers may vary in sophistication; the first text adventure parsers could only handle two-word sentences in the form of verb-noun pairs. Later parsers, such as those built on ZIL (Zork Implementation Language), could understand complete sentences. Later parsers could handle increasing levels of complexity, parsing sentences such as "open the red box with the green key then go north". This level of complexity is the standard for works of interactive fiction today. Medium: Despite their lack of graphics, text adventures include a physical dimension where players move between rooms. Many text adventure games boasted their total number of rooms to indicate how much gameplay they offered. These games are unique in that they may create an illogical space, where going north from area A takes you to area B, but going south from area B does not take you back to area A. This can create mazes that do not behave as players expect, and thus players must maintain their own map.
These illogical spaces are much rarer in today's era of 3D gaming, and the Interactive Fiction community in general decries the use of mazes entirely, claiming that mazes have become arbitrary 'puzzles for the sake of puzzles' and that they can, in the hands of inexperienced designers, become immensely frustrating for players to navigate. Medium: Interactive fiction shares much in common with Multi-User Dungeons ('MUDs'). MUDs, which became popular in the mid-1980s, rely on a textual exchange and accept similar commands from players as do works of IF; however, since interactive fiction is single player, and MUDs, by definition, have multiple players, they differ enormously in gameplay styles. MUDs often focus gameplay on activities that involve communities of players, simulated political systems, in-game trading, and other gameplay mechanics that are not possible in a single player environment. Medium: Writing style Interactive fiction features two distinct modes of writing: the player input and the game output. As described above, player input is expected to be in simple command form (imperative sentences). A typical command may be: "> PULL LEVER". The responses from the game are usually written from a second-person point of view, in present tense. This is because, unlike in most works of fiction, the main character is closely associated with the player, and the events are seen to be happening as the player plays. While older text adventures often identified the protagonist with the player directly, newer games tend to have specific, well-defined protagonists with separate identities from the player. The classic essay "Crimes Against Mimesis" discusses, among other IF issues, the nature of "You" in interactive fiction. A typical response might look something like this, the response to "look in tea chest" at the start of Curses: That was the first place you tried, hours and hours ago now, and there's nothing there but that boring old book. You pick it up anyway, bored as you are. Many text adventures, particularly those designed for humour (such as Zork, The Hitchhiker's Guide to the Galaxy, and Leather Goddesses of Phobos), address the player with an informal tone, sometimes including sarcastic remarks (see the transcript from Curses, above, for an example). The late Douglas Adams, in designing the IF version of his 'Hitchhiker's Guide to the Galaxy', created a unique solution to the final puzzle of the game: the game requires the one solitary item that the player did not choose at the outset of play. Medium: Some IF works dispense with second-person narrative entirely, opting for a first-person perspective ('I') or even placing the player in the position of an observer, rather than a direct participant. In some 'experimental' IF, the concept of self-identification is eliminated entirely, and the player instead takes the role of an inanimate object, a force of nature, or an abstract concept; experimental IF usually pushes the limits of the concept and challenges many assumptions about the medium. History: 1960s and 70s Natural language processing Though neither program was developed as a narrative work, the software programs ELIZA (1964–1966) and SHRDLU (1968–1970) can formally be considered early examples of interactive fiction, as both programs used natural language processing to take input from their user and respond in a virtual and conversational manner.
ELIZA simulated a psychotherapist that appeared to provide human-like responses to the user's input, while SHRDLU employed an artificial intelligence that could move virtual objects around an environment and respond to questions asked about the environment's shape. The development of effective natural language processing would become an essential part of interactive fiction development. History: Adventure Around 1975, Will Crowther, a programmer and an amateur caver, wrote the first text adventure game, Adventure (originally called ADVENT because a filename could only be six characters long in the operating system he was using, and later named Colossal Cave Adventure). Having just gone through a divorce, he was looking for a way to connect with his two young children. Over the course of a few weekends, he wrote a text-based cave exploration game that featured a sort of guide/narrator who talked in full sentences and who understood simple two-word commands that came close to natural English. Adventure was programmed in Fortran for the PDP-10. Crowther's original version was an accurate simulation of part of the real Colossal Cave, but also included fantasy elements (such as axe-wielding dwarves and a magic bridge). History: Stanford University graduate student Don Woods discovered Adventure while working at the Stanford Artificial Intelligence Laboratory, and in 1977 obtained and expanded Crowther's source code (with Crowther's permission). Woods's changes were reminiscent of the writings of J. R. R. Tolkien, and included a troll, elves, and a volcano that some claim is based on Mount Doom, though Woods says it was not. In early 1977, Adventure spread across ARPAnet, and has survived on the Internet to this day. The game has since been ported to many other operating systems, and was included with the floppy-disk distribution of Microsoft's MS-DOS 1.0 OS. Adventure is a cornerstone of the online IF community; there currently exist dozens of different independently programmed versions, with additional elements, such as new rooms or puzzles, and various scoring systems. History: The popularity of Adventure led to the wide success of interactive fiction during the late 1970s, when home computers had little, if any, graphics capability. Many elements of the original game have survived into the present, such as the command 'xyzzy', which is now included as an Easter egg in modern games, such as Microsoft Minesweeper. Adventure was also directly responsible for the founding of Sierra Online (later Sierra Entertainment); Ken and Roberta Williams played the game and decided to design one of their own, but with graphics. History: Commercial era In 1978, Scott Adams (not to be confused with the creator of Dilbert) wrote Adventureland, which was loosely patterned after (the original) Colossal Cave Adventure. He took out a small ad in a computer magazine in order to promote and sell Adventureland, thus creating the first commercial adventure game. In 1979 he founded Adventure International, the first commercial publisher of interactive fiction. That same year, Dog Star Adventure was published in source code form in SoftSide, spawning legions of similar games in BASIC. History: The largest company producing works of interactive fiction was Infocom, which created the Zork series and many other titles, among them Trinity, The Hitchhiker's Guide to the Galaxy and A Mind Forever Voyaging. History: In June 1977, Marc Blank, Bruce K.
Daniels, Tim Anderson, and Dave Lebling began writing the mainframe version of Zork (also known as Dungeon), at the MIT Laboratory for Computer Science, directly inspired by Colossal Cave Adventure. The game was programmed in a computer language called MDL, a variant of LISP. The term Implementer was the self-given name of the creators of the text adventure series Zork. It is for this reason that game designers and programmers in interactive fiction can be referred to as implementers, often shortened to Imps, rather than writers. In early 1979, the game was completed. Ten members of the MIT Dynamics Modelling Group went on to join Infocom when it was incorporated later that year. In order to make its games as portable as possible, Infocom developed the Z-machine, a custom virtual machine that could be implemented on a large number of platforms, and took standardized "story files" as input. In a non-technical sense, Infocom was responsible for developing the interactive style that would be emulated by many later interpreters. The Infocom parser was widely regarded as the best of its era. It accepted complex, complete sentence commands like "put the blue book on the writing desk" at a time when most of its competitors' parsers were restricted to simple two-word verb-noun combinations such as "put book". The parser was actively upgraded with new features like undo and error correction, and later games would 'understand' multiple sentence input: 'pick up the gem and put it in my bag. take the newspaper clipping out of my bag then burn it with the book of matches'. History: Infocom and other companies offered optional commercial feelies (physical props associated with a game). The tradition of 'feelies' (and the term itself) is believed to have originated with Deadline (1982), the third Infocom title after Zork I and II. When writing this game, it was not possible to include all of the information in the limited (80 KB) disk space, so Infocom created the first feelies for this game; extra items that gave more information than could be included within the digital game itself. These included police interviews, the coroner's findings, letters, crime scene evidence and photos of the murder scene. History: These materials were very difficult for others to copy or otherwise reproduce, and many included information that was essential to completing the game. Seeing the potential benefits of both aiding game-play immersion and providing a measure of creative copy-protection, in addition to acting as a deterrent to software piracy, Infocom and later other companies began creating feelies for numerous titles. In 1987, Infocom released a special version of the first three Zork titles together with plot-specific coins and other trinkets. This concept would be expanded as time went on, such that later game feelies would contain passwords, coded instructions, page numbers, or other information that would be required to successfully complete the game. History: 1980s United States Interactive fiction became a standard product for many software companies. By 1982 Softline wrote that "the demands of the market are weighted heavily toward hi-res graphics" in games like Sierra's The Wizard and the Princess and its imitators. Such graphic adventures became the dominant form of the genre on computers with graphics, like the Apple II. By 1982 Adventure International began releasing versions of its games with graphics. The company went bankrupt in 1985. Synapse Software and Acornsoft were also closed in 1985.
This left Infocom as the leading company producing text-only adventure games on the Apple II, with sophisticated parsers and writing, still advertising its lack of graphics as a virtue. The company was bought by Activision in 1986 after the failure of Cornerstone, Infocom's database software program, and stopped producing text adventures a few years later. Soon after, Telarium/Trillium also closed. History: Outside the United States Probably the first commercial work of interactive fiction produced outside the U.S. was the dungeon crawl game of Acheton, produced in Cambridge, England, and first commercially released by Acornsoft (later expanded and reissued by Topologika). Other leading companies in the UK were Magnetic Scrolls and Level 9 Computing. Also worthy of mention are Delta 4, Melbourne House, and the homebrew company Zenobi. History: In the early 1980s Edu-Ware also produced interactive fiction for the Apple II, as designated by the "if" graphic that was displayed on startup. Their titles included the Prisoner and Empire series (Empire I: World Builders, Empire II: Interstellar Sharks, Empire III: Armageddon). History: In 1981, CE Software published SwordThrust as a commercial successor to the Eamon gaming system for the Apple II. SwordThrust and Eamon were simple two-word parser games with many role-playing elements not available in other interactive fiction. While seven different SwordThrust titles were published, the series was vastly overshadowed by the non-commercial Eamon system, which allowed private authors to publish their own titles in the series. By March 1984, there were 48 titles published for the Eamon system (and over 270 titles in total as of March 2013). History: In Italy, interactive fiction games were mainly published and distributed via tapes included with various magazines. The largest number of games were published in the two magazines Viking and Explorer, with versions for the main 8-bit home computers (ZX Spectrum, Commodore 64, and MSX). The software house producing those games was Brainstorm Enterprise, and the most prolific IF author was Bonaventura Di Bello, who produced 70 games in the Italian language. The wave of interactive fiction in Italy lasted for a couple of years thanks to the various magazines promoting the genre, then faded; it remains to this day a topic of interest for a small group of fans and lesser-known developers, celebrated on websites and in related newsgroups. History: In Spain, interactive fiction was considered a minority genre, and was not very successful. The first Spanish interactive fiction commercially released was Yenght in 1983, by Dinamic Software, for the ZX Spectrum. Later on, in 1987, the same company produced an interactive fiction about Don Quijote. After several other attempts, the company Aventuras AD, which emerged from Dinamic, became the main interactive fiction publisher in Spain, including titles like a Spanish adaptation of Colossal Cave Adventure, an adaptation of the Spanish comic El Jabato, and mainly the Ci-U-Than trilogy, composed of La diosa de Cozumel (1990), Los templos sagrados (1991) and Chichen Itzá (1992). During this period, the Club de Aventuras AD (CAAD), the main Spanish-speaking community around interactive fiction in the world, was founded, and after the end of Aventuras AD in 1992, the CAAD continued on its own, first with their own magazine, and then, with the advent of the Internet, with the launch of an active internet community that still produces non-commercial interactive fiction today.
History: During the 1990s Legend Entertainment was founded by Bob Bates and Mike Verdu in 1989. It started out from the ashes of Infocom. The text adventures produced by Legend Entertainment used (high-resolution) graphics as well as sound. Some of their titles include Eric the Unready, the Spellcasting series and Gateway (based on Frederik Pohl's novels). History: The last text adventure created by Legend Entertainment was Gateway II (1992), while the last game ever created by Legend was Unreal II: The Awakening (2003) – a well-known first-person shooter action game using the Unreal Engine for both impressive graphics and realistic physics. In 2004, Legend Entertainment was acquired by Atari, who published Unreal II and released it for both Microsoft Windows and Microsoft's Xbox. History: Many other companies such as Level 9 Computing, Magnetic Scrolls and Delta 4 had closed by 1992. In 1991 and 1992, Activision released The Lost Treasures of Infocom in two volumes, a collection containing most of Infocom's games, followed in 1996 by Classic Text Adventure Masterpieces of Infocom. History: Modern era After the decline of the commercial interactive fiction market in the 1990s, an online community eventually formed around the medium. In 1987, the Usenet newsgroup rec.arts.int-fiction was created, and was soon followed by rec.games.int-fiction. By custom, the topic of rec.arts.int-fiction is interactive fiction authorship and programming, while rec.games.int-fiction encompasses topics related to playing interactive fiction games, such as hint requests and game reviews. As of late 2011, discussions between writers had mostly moved from rec.arts.int-fiction to the Interactive Fiction Community Forum. One of the most important early developments was the reverse-engineering of Infocom's Z-Code format and Z-Machine virtual machine in 1987 by a group of enthusiasts called the InfoTaskForce, and the subsequent development of an interpreter for Z-Code story files. As a result, it became possible to play Infocom's work on modern computers. History: For years, amateurs within the IF community produced interactive fiction works of relatively limited scope using the Adventure Game Toolkit and similar tools. History: The breakthrough that allowed the interactive fiction community to truly prosper, however, was the creation and distribution of two sophisticated development systems. In 1987, Michael J. Roberts released TADS, a programming language designed to produce works of interactive fiction. In 1993, Graham Nelson released Inform, a programming language and set of libraries which compiled to a Z-Code story file. Each of these systems allowed anyone with sufficient time and dedication to create a game, and caused a growth boom in the online interactive fiction community. History: Despite the lack of commercial support, the availability of high quality tools allowed enthusiasts of the genre to develop new high quality games. Competitions such as the annual Interactive Fiction Competition for short works, the Spring Thing for longer works, and the XYZZY Awards further helped to improve the quality and complexity of the games. Modern games go much further than the original Adventure style, improving upon Infocom games (which relied extensively on puzzle solving and, to a lesser extent, on communication with non-player characters) to include experimentation with writing and story-telling techniques.
History: While the majority of modern interactive fiction is distributed for free, there are some commercial endeavors. In 1998, Michael Berlyn, a former Implementor at Infocom, started a new game company, Cascade Mountain Publishing, whose goals were to publish interactive fiction. Despite the Interactive Fiction community providing social and financial backing, Cascade Mountain Publishing went out of business in 2000. History: Other commercial endeavours include Peter Nepstad's 1893: A World's Fair Mystery, several games by Howard Sherman published as Malinche Entertainment, The General Coffee Company's Future Boy!, Cypher (a graphically enhanced cyberpunk game), and various titles by Textfyre. Emily Short was commissioned to develop the game City of Secrets, but the project fell through and she ended up releasing it herself. History: Artificial Intelligence The increased effectiveness of natural-language generation in artificial intelligence (AI) has led to instances of interactive fiction which use AI to dynamically generate new, open-ended content, instead of being constrained to pre-written material. The most notable example of this is AI Dungeon, released in 2019, which generates content using the GPT-3 (previously GPT-2) natural-language-generating neural network, created by OpenAI. Notable works: 1970s Colossal Cave Adventure, by Will Crowther and Don Woods, was the first text adventure ever made. Adventureland, by Scott Adams, is considered one of the defining works of interactive fiction. The Zork series by Infocom (1979 onwards) was the first text adventure to see widespread commercial release. 1980s Softporn Adventure, by Chuck Benton, a popular adult game that inspired the Leisure Suit Larry video game series. The Hobbit, by Philip Mitchell and Veronika Megler of Beam Software (1982) was an early reinterpretation of an existing novel into interactive fiction, with several independent non-player characters. Planetfall, by Steve Meretzky of Infocom (1983), featured Floyd the robot, which Allen Varney claimed to be the first game character who evoked a strong emotional commitment from players. Suspended by Michael Berlyn was an Infocom game with a large vocabulary and unique character personalities. The Hitchhiker's Guide to the Galaxy, by Douglas Adams and Steve Meretzky of Infocom (1984), involved the author of the original work in the reinterpretation. A Mind Forever Voyaging, by Steve Meretzky of Infocom (1985), a story-heavy, puzzle-light game often touted as Infocom's first serious work of science fiction. The Pawn, by Magnetic Scrolls, was known for understanding complex instructions like 'PLANT THE POT PLANT IN THE PLANT POT WITH THE TROWEL'. Silicon Dreams, by Level 9 Computing (1986), a trilogy of interactive science fiction games. Leather Goddesses of Phobos by Steve Meretzky, a risqué sci-fi parody from Infocom. Amnesia (1987), by Hugo Award and Nebula Award winning science fiction and fantasy author Thomas M. Disch, a text-only adventure published by Electronic Arts. 1990s Curses, by Graham Nelson (1993), the first game written in the Inform programming language. Considered one of the first "modern" games to meet the high standards set by Infocom's best titles. DUNNET, by Ron Schnell (1992 eLisp port from the 1983 MacLisp original), surreal text adventure that has shipped with GNU Emacs since 1994, and thus comes with Mac OS X and most Linux distributions; often mistaken for an Easter egg. Anchorhead, by Michael S.
Gentry (1998) is a highly rated horror story inspired by H. P. Lovecraft's Cthulhu Mythos. Photopia, by Adam Cadre (1998), one of the first almost entirely puzzle-free games. It won the annual Interactive Fiction Competition in 1998. Spider and Web, by Andrew Plotkin (1998), an award-winning espionage story with many twists and turns. Varicella by Adam Cadre (1999). It won four XYZZY Awards in 1999, including the XYZZY Award for Best Game, and had a scholarly essay written about it. 2000s Galatea, by Emily Short (2000). Galatea is focused entirely on interaction with the animated statue of the same name. Galatea has one of the most complex interaction systems for a non-player character in an interactive fiction game. Adam Cadre called Galatea "the best NPC ever". 9:05 by Adam Cadre. It is commonly seen as an easy gateway for people to get involved with interactive fiction. Slouching Towards Bedlam, by Star C. Foster and Daniel Ravipinto (2003). Set in a steampunk setting, the game integrates meta-game functionality (saving, restoring, restarting) into the game world itself. The game won four XYZZY Awards. The Dreamhold, by Andrew Plotkin (2004). Designed as a tutorial game for those new to IF, it provides an extensive help section. Façade by Michael Mateas, Andrew Stern and John Grieve (2005). An interactive drama using natural language processing. Fallen London, also known as Echo Bazaar, an open-world work of interactive fiction, by Failbetter Games. Lost Pig by Admiral Jota (2007). A comedic interactive fiction about an orc finding a pig that escaped from his farm. It won best game, best writing, best individual non-player character, and best individual player character in the 2007 XYZZY Awards. 2010s Howling Dogs by Porpentine (2012), hypertext fiction that explores escapism. It is considered one of the most prominent Twine games and was in the 2017 Whitney Biennial. A Dark Room by Michael Townsend (2013), a text-based mystery story and idle game. The story is only told through environmental cues, rather than dialogue or exposition. 80 Days by inkle (2014). An interactive adventure based on the novel by Jules Verne, it was nominated by TIME as their Game of the Year for 2014. Depression Quest by Zoë Quinn (2014). A text-based game in which players take the place of a character who is clinically depressed. The release of the game is considered to be the catalyst of the Gamergate controversy. AI Dungeon by Nick Walton (2019). It is notable for using artificial intelligence to dynamically generate an essentially unlimited amount of content. Software: Development systems The original interactive fiction Colossal Cave Adventure was programmed in Fortran, a language originally developed by IBM. Adventure's parser could only handle two-word sentences in the form of verb-noun pairs. Software: Infocom's games of 1979–88, such as Zork, were written using a LISP-like programming language called ZIL (Zork Implementation Language or Zork Interactive Language; it was referred to as both) that compiled into a byte code able to run on a standardized virtual machine called the Z-machine. As the games were text-based and used variants of the same Z-machine interpreter, the interpreter only had to be ported to a computer once, rather than once per game. Each game file included a sophisticated parser which allowed the user to type complex instructions to the game.
Unlike earlier works of interactive fiction, which only understood commands of the form 'verb noun', Infocom's parser could understand a wider variety of sentences. For instance, one might type "open the large door, then go west", or "go to the hall". With the Z-machine, Infocom was able to release most of their games for most popular home computers of the time simultaneously, including the Apple II, Atari 8-bit family, IBM PC compatibles, Amstrad CPC/PCW (one disc worked on both machines), Commodore 64, Commodore Plus/4, Commodore 128, Kaypro CP/M, TI-99/4A, Macintosh, Atari ST, the Amiga and the Radio Shack TRS-80. Infocom was also known for shipping creative props, or "feelies" (and even "smellies"), with its games. Software: During the 1990s, interactive fiction was mainly written with C-like languages, such as TADS 2 and Inform 6. A number of systems for writing interactive fiction now exist. The most popular remain Inform, TADS, and ADRIFT, but they diverged in their approach to IF-writing during the 2000s, giving today's IF writers a clear choice between distinct approaches. By the 2006 IFComp, most games were written for Inform, with a strong minority of games for TADS and ADRIFT, followed by a small number of games for other systems. While familiarity with a programming language leads many new authors to attempt to produce their own complete IF application, most established IF authors recommend use of a specialised IF language, arguing that such systems allow authors to avoid the technicalities of producing a full-featured parser, while allowing broad community support. The choice of authoring system usually depends on the author's desired balance of ease of use versus power, and the portability of the final product. Other development systems include: David Malmberg's Adventure Game Toolkit (AGT), Incentive Software's Graphic Adventure Creator (GAC), Inkle's inklewriter, Professional Adventure Writer, Gilsoft's The Quill, and Twine. Interpreters and virtual machines Interpreters are the software used to play the works of interactive fiction created with a development system. Since they need to interact with the player, the "story files" created by development systems are programs in their own right. Rather than running directly on any one computer, they are programs run by interpreters, or virtual machines, which are designed specially for IF. They may be part of the development system, or can be compiled together with the work of fiction as a standalone executable file. Software: The Z-machine was designed by the founders of Infocom in 1979. They were influenced by the then-new idea of a virtual Pascal computer, but replaced P with Z for Zork, the celebrated adventure game of 1977–79. The Z-machine evolved during the 1980s, but over 30 years later it remains in use essentially unchanged. Glulx was designed by Andrew Plotkin in the late 1990s as a new-generation IF virtual machine. It overcomes the technical constraints of the Z-machine by being a 32-bit rather than 16-bit processor. Frotz is a modern Z-machine interpreter originally written in C by Stefan Jokisch in 1995 for DOS. Over time it was ported to other platforms, such as Unix, RISC OS, Mac OS and most recently iOS. Modern Glulx interpreters are based on "Glulxe", by Andrew Plotkin, and "Git", by Iain Merrick. Other interpreters include Zoom for Mac OS X, or for Unix or Linux, maintained by Andrew Hunter, and Spatterlight for Mac OS X, maintained by Tor Andersson.
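To make the parser contrast described above concrete, here is a minimal sketch of an Adventure-style two-word parser. All vocabulary and names are hypothetical, and Infocom's actual ZIL parser was considerably more sophisticated; this only illustrates why "open the large door, then go west" was out of reach for earlier games.

```python
# Hypothetical sketch of an Adventure-style two-word parser; not Infocom's
# ZIL parser, which also handled articles, conjunctions, and multi-clause input.
VERBS = {"open", "take", "go", "look"}
NOUNS = {"door", "lamp", "west", "hall"}

def parse(command):
    """Accept only exact verb-noun pairs, as early two-word parsers did."""
    words = command.lower().split()
    if len(words) == 2 and words[0] in VERBS and words[1] in NOUNS:
        return (words[0], words[1])
    return None  # anything richer is rejected outright

print(parse("open door"))                          # ('open', 'door')
print(parse("open the large door, then go west"))  # None
```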
Software: Distribution In addition to commercial distribution venues and individual websites, many works of free interactive fiction are distributed through community websites. These include the Interactive Fiction Database (IFDb), the Interactive Fiction Reviews Organization (IFRO), a game catalog and recommendation engine, and the Interactive Fiction Archive. Software: Works may be distributed for playing in a separate interpreter, in which case they are often made available in the Blorb package format that many interpreters support. A filename ending in .zblorb is a story file intended for a Z-machine interpreter in a Blorb wrapper, while a filename ending in .gblorb is a story file intended for a Glulx interpreter in a Blorb wrapper. Less commonly, IF files are also seen without a Blorb wrapping, though this usually means cover art, help files, and so forth are missing, like a book with the covers torn off. Z-machine story files usually have names ending in .z5 or .z8, the number being a version number, and Glulx story files usually end in .ulx. Software: Alternatively, works may be distributed for playing in a web browser. For example, the 'Parchment' project is a web browser-based IF interpreter for both Z-machine and Glulx files. Some software, such as Twine, publishes directly to HTML, the standard language used to create web pages, reducing the requirement for an interpreter or virtual machine.
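The extension conventions described above lend themselves to a simple dispatch table. A minimal sketch, using a hypothetical helper that merely classifies filenames by the extensions named in the text:

```python
# Hypothetical helper mapping story-file extensions (as described above)
# to the virtual machine expected to run them.
import os

VM_BY_EXTENSION = {
    ".z5": "Z-machine",
    ".z8": "Z-machine",
    ".zblorb": "Z-machine (Blorb-wrapped)",
    ".ulx": "Glulx",
    ".gblorb": "Glulx (Blorb-wrapped)",
}

def target_vm(filename):
    _, ext = os.path.splitext(filename.lower())
    return VM_BY_EXTENSION.get(ext, "unknown")

print(target_vm("curses.z5"))     # Z-machine
print(target_vm("story.gblorb"))  # Glulx (Blorb-wrapped)
```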
**PTPN14** PTPN14: Tyrosine-protein phosphatase non-receptor type 14 is an enzyme that in humans is encoded by the PTPN14 gene. Function: The protein encoded by this gene is a member of the PTP family and PTPN14 subfamily of tyrosine protein phosphatases. PTPs are known to be signalling molecules that regulate a variety of cellular processes including cell growth, differentiation, mitotic cycle, and oncogenic transformation. This PTP contains an N-terminal noncatalytic domain similar to that of band 4.1 superfamily cytoskeleton-associated proteins, which suggested the membrane or cytoskeleton localization of this protein. The specific function of this PTP has not yet been determined. Interactions: PTPN14 has been shown to interact with Beta-catenin.
**Color of water** Color of water: The color of water varies with the ambient conditions in which that water is present. While relatively small quantities of water appear to be colorless, pure water has a slight blue color that becomes deeper as the thickness of the observed sample increases. The hue of water is an intrinsic property and is caused by selective absorption and scattering of blue light. Dissolved elements or suspended impurities may give water a different color. Intrinsic color: The intrinsic color of liquid water may be demonstrated by looking at a white light source through a long pipe that is filled with purified water and closed at both ends with a transparent window. The light cyan color is caused by weak absorption in the red part of the visible spectrum. Absorptions in the visible spectrum are usually attributed to excitations of electronic energy states in matter. Water is a simple three-atom molecule, H2O, and all its electronic absorptions occur in the ultraviolet region of the electromagnetic spectrum and are therefore not responsible for the color of water in the visible region of the spectrum. The water molecule has three fundamental modes of vibration. Two stretching vibrations of the O–H bonds in the gaseous state of water occur at ν1 = 3650 cm−1 and ν3 = 3755 cm−1. Absorption due to these vibrations occurs in the infrared region of the spectrum. The absorption in the visible spectrum is due mainly to the harmonic ν1 + 3ν3 = 14,318 cm−1, which is equivalent to a wavelength of 698 nm. In the liquid state at 20 °C, these vibrations are red-shifted by hydrogen bonding, resulting in red absorption at 740 nm; other harmonics, such as ν1 + ν2 + 3ν3, give red absorption at 660 nm. The absorption curve for heavy water (D2O) is of a similar shape, but is shifted further towards the infrared end of the spectrum, because the vibrational transitions have a lower energy. For this reason, heavy water does not absorb red light, and thus large bodies of D2O would lack the characteristic cyan color of the more commonly found light water (1H2O). Absorption intensity decreases markedly with each successive overtone, resulting in very weak absorption for the third overtone. For this reason, the pipe needs to have a length of a meter or more and the water must be purified by microfiltration to remove any particles that could produce Mie scattering. Color of lakes and oceans: Lakes and oceans appear cyan for several reasons. One is that the surface of the water reflects the color of the sky, which ranges from cyan to light azure. It is a common misconception that this reflection is the sole reason bodies of water appear cyan, though it can contribute. This contribution usually makes the body of water appear more a shade of azure rather than cyan, depending on how bright the sky is. Water in swimming pools with white-painted sides and bottom will appear cyan, even in indoor pools where there is no sky to be reflected. The deeper the pool, the more intense the cyan color becomes. Some of the light hitting the surface of the ocean is reflected, but most of it penetrates the water surface, interacting with water molecules and other substances in the water. Water molecules can vibrate in three different modes when they interact with light. The red, orange, and yellow wavelengths of light are absorbed, so the remaining light seen is composed of green, cyan, and blue wavelengths. This is the main reason the ocean's color is cyan.
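The 698 nm figure quoted above follows directly from the wavenumber, since wavelength is the reciprocal of wavenumber (λ = 1/ν̃). A quick check:

```python
# Converting the overtone wavenumber quoted above to a wavelength: lambda = 1 / wavenumber.
wavenumber_cm = 14318                  # cm^-1, the nu1 + 3*nu3 combination band
wavelength_cm = 1 / wavenumber_cm
wavelength_nm = wavelength_cm * 1e7    # 1 cm = 10^7 nm
print(round(wavelength_nm, 1))         # ~698.4 nm, in the red part of the spectrum
```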
The relative contribution of reflected skylight and of the light scattered back from the depths is strongly dependent on the observation angle. Color of lakes and oceans: Scattering from suspended particles also plays an important role in the color of lakes and oceans, causing the water to look greener or bluer in different areas. A few tens of meters of water will absorb all light, so without scattering, all bodies of water would appear black. Because most lakes and oceans contain suspended living matter and mineral particles, light from above is scattered and some of it is reflected upwards. Scattering from suspended particles would normally give a white color, as with snow, but because the light first passes through many meters of cyan-colored liquid, the scattered light appears cyan. In extremely pure water—as is found in mountain lakes, where scattering from particles is very low—the scattering from water molecules themselves also contributes a cyan color. Diffuse sky radiation due to Rayleigh scattering in the atmosphere along one's line of sight gives distant objects a cyan or light azure tint. This is most commonly noticed with distant mountains, but also contributes to the cyan appearance of the ocean in the distance. Color of glaciers: Glaciers are large bodies of ice and snow formed in cold climates by processes involving the compaction of fallen snow. While snowy glaciers appear white from a distance, up close and when shielded from direct ambient light, glaciers usually appear a deep blue due to the long path lengths of the internal reflected light. Relatively small amounts of regular ice appear white because plenty of air bubbles are present, and also because small quantities of water appear to be colorless. In glaciers, on the other hand, the pressure causes the air bubbles, trapped in the accumulated snow, to be squeezed out, increasing the density of the created ice. Large quantities of water appear cyan; therefore a large piece of compressed ice, or a glacier, would also appear cyan. Color of water samples: Dissolved and particulate material in water can cause it to appear more green, tan, brown, or red. For instance, dissolved organic compounds called tannins can result in dark brown colors, or algae floating in the water (particles) can impart a green color. Color variations can be measured with reference to a standard color scale. Two examples of standard color scales for natural water bodies are the Forel-Ule scale and the Platinum-Cobalt scale. For example, slight discoloration is measured against the Platinum-Cobalt scale in Hazen units (HU). The color of a water sample can be reported as: Apparent color is the color of a body of water being reflected from the surface of the water, and consists of color from both dissolved and suspended components. Apparent color may also be changed by variations in sky color or the reflection of nearby vegetation. Color of water samples: True color is measured after a sample of water has been collected and purified (either by centrifuging or filtration). Pure water tends to look cyan in color, and a sample can be compared to pure water using a predetermined color standard or by comparing spectrophotometer readings. Testing for color can be a quick and easy test which often reflects the amount of organic material in the water, although certain inorganic components like iron or manganese can also impart color. Color of water samples: Water color can reveal physical, chemical and bacteriological conditions.
In drinking water, green can indicate copper leaching from copper plumbing and can also represent algae growth. Blue can also indicate copper, or might be caused by the siphoning of industrial cleaners from the tanks of commodes, commonly known as backflow. Reds can be signs of rust from iron pipes or airborne bacteria from lakes, etc. Black water can indicate growth of sulfur-reducing bacteria inside a hot water tank set to too low a temperature. This usually has a strong sulfur or rotten egg (H2S) odor and is easily corrected by draining the water heater and increasing the temperature to 49 °C (120 °F) or higher. The odor will always be in the hot water pipes if sulfate-reducing bacteria are the cause and never in the cold water plumbing. Learning the water impurity indication color spectrum can make identifying and solving cosmetic, bacteriological and chemical problems easier. Water quality and color: The presence of color in water does not necessarily indicate that the water is not drinkable. Water with high clarity is generally more cyan in color due to low concentrations of particles and/or dissolved substances. Color-causing particulate substances can be easily removed by filtration. Color-causing dissolved substances such as tannins are only toxic to animals in large concentrations. Color from dissolved substances is not removed by typical water filters; however, the use of coagulants may succeed in trapping the color-causing compounds within the resulting precipitate. Water quality and color: Other factors can affect the color seen: Particles and solutes can absorb light, as in tea or coffee. Green algae in rivers and streams often lend a blue-green color. The Red Sea has occasional blooms of red Trichodesmium erythraeum algae. Water quality and color: Particles in water can scatter light. The Colorado River is often muddy red because of suspended reddish silt in the water—this gives the river its name, from Spanish colorado, 'colored, red'. Some mountain lakes and streams with finely ground rock, such as glacial flour, are turquoise. Light scattering by suspended matter is required in order that the blue light produced by water's absorption can return to the surface and be observed. Such scattering can also shift the spectrum of the emerging photons toward the green, a color often seen when water laden with suspended particles is observed. Color names: Various cultures divide the semantic field of colors differently from English usage, and some do not distinguish between blue and green in the same way. An example is Welsh, where glas can mean blue or green, or Vietnamese, where xanh likewise can mean either. Conversely, in Russian and some other languages, there is no single word for blue, but rather different words for light blue (голубой, goluboy) and dark blue (синий, siniy). Color names: Other color names assigned to bodies of water are sea green and ultramarine blue. Unusual oceanic colorings have given rise to the terms red tide and black tide. Color names: The Ancient Greek poet Homer uses the epithet "wine-dark sea"; in addition, he also describes the sea as "grey". William Ewart Gladstone has suggested that this is because the Ancient Greeks classified colors primarily by luminosity rather than hue, while others believe Homer was color blind. The ancient Indian Vedas describe the life-giving properties of water as divine and personify water as the god Varuna, whose color is described as blue.
In the Gayatri mantra associated with Varuna, the phrase "neela purusha" ("the blue one") appears in the second line, addressing the water deity.
**Fluconazole** Fluconazole: Fluconazole is an antifungal medication used for a number of fungal infections. This includes candidiasis, blastomycosis, coccidioidomycosis, cryptococcosis, histoplasmosis, dermatophytosis, and pityriasis versicolor. It is also used to prevent candidiasis in those who are at high risk, such as following organ transplantation, low birth weight babies, and those with low blood neutrophil counts. It is given either by mouth or by injection into a vein. Common side effects include vomiting, diarrhea, rash, and increased liver enzymes. Serious side effects may include liver problems, QT prolongation, and seizures. During pregnancy it may increase the risk of miscarriage, while large doses may cause birth defects. Fluconazole is in the azole antifungal family of medications. It is believed to work by affecting the fungal cellular membrane. Fluconazole was patented in 1981 and came into commercial use in 1988. It is on the World Health Organization's List of Essential Medicines. Fluconazole is available as a generic medication. In 2020, it was the 174th most commonly prescribed medication in the United States, with more than 3 million prescriptions. Medical uses: Fluconazole is a first-generation triazole antifungal medication. It differs from earlier azole antifungals (such as ketoconazole) in that its structure contains a triazole ring instead of an imidazole ring. While the imidazole antifungals are mainly used topically, fluconazole and certain other triazole antifungals are preferred when systemic treatment is required because of their improved safety and predictable absorption when administered orally. Fluconazole's spectrum of activity includes most Candida species (but not Candida krusei or Candida glabrata), Cryptococcus neoformans, some dimorphic fungi, and dermatophytes, among others. Common uses include: The treatment of non-systemic Candida infections of the vagina ("yeast infections"), throat, and mouth. Medical uses: Certain systemic Candida infections in people with healthy immune systems, including infections of the bloodstream, kidney, or joints. Other antifungals are usually preferred when the infection is in the heart or central nervous system, and for the treatment of active infections in people with weak immune systems. The prevention of Candida infections in people with weak immune systems, such as those neutropenic due to cancer chemotherapy, those with advanced HIV infections, transplant patients, and premature infants. As a second-line agent for the treatment of cryptococcal meningoencephalitis, a fungal infection of the central nervous system. Medical uses: Resistance Antifungal resistance to drugs in the azole class tends to occur gradually over the course of prolonged drug therapy, resulting in clinical failure in immunocompromised patients (e.g., patients with advanced HIV receiving treatment for thrush or esophageal Candida infection). In C. albicans, resistance occurs by way of mutations in the ERG11 gene, which codes for 14α-demethylase. These mutations prevent the azole drug from binding, while still allowing binding of the enzyme's natural substrate, lanosterol. Development of resistance to one azole in this way will confer resistance to all drugs in the class. Another resistance mechanism employed by both C. albicans and C. glabrata is increasing the rate of efflux of the azole drug from the cell, by both ATP-binding cassette and major facilitator superfamily transporters.
Other gene mutations are also known to contribute to the development of resistance. C. glabrata develops resistance by upregulating CDR genes, and resistance in C. krusei is mediated by reduced sensitivity of the target enzyme to inhibition by the agent. The full spectrum of fungal susceptibility and resistance to fluconazole can be found in TOKU-E's product data sheet. According to the United States Centers for Disease Control, fluconazole resistance among Candida strains in the U.S. is about 7%. Contraindications: Fluconazole is contraindicated in patients who: drink alcohol; have known hypersensitivity to other azole medicines such as ketoconazole; are taking terfenadine, if 400 mg per day multidose of fluconazole is administered; are taking quinidine concomitantly, especially when fluconazole is administered in high dosages; or take SSRIs such as fluoxetine or sertraline. Side effects: Adverse drug reactions associated with fluconazole therapy include: Common (≥1% of patients): rash, headache, dizziness, nausea, vomiting, abdominal pain, diarrhea, and/or elevated liver enzymes. Infrequent (0.1–1% of patients): anorexia, fatigue, constipation. Rare (<0.1% of patients): oliguria, hypokalaemia, paraesthesia, seizures, alopecia, Stevens–Johnson syndrome, thrombocytopenia, other blood dyscrasias, serious hepatotoxicity including liver failure, anaphylactic/anaphylactoid reactions. Very rare: prolonged QT interval, torsades de pointes. The FDA now advises that treatment with chronic, high doses of fluconazole during the first trimester of pregnancy may be associated with a rare and distinct set of birth defects in infants. If taken during pregnancy, it may result in harm. These cases of harm, however, were only in women who took large doses for most of the first trimester. Fluconazole is secreted in human milk at concentrations similar to plasma. Therefore, the use of fluconazole in lactating mothers is not recommended. Fluconazole therapy has been associated with QT interval prolongation, which may lead to serious cardiac arrhythmias. Thus, it is used with caution in patients with risk factors for prolonged QT interval, such as electrolyte imbalance or use of other drugs that may prolong the QT interval (particularly cisapride and pimozide). Fluconazole has also rarely been associated with severe or lethal hepatotoxicity, so liver function tests are usually performed regularly during prolonged fluconazole therapy. In addition, it is used with caution in patients with pre-existing liver disease. Some people are allergic to azoles, so those allergic to other azole drugs might be allergic to fluconazole. Some azole drugs may also disrupt estrogen production in pregnancy, affecting pregnancy outcome. Side effects: Fluconazole taken at low doses is in FDA pregnancy category C. However, high doses have been associated with a rare and distinct set of birth defects in infants. If taken at these doses, the pregnancy category is changed from category C to category D. Pregnancy category D means there is positive evidence of human fetal risk based on human data. In some cases, the potential benefits from use of the drug in pregnant women with serious or life-threatening conditions may be acceptable despite its risks. Fluconazole should not be taken during pregnancy, or if one could become pregnant during treatment, without first consulting a doctor.
Oral fluconazole is not associated with a significantly increased risk of birth defects overall, although it does increase the odds ratio of tetralogy of Fallot; the absolute risk is still low. Women using fluconazole during pregnancy have a 50% higher risk of spontaneous abortion. Fluconazole should not be taken with cisapride (Propulsid) due to the possibility of serious, even fatal, heart problems. In rare cases, severe allergic reactions including anaphylaxis may occur. Powder for oral suspension contains sucrose and should not be used in patients with hereditary fructose intolerance, glucose/galactose malabsorption, or sucrase-isomaltase deficiency. Capsules contain lactose and should not be given to patients with rare hereditary problems of galactose intolerance, Lapp lactase deficiency, or glucose-galactose malabsorption. Interactions: Fluconazole is an inhibitor of the human cytochrome P450 system, particularly the isozyme CYP2C19 (and CYP3A4 and CYP2C9 to a lesser extent). In theory, therefore, fluconazole decreases the metabolism and increases the concentration of any drug metabolised by these enzymes. In addition, its potential effect on the QT interval increases the risk of cardiac arrhythmia if used concurrently with other drugs that prolong the QT interval. Berberine has been shown to exert synergistic effects with fluconazole even in drug-resistant Candida albicans infections. Fluconazole may increase the serum concentration of erythromycin (Risk X: avoid combination). Pharmacology: Pharmacodynamics Like other imidazole- and triazole-class antifungals, fluconazole inhibits the fungal cytochrome P450 enzyme 14α-demethylase. Mammalian demethylase activity is much less sensitive to fluconazole than fungal demethylase. This inhibition prevents the conversion of lanosterol to ergosterol, an essential component of the fungal cytoplasmic membrane, and causes the subsequent accumulation of 14α-methyl sterols. Fluconazole is primarily fungistatic; however, it may be fungicidal against certain organisms in a dose-dependent manner, specifically Cryptococcus. Pharmacology: Pharmacokinetics Following oral dosing, fluconazole is almost completely absorbed within two hours. Bioavailability is not significantly affected by the absence of stomach acid. Concentrations measured in the urine, tears, and skin are approximately 10 times the plasma concentration, whereas saliva, sputum, and vaginal fluid concentrations are approximately equal to the plasma concentration, following a standard dose range of between 100 mg and 400 mg per day. The elimination half-life of fluconazole follows zero order, and only 10% of elimination is due to metabolism, the remainder being excreted in urine and sweat. Patients with impaired renal function will be at risk of overdose. In a bulk powder form, it appears as a white crystalline powder, and it is very slightly soluble in water and soluble in alcohol. History: Fluconazole was patented by Pfizer in 1981 in the United Kingdom and came into commercial use in 1988. Patent expirations occurred in 2004 and 2005.
**Rhetorical reason** Rhetorical reason: Rhetorical reason is the faculty of discovering the crux of the matter. It is a characteristic of rhetorical invention (inventio) and it precedes argumentation. Aristotle's definition: Aristotle's definition of rhetoric, "the faculty of observing, in any given case, the available means of persuasion", presupposes a distinction between an art (τέχνη, techne) of speech-making and a cognitively prior faculty of discovery. That is so because, before one argues a case, one must discover what is at issue. How, for example, does one discover available means of persuasion? One does not simply frolic through fertile fields of τόποι (topoi), randomly gathering materials with which to build lines of argument. There is a method endemic to rhetoric which guides the search for those lines of argument that speak most directly to the issue at stake. George A. Kennedy explains the distinction when he writes that "the work of rhetoric, in Aristotle's view, is 'to discover [θεωρησαι (theoresai)] the available means of persuasion' (1.1.1355b25–6). It is thus a theoretical activity and discovers knowledge. This knowledge, which includes words, arguments, and topics, is then used by the orator as the material cause of a speech. There is thus a theoretical art of rhetoric standing behind or above the productive art of speech-making." (1980, p. 63) Inventio (rhetorical invention), then, involves more than a techne; it is also a faculty of discovery (dunamis (δύναμις) to theoresai). Aristotle's definition: The Aristotelian approach to inventio further assumes that reasoning employed in decision-making is a kind of probable reasoning. It assumes that, although the contingencies of nature and of individuals prevent our obtaining certainty about future political and social affairs, we still can use our reason to discover the best course to pursue. Such reasoning applied to human affairs to make decisions about what should be done is rhetorical reasoning issuing in praxis (Moss 1986, pp. 2–3). Moral inquiry: Judgments about what should be done in the future are generally matters of shared inquiry and are always contingent (based on probability). Shared inquiry, following Wayne C. Booth, can be understood as "the art of reasoning together about shared concerns" (1988, p. 108). It is shared because the judgment is discursively negotiated with reference both to the crux of the matter and to what is in the best interest of oneself or some other. Moral inquiry: Accounting for both Moss and Booth, rhetorical reason may be conceptualized as a method of "shared moral inquiry", but with a special meaning of the word "moral". Moral inquiry, within the present context, means inquiry into practical matters (as opposed to mere speculation or scientific inquiry). Hans-Georg Gadamer uses "moral" in this sense in Truth and Method (p. 314). Albert R. Jonsen and Stephen Toulmin write that "moral knowledge is essentially particular" (1988, p. 330). Shared moral inquiry is moral, not because it involves questions of morality, but because it attempts to determine what is the right thing to do in contingent cases, where such judgments are not made deterministically. Moral inquiry is conducted in the contingent realm, and is concerned with the particular case. Moral inquiry: Understood with precision, then, rhetorical reason guides and φρόνησις (phronesis) drives moral inquiry.
The aim of moral inquiry is sound moral judgment, but judgment in hard cases is frustrated because the crux of the matter is hedged in by a potentially limitless parade of particulars. Moral inquiry: Rhetorical reason manages particulars by systematically determining the relevance of issues and identifying the στάσις (stasis), the most relevant of the relevant issues. Ascribing relevance is an act of phronesis (Tallmon, 2001 & 1995a, b). Hence, rhetorical reason is a modality of phronesis and also, as Aristotle famously notes, a counterpart of dialectic. That is, it depends upon practical wisdom for its proper work, and, in that work, it operates much like dialectical inference, only its proper domain is the particular case as opposed to the general question. Moral inquiry: Hence, viewed as a guide to resolving tough cases, rhetorical reason is constituted by: topical logic (which guides inquiry by managing particulars); stasis (which guides inquiry toward the crux of the matter); sensitivity to maxims (which signal when the inquiry has taken a turn away from the instant case); and dialectical inference (which helps clarify the issue at stake). The entire enterprise is driven by phronesis. Individuals exercise rhetorical reason, but its excellence is realized in the public arena (i.e., in shared inquiry, by referencing pooled wisdom).
**Motocross Maniacs** Motocross Maniacs: Motocross Maniacs (モトクロス マニアックス) is a platform racing video game released in 1989 by Konami, for the Game Boy handheld. Summary: The player controls a motorcycle moving one way horizontally, much like Nintendo's Excitebike for the Nintendo Entertainment System. When a level has been completed within its time limit, the player starts on the next. Summary: With only eight different levels and fairly simple gameplay, the game's complexity is not too different from other Game Boy games released around the introduction of the system. Despite this, Motocross Maniacs requires considerable skill to master. Additionally, the game provides replay value by letting the player beat previous time records, which are announced at the completion of a level. The game's cartridge, however, does not retain these records after the Game Boy is turned off. Summary: Two sequels were released, Motocross Maniacs 2 for the Game Boy Color and Motocross Maniacs Advance for the Game Boy Advance. Motocross Maniacs 2 is largely similar to the original, aside from the addition of a championship mode where the player competes against computer-controlled racers. Motocross Maniacs Advance, however, did introduce major changes and more features, including revamped graphics, selectable characters, and newly designed 2.5D levels. Gameplay: The player's control over the motorcycle allows moving the front wheel upwards or downwards, allowing for jumps over obstacles and rotating stunts when in the air. If the player lands incorrectly, the rider will fall off the bike and climb back on it; this is a main cause of time loss in the game. The bike is provided with only limited fuel (represented as a "TIME" bar), and the player must complete the course before running out of fuel. The game ends when the time limit is depleted. The bike also has the ability to use nitrous oxide for a short speed boost. This allows for large jumps and can be necessary for making loops. Gameplay: Nitrous power-ups can be obtained within a level (marked as a large "N"), but often they will require one to be used to get them in the first place. Thus, when players have depleted their supply, it can be very hard or even impossible to regain more boosts. Furthermore, some levels have sections that require one or more boosts to pass, making it nearly impossible to complete the level without any. Gameplay: Other power-ups in the game include extra fuel (marked as "T"), increased speed (marked as "S", which continues until the player crashes) and enhanced traction (marked as "R", which also continues until the player crashes). The player can also perform combos by making frontflips and backflips, and consecutive combos grant the player a flying booster (represented by "JET") that allows the bike to jump up high if several boosts are used. Many levels offer multiple directions for the player. One usually contains extra power-ups and requires the use of boosts and agility to successfully traverse, while the other simply requires jumping obstacles on the ground, often including loose sand and gravel that will slow down the motorcycle. Modes There are eight different levels, which become harder to complete as they progress. Any level can be selected from the beginning of the game, together with an A, B or C difficulty level which determines the time available to complete the level. The game has three different modes of play: Single player: One player completes level after level until their time runs out.
Single player versus computer: Same as above, but also requires the player to beat the computer. Single player versus second player: Same as above, but with another human player.For the two versus modes, the second player appears as a silhouette behind the player, similar to time trials in other racing games. Reception: In 2019, PC Magazine included MotoCross Maniacs in their "The 10 Best Game Boy Games" writing: "Simple yet deep gameplay will keep you coming back for more in this timeless motorsports title."
**Split networks** Split networks: For a given set of taxa X and a set of splits S on X, usually together with a non-negative weighting (which may represent character changes, distances, or a more abstract quantity), if the set of splits S is compatible, then it can be represented by an unrooted phylogenetic tree in which each edge corresponds to exactly one of the splits. More generally, S can always be represented by a split network, which is an unrooted phylogenetic network with the property that every split s in S is represented by an array of parallel edges in the network. Split networks: A split network N can be obtained from a number of different types of data: split networks from distances, split networks from trees, split networks from sequences, and split networks from quartets.
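The compatibility condition mentioned above has a standard set-theoretic formulation: two splits A|B and C|D of X are compatible if and only if at least one of the four intersections A∩C, A∩D, B∩C, B∩D is empty. A minimal sketch of this pairwise test:

```python
# Sketch of the standard pairwise compatibility test for splits:
# A|B and C|D are compatible iff at least one of the four intersections is empty.
def compatible(side1, side2, taxa):
    a, c = set(side1), set(side2)
    b, d = taxa - a, taxa - c
    return any(not (x & y) for x in (a, b) for y in (c, d))

X = {"t1", "t2", "t3", "t4"}
print(compatible({"t1", "t2"}, {"t1"}, X))        # True: representable on one tree
print(compatible({"t1", "t2"}, {"t2", "t3"}, X))  # False: requires a split network
```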
**Egophony** Egophony: Egophony (British English, aegophony) is an increased resonance of voice sounds heard when auscultating the lungs, often caused by lung consolidation and fibrosis. It is due to enhanced transmission of high-frequency sound across fluid, such as in abnormal lung tissue, with lower frequencies filtered out. It results in a high-pitched nasal or bleating quality in the affected person's voice. Technique: While listening to the lungs with a stethoscope, the patient is asked to pronounce the modern English (more generally, post-Great Vowel Shift) long-E vowel sound. Interpretation: Stethoscopic auscultation of a clear lung field during this articulation will detect a sound matching that received through normal hearing; that is, the sound articulated by the patient will be clearly transmitted through the lung field and heard unchanged by the clinician. When the lung field is consolidated (filled with liquid or other solid mass such as tumor or fungus ball), the patient's spoken English long E will sound like a "pure-voweled" long E or a modern English long A without the latter's usual offglide. This effect occurs because the solid mass in the lung field disproportionately dampens the articulated sound's acoustic overtones higher in the harmonic series, transmuting the English long E, in which higher overtones predominate strongly, to a sound (the English long A) in which higher overtones predominate only slightly, i.e., to a markedly lesser degree than in the former sound. This finding is referred to in clinical contexts as the "E to A transition." If associated with fever, shortness of breath, and cough, this E to A transition indicates pneumonia. Related terms and techniques: Somewhat related, bronchophony, a form of pectoriloquy, is a conventional respiratory examination whereby the clinician auscultates the chest while asking the patient to repeat the word "ninety-nine". Better phrases in English include "toy boat", "Scooby Doo", and "blue balloons". In the UK there is regional variation: clinicians from Edinburgh to Glasgow use the phrase "one-one-one" due to its more rounded sound. Related terms and techniques: Similar terms are bronchophony and whispered pectoriloquy. The mechanism is the same; that is, fluid or consolidation causes the sound of the voice to be transmitted loudly to the periphery of the lungs, where it is usually not heard. Causes: Pleural effusion (though egophony is only heard above the level of the effusion in an upright patient); pneumonia (lung consolidation); fibrosis. Etymology: Egophony comes from the Greek word for "goat" (αἴξ aix, aig-), in reference to the bleating quality of the sound.
**Programming idiom** Programming idiom: In computer programming, a programming idiom or code idiom is a group of code fragments sharing an equivalent semantic role, which recurs frequently across software projects, often expressing a special feature of a recurring construct in one or more programming languages or libraries. This definition is rooted in the definition of "idiom" as used in the field of linguistics. Developers recognize programming idioms by associating meaning (a semantic role) with one or more syntactical expressions within code snippets (code fragments). The idiom can be seen as an action on a programming concept underlying a pattern in code, which is represented in implementation by contiguous or scattered code fragments. These fragments are available in several programming languages, frameworks or even libraries. Generally speaking, a programming idiom's semantic role is a natural language expression of a simple task, algorithm, or data structure that is not a built-in feature in the programming language being used, or, conversely, the use of an unusual or notable feature that is built into a programming language. Programming idiom: Knowing the idioms associated with a programming language and how to use them is an important part of gaining fluency in that language. It also helps to transfer knowledge in the form of analogies from one language or framework to another. Such idiomatic knowledge is widely used in crowdsourced repositories to help developers overcome programming barriers. Mapping code idioms to idiosyncrasies can be a helpful way to navigate the tradeoffs between generalization and specificity. By identifying common patterns and idioms, developers can create mental models and schemata that help them quickly understand and navigate new code. Furthermore, by mapping these idioms to idiosyncrasies and specific use cases, developers can ensure that they are applying the correct approach and not overgeneralizing it. One way to do this is by creating a reference or documentation that maps common idioms to specific use cases, highlighting where they may need to be adapted or modified to fit a particular project or development team. This can help ensure that developers are working with a shared understanding of best practices and can make informed decisions about when to use established idioms and when to adapt them to fit their specific needs. Programming idiom: A common misconception is to use the adverbial or adjectival form of the term to mean using a programming language in a typical way, which really refers to an idiosyncrasy. An idiom implies that the semantics of some code in a programming language have similarities to other languages or frameworks. For example, an idiosyncratic way to manage dynamic memory in C would be to use the C standard library functions malloc and free, whereas idiomatic refers to manual memory management as a recurring semantic role that can be achieved with code fragments such as malloc in C or pointer = new type[number_of_elements] in C++ (both sketched below). In both cases, the semantics of the code are intelligible to developers familiar with C or C++, once the idiomatic or idiosyncratic rationale is exposed to them. However, while idiomatic rationale is often general to the programming domain, idiosyncratic rationale is frequently tied to specific API terminology.
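For reference, the two memory-management fragments quoted above look as follows in context. These are minimal sketches; buffer sizes and variable names are illustrative.

```c
/* C: dynamic memory management via the standard library. */
#include <stdlib.h>

int main(void) {
    int *buffer = malloc(10 * sizeof *buffer);  /* acquire */
    if (buffer == NULL) return 1;
    free(buffer);                               /* release */
    return 0;
}
```

```cpp
// C++: the same semantic role, achieved with new[] / delete[].
int main() {
    int *buffer = new int[10];  // acquire
    delete[] buffer;            // release
    return 0;
}
```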
Examples of simple idioms: Printing Hello World One of the most common starting points to learn to program, or to notice the syntax differences between a known language and a new one. It has several implementations, among them the code fragments for C++ and Java shown below. Inserting an element in an array This idiom helps developers understand how to manipulate collections in a given language, particularly inserting an element x at a position i in a list s and moving the elements to its right. Code fragments for Python, JavaScript, and Perl are shown below.
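The original code fragments did not survive extraction; the following are standard reconstructions of what such fragments conventionally look like.

```cpp
// C++: printing Hello World
#include <iostream>

int main() {
    std::cout << "Hello, World!" << std::endl;
    return 0;
}
```

```java
// Java: printing Hello World
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
```

```python
# Python: list.insert shifts elements at and after index i to the right.
s = [1, 2, 4]
s.insert(2, 3)   # insert 3 at position 2
print(s)         # [1, 2, 3, 4]
```

```javascript
// JavaScript: Array.prototype.splice inserts without removing.
let s = [1, 2, 4];
s.splice(2, 0, 3);   // at index 2, delete 0 elements, insert 3
console.log(s);      // [1, 2, 3, 4]
```

```perl
# Perl: splice inserts into an array without removing elements.
my @s = (1, 2, 4);
splice(@s, 2, 0, 3);   # at index 2, delete 0 elements, insert 3
print "@s\n";          # 1 2 3 4
```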
**Metallosis** Metallosis: Metallosis is the medical condition involving the deposition and build-up of metal debris in the soft tissues of the body. Metallosis has been known to occur when metallic components in medical implants, specifically joint replacements, abrade against one another. Metallosis has also been observed in some patients who are sensitive to the implant, or for unknown reasons even in the absence of a malpositioned prosthesis. Though rare, metallosis has been observed at an estimated incidence of 5% of metal joint implant patients over the last 40 years. Women may be at slightly higher risk than men. If metallosis occurs, it may involve the hip and knee joints, the shoulder, wrist, elbow joints, or spine. In the spine, the wear debris and resulting inflammatory reaction may result in a mass often referred to as a "metalloma" in the medical literature, which may lead to neurological impairment over time. A similar condition has also been described when titanium dental implant degradation occurs, leading to inflammatory titanium particle-mediated peri-implantitis. Titanium particles in the peri-implant tissues do not occur via functional abrasion but are thought to result from damaging hygiene procedures or from complex electrochemical interactions caused by oral bacteria. The abrasion of metal components may cause metal ions to be solubilized. The hypothesis that the immune system identifies the metal ions as foreign bodies and inflames the area around the debris may be incorrect, because the small size of metal ions may prevent them from becoming haptens. Poisoning from metallosis is rare, but cobaltism is an established health concern. The involvement of the immune system in this putative condition has also been theorized but has never been proven. Purported symptoms of metallosis generally include pain around the site of the implant, pseudotumors (a mass of inflamed cells that resembles a tumor but is actually collected fluid), and a noticeable rash that indicates necrosis. The damaged and inflamed tissue can also contribute to loosening the implant or medical device. Metallosis can cause dislocation of non-cemented implants as the healthy tissue that would normally hold the implant in place is weakened or destroyed. Metallosis has been demonstrated to cause osteolysis. Women, those who are small in stature, and the obese are at greater risk for metallosis because their body structure causes more tension on the implant, quickening the abrasion of the metal components and the subsequent build-up of metallic debris. Physical effects and symptoms: Persons suffering from metallosis can experience any of the following symptoms: Extreme pain (even when not moving); Swelling and inflammation; Loosening of the implant; Joint dislocation; Bone deterioration; Aseptic fibrosis, local necrosis; Hip replacement failure; Metal toxicity from grinding metal components; and Necessary subsequent hip replacement revision or surgeries. Physical effects and symptoms: Complications As the grinding components cause metal flakes to shed from the system, the implant wears down. Metallosis results in numerous additional side effects: Confusion; Feelings of malaise; Gastrointestinal problems; Dizziness; Headaches; Problems in the nervous system (feelings of burning, tingling, or numbness of the extremities); and Cobalt poisoning (skin rashes, cardiomyopathy, problems with hearing, sight or cognition, tremors, and hypothyroidism).
DePuy hip replacement recall: In August 2010, DePuy recalled its ASR XL Acetabular Hip Replacement System and ASR Hip Resurfacing System due to high failure rates and side effects including metallosis. The recalls triggered a large number of lawsuits against DePuy and its parent company Johnson & Johnson upon claims that the companies knew about the dangers of the implants before they went on the market in the United States.
**Sports engineering** Sports engineering: Sports engineering is a sub-discipline of engineering that applies math and science to develop technology, equipment, and other resources as they pertain to sport. The roots of sports engineering can be traced to Isaac Newton's observations of a tennis ball in flight. In the mid-twentieth century, Howard Head became one of the first engineers to apply engineering principles to improve sports equipment. Starting in 1999, the biannual international conference for sports engineering was established to commemorate achievements in the field. Presently, the journal "Sports Engineering" details the innovations and research projects that sports engineers are working on. The study of sports engineering requires an understanding of a variety of engineering topics including physics, mechanical engineering, materials science, and biomechanics. Many practitioners hold degrees in those topics rather than in sports engineering specifically. Specific study programs in sports engineering and technology are becoming more common at the graduate level, and also at the undergraduate level in Europe. Sports engineers also employ computational engineering tools like computer-aided design (CAD), computational fluid dynamics (CFD), and finite element analysis (FEA) to design and produce sports equipment, sportswear, and more. History: One of the earliest instances of the application of scientific principles in a sports context occurred in 1671, when English mathematician Isaac Newton wrote a letter to German theologian and natural philosopher Henry Oldenburg regarding a tennis ball's flight mechanics. In the following centuries, German scientist Heinrich Gustav Magnus further examined Newton's analysis and applied Newtonian theories to the spinning properties of balls. Around 1760, in the midst of the Industrial Revolution, sports engineering was further explored as the manufacturing of sports equipment accelerated. During this stage, manufacturers recognized that increases in sales were directly related to better quality of equipment. As a result, experimentation began to explore new designs and materials for enhanced athletic performance. In modern times, sports engineers, such as Howard Head, applied engineering principles to sports equipment. After finding traditional snow skis to be too heavy, Head developed lighter, more flexible skis in 1947. He used his knowledge from the aircraft industry to create skis with a metal-sandwich construction. After 40 iterations and 3 years, he released his skis commercially, and they soon set the standard for skis. Today, his skis are widely known and recognized under the brand Head, with Head Sportswear International and the Head Ski Company. Head also developed the Prince Classic tennis racquet. He created a much lighter design, with a bigger frame supporting off-center hits, and a grip that did not twist in players' hands. As with his skis, Head's oversized racquets were embraced by top athletes in the sport. In 1998, the International Sports Engineering Association (ISEA) was established and the journal "Sports Engineering" was published. In 1999, the first international sports engineering conference, "The International Conference on the Engineering of Sports", was organized by Steve Haake in Sheffield, England. The conference brings world-leading researchers, sports professionals, and industry organizations together to celebrate the profession, showcasing innovations in both research and industry.
Education: Sports engineering in the United States is often taught as part of universities' undergraduate mechanical engineering programs, rather than as stand-alone bachelor's degree programs. At the graduate level, research labs often take an interdisciplinary approach to sports engineering, as in the MIT Sports Lab and the Biosports Lab at UC Davis. Some graduate opportunities, like the program offered through Purdue, include concentrations in sports engineering within the mechanical engineering or materials engineering department. Most sports engineering students pursue bachelor's degrees in other areas within engineering, including mechanical, electrical, and materials engineering; there is no uniform educational path for becoming a sports engineer. Education: Although universities in the United States offer sports engineering courses or concentrations, more extensive degree programs in the subject are more common in the United Kingdom, at both the undergraduate and graduate levels. Loughborough University offers a one-year, full-time sports engineering postgraduate program. Nottingham Trent University offers a three-year, full-time undergraduate program that is based on industry-oriented seminars and activities as well as on-campus research experiences like the Sports Engineering lab. Education: Curriculum Course offerings in sports engineering synthesize content from both engineering and sports science. Programs in sports engineering encompass engineering-oriented classes such as physics, aerodynamics, and materials science, as well as more sports science-based courses such as biomechanics and anatomy. Education: Computational modeling Computational modeling is commonly employed across many engineering disciplines and is often applied to sports. Computational fluid dynamics (CFD) can be used in sports engineering education to model flow in both air and water systems. Sports engineers can use computational modeling systems to analyze the behavior of an object without having to physically produce it. For example, CFD has been used to predict fluid patterns around a skier jumping through the air or a swimmer moving through the water, in order to reduce the drag acting on the athlete. FEA, or finite element analysis, is another engineering modeling tool that applies to the field of sports engineering, simulating the physics of applied forces acting in a system. For example, FEA can be used to analyze the impact of a ball against a tennis racket or the deformation resulting from the impact of a football. Education: Study programs in sports engineering Undergraduate and graduate level programs in sports engineering are more common in Europe as opposed to the United States. The list below highlights offerings currently available in the field of sports engineering.
Aalborg University (Denmark) Centre for Sports Engineering Research (CSER) - Sheffield Hallam University (UK) Deutsche Sporthochschule Köln (Germany) Griffith University (Australia) Islamic Azad University, Science and Research Branch (Iran) (undergraduate & postgraduate student) Loughborough University (UK) Massachusetts Institute of Technology Mittuniversitetet (Sweden) Purdue University TU Chemnitz (Germany) (undergraduate) TU Chemnitz (Germany) (graduate) TU Delft (Netherlands) University of Adelaide (Australia) University of Applied Sciences Technikum Wien (Austria)(undergraduate) University of Applied Sciences Technikum Wien (Austria)(graduate) University of California Davis University of Debrecen, Faculty of Engineering (Hungary) University of Otago (New Zealand) Applications and research: Sports engineering has a variety of applications across the sports industry. Some examples of these applications and related technologies are listed below. Applications and research: Sports equipment Computer-aided design (CAD) and finite element analysis (FEA) can be used to design and test sports equipment. Engineers can use FEA to apply different stresses to an object and determine its strengths and weaknesses. For example, FEA can be used to model a tennis racket hitting the ball, including how the racket and ball might deform or vibrate as a result of the strike. Computational Fluid Dynamics (CFD) can be applied to sports such as cycling to examine the aerodynamics of cycles and riders' body positions. This information is useful in understanding how to increase cycling speeds and decrease exertion for riders. Applications and research: Sportswear One notable example of how engineering intersects with sportswear is Speedo’s LZR Racer, a swimsuit made in collaboration with NASA researchers and engineers. Sports engineers tested different materials and coatings in a wind tunnel to determine how to reduce drag. Engineers also optimized stability and mobility by using layering and welding techniques specific to particular body parts. For instance, the abdomen and lower back areas of the suit were made tighter to improve core stability. The LZR Racer was able to reduce skin friction drag by 24% compared to Speedo’s previously most advanced suit. These engineering applications helped swimmers who wore Speedo’s LZR Racer to set 93 world records. Related disciplines: Materials science, mechanical engineering, sports science, sports medicine, biomechanics, and physics are some fields that overlap with sports engineering.
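As a back-of-envelope illustration of the aerodynamic estimates behind work like the cycling and swimsuit examples above, the standard drag equation F_d = ½ρv²C_dA can be evaluated directly. All numbers below are illustrative placeholders, not measured values for any particular athlete or product.

```python
# Illustrative drag estimate using the standard drag equation:
# F_d = 0.5 * rho * v^2 * C_d * A. All values are placeholders, not measurements.
rho = 1.225    # air density at sea level, kg/m^3
v = 13.9       # rider speed, m/s (~50 km/h)
cd = 0.88      # assumed drag coefficient for rider plus bike
area = 0.36    # assumed frontal area in an aero tuck, m^2

drag_force = 0.5 * rho * v**2 * cd * area
power = drag_force * v  # power needed just to overcome drag, in watts
print(f"drag: {drag_force:.1f} N, power: {power:.0f} W")
```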
**Ambisonic reproduction systems** Ambisonic reproduction systems: The design of speaker systems for Ambisonic playback is governed by several constraints: the desired spatial operating range (horizontal-only, hemispherical, full-sphere), the predominant resolution (= Ambisonic order) of the expected program material, the desired localisation performance and size of listening area versus the available number of speakers and amplification channels, and the theoretically optimal distribution of speakers versus the actually available placement and/or rigging options. This page attempts to discuss the interaction of these constraints and their various trade-offs in theory and practice, as well as perceptual advantages or drawbacks of specific speaker layouts which have been observed in actual deployments. General considerations: Near-field effect In its original formulation, Ambisonics assumed plane-wave sources for reproduction, which implies speakers that are infinitely far away. This assumption will lead to a pronounced bass boost for speaker rigs of small diameter, which increases with Ambisonic order. The cause is the very same proximity effect that occurs with directional microphones. Therefore, appropriate near-field compensation (bass equalisation) is beneficial. General considerations: Speaker distance vs. angles This same plane-wave assumption makes it possible to vary the distance of speakers within reasonable limits without upsetting the correct function of the decoder, provided that the difference is compensated with delay, the power is adjusted for uniform loudness at the center, and that per-speaker near-field compensation is used. Distance does not affect the decoder matrix. General considerations: Variable speaker distance is therefore the most important degree of freedom when deploying idealized layouts in actual rooms. It is constrained by the reverberation of the room, which leads to uneven direct-to-reverb ratios between speakers at different distances, and by the power handling capability of the most distant speaker. If speakers have to be moved very close, care must be taken to ensure they still cover the entire listening area with reasonably flat frequency response. General considerations: Speaker angles, on the other hand, should be adhered to as precisely as possible, unless an optimised irregular decoder can be generated in-the-field. Horizontal vs. full-sphere accuracy For horizontal-only content, horizontal systems provide more stable localisation at high frequencies than full-sphere ones, as shown by a simulation of the energy vector $\vec{r}_E$. Therefore, if occasional horizontal-only reproduction at the highest precision is desired, full-sphere layouts with a dense horizontal ring are preferable. General considerations: Phasing Since multiple speakers will inevitably radiate very highly correlated content, a moving listener may experience a phasing effect that affects the perceived timbre and can upset localisation. Phasing artefacts are most prominent in dry rooms on very precisely calibrated systems. They can be reduced by adding height speakers, which tend to smoothen the effect, or tuned to a subjective minimum by introducing staggered delays to the speakers, with the understanding that this may adversely affect low-frequency localisation if overdone.
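The delay and gain compensation described under "Speaker distance vs. angles" above can be sketched directly. This is a minimal sketch assuming simple point sources, a 1/r amplitude law, and c ≈ 343 m/s; the per-speaker near-field (bass) compensation mentioned above is a separate filter and is not shown.

```python
# Sketch of per-speaker delay/gain alignment for unequal speaker distances.
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def align(distances_m):
    """Delay and attenuate nearer speakers so all arrivals match the farthest one."""
    d_max = max(distances_m)
    settings = []
    for d in distances_m:
        delay_ms = (d_max - d) / SPEED_OF_SOUND * 1000.0  # nearer speakers wait longer
        gain_db = 20.0 * math.log10(d / d_max)            # 1/r amplitude law, <= 0 dB
        settings.append((round(delay_ms, 2), round(gain_db, 2)))
    return settings

# Four speakers at mixed distances (metres):
print(align([2.0, 2.5, 3.0, 2.2]))
```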
General considerations: Phasing problems usually become evident in walk-around environments, and are of less concern for a seated audience, unless the interference pattern is so dense that it is perceived with small head movements. General considerations: Loudspeaker occlusion For multi-listener environments and auditoria, the occlusion of speakers by other listeners must not be underestimated. Generally, the higher the order and the more physically accurate the reproduction, the more robust it is, up to the point where occlusion produces realistic effects that are consistent with the affected listener's visual perception. For low-order systems, however, reconstruction can easily fail entirely when the line of sight to speakers is blocked, which has led to odd seating arrangements in listening tests. With-height systems usually provide more unhindered lines of sight per direction for a given audience, which might increase their robustness. General considerations: Number of loudspeakers vs. source material resolution Solvang and others have shown that using many more than the minimally required number of speakers can be detrimental. The reason is simple: more speakers at constant angular resolution means higher crosstalk and thus higher correlation between speakers. If not managed, this leads to stronger comb filtering and phasing artefacts when the listener moves. General considerations: Therefore, with some decoding techniques, it may be advisable to consider if and how a reasonably regular lower-order decoder that omits some speakers can be fitted into any higher-order system design. For example, the third-order octagon allows for a perfectly regular first-order square using only every other speaker. Horizontal-only systems: Horizontal-only playback rigs are the most commonly deployed and most extensively researched Ambisonic speaker systems, because they constitute an economical next step after conventional stereo. They can reproduce full-sphere content, but elevated sources will be projected onto the horizontal plane, and sources at zenith and nadir will be reproduced in mono by all available speakers. Horizontal-only systems: The literature is rife with horizontal decoders based on the simpler cylindrical harmonics, which do not depend on the elevation angle φ. Their use is discouraged, because they wrongly assume cylindrical waves, which would require perfect line sources for reproduction. Actual speakers are point sources and will inevitably leak energy along the vertical axis, which has consequences for near-field compensation and the tuning of dual-band decoders. Hence, cylindrical decoders do not usually fulfil the Ambisonic criteria. Horizontal-only systems: Triangle The theoretical minimum number of speakers for horizontal playback is 2ℓ + 1, the number of Ambisonic components at order ℓ. However, the triangle demonstrates that at least one more speaker is necessary for proper soundfield reconstruction, since it exhibits an extreme detent effect: when panned around, sounds will stick to speaker locations and then jump across to the next speaker, rather than moving uniformly. As a consequence, the directions of the velocity and energy vectors rV and rE do not match between speakers, which causes localisation errors. Hence, the triangle is a suitable setup for Ambisonic reproduction only at low frequencies.
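Two of the points above lend themselves to small worked sketches in Python; both are illustrative assumptions on my part rather than material from the Ambisonic literature cited here. The first implements the distance compensation described under "Speaker distance vs. angles": delays are aligned to the farthest speaker and gains are levelled at the center, assuming simple point-source (1/r) spreading.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees Celsius

def distance_compensation(distances_m):
    """Per-speaker (delay in ms, gain in dB) aligning arrival time and loudness
    at the center of the rig; nearer speakers are delayed and attenuated."""
    d_max = max(distances_m)
    comp = []
    for d in distances_m:
        delay_ms = (d_max - d) / SPEED_OF_SOUND * 1000.0
        gain_db = 20.0 * math.log10(d / d_max)  # 1/r point-source assumption
        comp.append((round(delay_ms, 2), round(gain_db, 2)))
    return comp

# A nominally circular rig squeezed into a room, 2.0 m to 3.5 m from the center:
print(distance_compensation([2.0, 3.5, 3.0, 2.5]))
```

The second is a toy first-order mode-matching decoder for a regular horizontal polygon, with a check of the velocity vector rV. It assumes a convention in which the W channel is unscaled (classic B-format applies a 1/sqrt(2) gain to W, which would change the first column of the matrix); it is a sketch for intuition, not a production decoder.

```python
import numpy as np

def polygon_decoder(n_speakers: int, offset_deg: float = 0.0) -> np.ndarray:
    """(n_speakers x 3) matrix mapping [W, X, Y] to speaker gains."""
    az = np.deg2rad(offset_deg) + 2.0 * np.pi * np.arange(n_speakers) / n_speakers
    Y = np.column_stack([np.ones(n_speakers), np.cos(az), np.sin(az)])
    # Pseudo-inverse of the 3 x N encoding matrix; exact for regular polygons.
    return Y @ np.diag([1.0 / n_speakers, 2.0 / n_speakers, 2.0 / n_speakers])

def velocity_vector(gains: np.ndarray, az: np.ndarray) -> np.ndarray:
    """Makita velocity vector rV re-encoded from the speaker gains."""
    return np.array([gains @ np.cos(az), gains @ np.sin(az)]) / gains.sum()

n, offset = 4, 45.0                                   # square layout
az = np.deg2rad(offset) + 2.0 * np.pi * np.arange(n) / n
b = np.array([1.0, np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])  # source at 30 deg
g = polygon_decoder(n, offset) @ b
print(velocity_vector(g, az))   # ~[cos 30, sin 30]: |rV| = 1 for the regular layout
```

For the square, the re-encoded rV points exactly at the panned direction; for irregular layouts such as ITU 5.1, no such closed-form matrix exists, which is one reason optimised decoders are obtained numerically, as discussed below.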
Horizontal-only systems: Square or rectangular setups Four-speaker setups are the most economical way of reproducing first-order horizontal material, and a rectangular layout is most easily fitted into a living room, which makes these setups the most common in domestic environments. With rectangles, there is a localisation performance trade-off: sources on the short sides will localise more stably than with a square, and sources on the long sides worse. Consequently, for predominantly frontal sound stages, Benjamin, Lee, and Heller (2008) have observed a preference for rectangular layouts over squares. All legacy domestic hardware decoders supported rectangular layouts, usually with variable aspect ratios. Horizontal-only systems: ITU 5.1 It is tempting to consider 5.1 systems for Ambisonic playback due to their wide availability, but the ITU-R BS775 layout is quite hostile to Ambisonics due to its extreme irregularity. The three front speakers are so close together (-30°, 0°, +30°) that they will exhibit significant crosstalk at first order, which causes irritating phasing artefacts without any benefit. Therefore, it is advisable to omit the center speaker and decode only for L, R, Ls and Rs, as has been done in all pre-decoded G-format releases for 5.1. These G-format discs also assume a rectangular layout. If first-order playback is desired, the rear speakers should be moved accordingly; otherwise the Ambisonic imaging will be very unstable due to the wide angle between the surround speakers. Horizontal-only systems: Decoding approaches to 5.1 were first suggested by Gerzon and Barton in 1992 and subsequently patented. Adriaensen provides a free second-order decoder obtained by genetic search, and Wiggins (2007) has shown that source material as high as fourth order can be beneficial in order to 'steer' the decoding functions, even though the system is unable to reproduce the full spatial resolution. Second and third-order material can be played satisfactorily over the ITU 5.1 layout, but due to the problems with first-order reproduction, it should not be considered for Ambisonics except as a compromise when 5.1 content predominates. Horizontal-only systems: Hexagon If six speakers and sufficient space are available, the hexagon is a very good option that has outperformed four-channel setups for first-order reproduction in listening tests and is capable of second-order reproduction. It can be driven by an inexpensive 5.1 sound card and a domestic 5.1 amplifier, provided the LFE output is full-range. Horizontal-only systems: When used with one speaker in front, the hexagon can be repurposed for native 5.1 playback at the expense of a significantly wider and more blurry stereo stage (120° as opposed to 60° between L and R as per ITU-R BS775). Alternatively, reasonably sharp virtual speakers at the canonical ITU locations can be created with second-order panners; this is an interesting option if a phantom center is tolerable, and it will also work with a two-in-front orientation, which leaves more room for a TV or projection screen. Horizontal-only systems: Octagon The octagon is a flexible choice for up to third-order playback. When oriented one-in-front, it can be used for reasonably accurate native 5.1 playback (L and R at ±45° instead of ±30°, and surrounds within the standardized sector at ±112.5°).
For first order, phasing artefacts might become obvious under non-reverberant listening conditions due to the use of significantly more speakers than required, and Solvang's results (2008) suggest slightly increased timbral defects outside the sweet spot. With eight channels, an octagon can be driven by affordable 7.1 consumer equipment, again as long as the LFE output is full-range. Horizontal-only systems: Driven at third order, it is a reasonable lower bound for concert sound reinforcement over an extended listening area, either for native Ambisonic content or to produce virtual speakers, and it has been found to scale to several hundred listeners under favourable conditions. Systems with limited height reproduction: Stacked rings Stacked rings have been a popular way of obtaining limited with-height reproduction. Spatial resolution will be weak close to the zenith and nadir, but these are somewhat rare positions for sound sources. Rings are generally easier to rig than (hemi-)spherical setups because they do not require overhead trussing, speaker stands can be shared unless the rings are twisted, and entrances, fire escape routes etc. can be more easily accommodated. Double hexagons and octagons are the most common variations. Since the introduction of #H#V mixed-order schemes by Travis (2009), stacked rings can be operated at their full horizontal resolution even for elevated sources. #H#V decoding matrices for common layouts are available from Adriaensen (2012). Triple rings are rare, but have been used to good effect. Systems with limited height reproduction: Upper hemisphere systems Since stacked rings are somewhat wasteful at higher elevations and necessarily have a hole at the zenith, they have been largely surpassed by hemispherical layouts since mature methods for decoder generation have become available. As they are difficult to rig and require overhead points, hemispheres are usually found either in permanent installations or in experimental studios, where expensive and visually intrusive trussing is not an issue. Full-sphere systems: Platonic solids: The regular Platonic solids are the only full-sphere layouts for which closed-form solutions for decoding matrices exist. Before the development and adoption of modern mathematical tools for the optimisation of irregular layouts and the generation of T-designs and Lebedev grids with higher numbers of speakers, the regular polyhedra were the only tractable options. Tetrahedron Tetrahedral speaker setups were used in the 1970s for first trials of full-sphere sound reproduction. One such experiment, conducted by the Oxford University Tape Recording Society, was documented by Michael Gerzon in 1971. In this setup, the tetrahedron was inscribed into a cuboid, using every other corner. Despite Gerzon's somewhat over-enthusiastic description (which pre-dates the introduction of Ambisonics and the proper formulation of its psychoacoustic criteria), the tetrahedron exhibits the same stability problems in 3D that plague the triangle in horizontal-only reproduction. It is a viable option for adequate full-sphere reproduction only at low frequencies. Octahedron The octahedron is difficult to set up in "upright" orientation, since the listener would occlude the floor speaker. Hence, a "slanted" setup is usually preferred. It provides basic full-sphere first-order reproduction for a single listener.
Goodwin (2009) has suggested a slanted octahedron with a separate front center (which he calls 3D7.1) as an alternative way of using 7.1 systems to achieve with-height Ambisonic reproduction in games, and to allow reasonably accurate native 5.1 playback. An OpenAL game audio backend and decoder for this setup is commercially available. Cube The most commonly encountered full-sphere systems are cubes or rectangular cuboids. The same localisation trade-offs apply as for square vs. rectangle (see above). Cuboids are easily fitted into standard rooms and provide precise localisation in first order for a single listener, plus enjoyable envelopment for one or two more, and they can be built using off-the-shelf 7.1 components. If all speakers are placed in room corners, their acoustic loading and resulting bass boost will be uniform, which means they can all be equalised in the same way. Icosahedron For the sake of consistency, we consider the vertices of the regular polyhedra as speaker positions, which makes the twelve-vertex icosahedron the next in the list. If suitable rigging options are available, it is capable of second-order full-sphere reproduction. A good and slightly more practical alternative is a horizontal hexagon complemented by two twisted triangles on floor and ceiling. Dodecahedron With twenty vertices, the dodecahedron is capable of third-order full-sphere playback. Budget dodecahedra can be built by combining four domestic 5.1 sets, as demonstrated at IRCAM's Studio 4; this would also allow for a square horizontal subwoofer decode. Irregular speaker layouts: It is possible to decode Ambisonics and Higher-Order Ambisonics onto fairly arbitrary speaker arrays, and this is a subject of ongoing research. A number of free decoding toolkits as well as a commercial implementation are available. Binaural stereo: Higher-Order Ambisonics can be decoded to produce 3D stereo headphone output similar to that produced using binaural recording. This can be done in a number of ways, including the use of virtual loudspeakers in combination with HRTF data. Other methods are possible.
**Cryptographic primitive** Cryptographic primitive: Cryptographic primitives are well-established, low-level cryptographic algorithms that are frequently used to build cryptographic protocols for computer security systems. These routines include, but are not limited to, one-way hash functions and encryption functions. Rationale: When creating cryptographic systems, designers use cryptographic primitives as their most basic building blocks. Because of this, cryptographic primitives are designed to do one very specific task in a precisely defined and highly reliable fashion. Rationale: Since cryptographic primitives are used as building blocks, they must be very reliable, i.e. perform according to their specification. For example, if an encryption routine claims to be breakable only with X computer operations, and it is broken with significantly fewer than X operations, then that cryptographic primitive has failed. If a cryptographic primitive is found to fail, almost every protocol that uses it becomes vulnerable. Since creating cryptographic routines is very hard, and testing them to be reliable takes a long time, it is essentially never sensible (nor secure) to design a new cryptographic primitive to suit the needs of a new cryptographic system. The reasons include: The designer might not be competent in the mathematical and practical considerations involved in cryptographic primitives. Rationale: Designing a new cryptographic primitive is very time-consuming and very error-prone, even for experts in the field. Rationale: Since algorithms in this field are not only required to be designed well but also need to be tested well by the cryptologist community, even if a cryptographic routine looks good from a design point of view it might still contain errors. Successfully withstanding such scrutiny gives some confidence (in fact, so far, the only confidence) that the algorithm is indeed secure enough to use; security proofs for cryptographic primitives are generally not available. Cryptographic primitives are similar in some ways to programming languages. A computer programmer rarely invents a new programming language while writing a new program; instead, they will use one of the already established programming languages to program in. Rationale: Cryptographic primitives are one of the building blocks of every crypto system, e.g., TLS, SSL, SSH, etc. Crypto-system designers, not being in a position to definitively prove their security, must take the primitives they use as secure. Choosing the best primitive available for use in a protocol usually provides the best available security. However, compositional weaknesses are possible in any crypto system, and it is the responsibility of the designer(s) to avoid them. Combining cryptographic primitives: Cryptographic primitives, on their own, are quite limited. They cannot properly be considered a cryptographic system by themselves. For instance, a bare encryption algorithm provides no authentication mechanism, nor any explicit message-integrity checking. Only when combined in security protocols can more than one security requirement be addressed. For example, to transmit a message that is not only encrypted but also protected from tampering (i.e. it is confidential and integrity-protected), an encryption routine, such as DES, and a hash routine such as SHA-1 can be used in combination. If the attacker does not know the encryption key, they cannot modify the message in a way that keeps the message digest value(s) valid.
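To make this composition concrete, here is a minimal encrypt-then-MAC sketch in Python. It substitutes modern primitives (AES-CTR and HMAC-SHA-256) for the DES and SHA-1 named above, assumes the third-party `cryptography` package for the cipher, and uses only the standard library for the MAC. It is a teaching sketch of the composition pattern, not a vetted protocol; real systems should use an established AEAD construction.

```python
# Sketch: confidentiality from a cipher primitive (AES-CTR, via the assumed
# third-party "cryptography" package) plus integrity from a hash primitive
# (HMAC-SHA-256, standard library). Illustrative only; not a vetted protocol.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)                                  # fresh per message
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def verify_and_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):              # constant-time check
        raise ValueError("message failed integrity check")  # reject tampering
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()

enc_key, mac_key = os.urandom(16), os.urandom(32)           # independent keys
blob = encrypt_then_mac(enc_key, mac_key, b"attack at dawn")
assert verify_and_decrypt(enc_key, mac_key, blob) == b"attack at dawn"
```

Note the design choice of independent keys for the two primitives: reusing one key for both encryption and authentication is exactly the kind of compositional weakness the text warns about.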
Combining cryptographic primitives: Combining cryptographic primitives to make a security protocol is itself an entire specialization. Most exploitable errors (i.e., insecurities in crypto systems) are due not to design errors in the primitives (assuming always that they were chosen with care), but to the way they are used, i.e. bad protocol design and buggy or careless implementation. Mathematical analysis of protocols is, at the time of this writing, not mature. There are some basic properties that can be verified with automated methods, such as BAN logic. There are even methods for full verification (e.g. the spi calculus), but they are extremely cumbersome and cannot be automated. Protocol design is an art requiring deep knowledge and much practice; even then, mistakes are common. An illustrative example for a real system can be seen on the OpenSSL vulnerability news page. Commonly used primitives: One-way hash function, sometimes also called a one-way compression function—compute a reduced hash value for a message (e.g., SHA-256). Symmetric key cryptography—compute a ciphertext decodable with the same key used to encode (e.g., AES). Public-key cryptography—compute a ciphertext decodable with a different key from the one used to encode (e.g., RSA). Digital signatures—confirm the author of a message. Mix network—pool communications from many users to anonymize what came from whom. Private information retrieval—get database information without the server knowing which item was requested. Commitment scheme—allows one to commit to a chosen value while keeping it hidden from others, with the ability to reveal it later. Cryptographically secure pseudorandom number generator.
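Two of the primitives listed above ship directly in Python's standard library; the module choices here are mine, for illustration only.

```python
import hashlib
import secrets

# One-way hash function: a fixed-size digest from which the message
# cannot feasibly be recovered.
digest = hashlib.sha256(b"some message").hexdigest()

# Cryptographically secure pseudorandom number generator: suitable for
# generating keys, tokens and nonces (unlike the general-purpose `random` module).
key = secrets.token_bytes(32)

print(digest, key.hex())
```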
**CST4** CST4: Cystatin-S is a protein that in humans is encoded by the CST4 gene. The cystatin superfamily encompasses proteins that contain multiple cystatin-like sequences. Some of the members are active cysteine protease inhibitors, while others have lost or perhaps never acquired this inhibitory activity. There are three inhibitory families in the superfamily: the type 1 cystatins (stefins), the type 2 cystatins, and the kininogens. The type 2 cystatin proteins are a class of cysteine proteinase inhibitors found in a variety of human fluids and secretions. The cystatin locus on chromosome 20 contains the majority of the type 2 cystatin genes and pseudogenes. This gene is located in the cystatin locus and encodes a type 2 salivary cysteine peptidase inhibitor. The protein is classified as an S-type cystatin, based on its high level of expression in saliva, tears and seminal plasma. Its specific role in these fluids is unclear, but antibacterial and antiviral activity is present, consistent with a protective function.
**Bridge (music)** Bridge (music): In music, especially Western popular music, a bridge is a contrasting section that prepares for the return of the original material section. In a piece in which the original material or melody is referred to as the "A" section, the bridge may be the third eight-bar phrase in a thirty-two-bar form (the B in AABA), or may be used more loosely in verse-chorus form, or, in a compound AABA form, used as a contrast to a full AABA section. Bridge (music): The bridge is often used to contrast with and prepare for the return of the verse and the chorus. "The b section of the popular song chorus is often called the bridge or release." Etymology: The term comes from a German word for bridge, Steg, used by the Meistersingers of the 15th to the 18th century to describe a transitional section in medieval bar form. The German term became widely known in 1920s Germany through musicologist Alfred Lorenz and his exhaustive studies of Richard Wagner's adaptations of bar form in his popular 19th-century neo-medieval operas. The term entered the English lexicon in the 1930s—translated as bridge—via composers fleeing Nazi Germany who, working in Hollywood and on Broadway, used the term to describe similar transitional sections in the American popular music they were writing. In classical music: Bridges are also common in classical music, where they are known as a specific sequence form, also called transitions. Formally called a bridge-passage, they delineate separate sections of an extended work, or smooth what would otherwise be an abrupt modulation, such as the transition between the two themes of a sonata form. In the latter context, this transition between two musical subjects is often referred to as the "transition theme"; indeed, in later Romantic symphonies such as Dvořák's New World Symphony or César Franck's Symphony in D minor, the transition theme becomes almost a third subject in itself. The latter work also provides several good examples of a short bridge to smooth a modulation. Instead of simply repeating the whole exposition in the original key, as would be done in a symphony of the classical period, Franck repeats the first subject a minor third higher in F minor. A two-bar bridge achieves this transition with Franck's characteristic combination of enharmonic and chromatic modulation. After the repeat of the first subject, another bridge of four bars leads into the transition theme in F major, the key of the true second subject. In classical music: In a fugue, a bridge is "[a] short passage at the end of the first entrance of the answer and the beginning of the second entrance of the subject. Its purpose is to modulate back to the tonic key (subject) from the answer (which is in the dominant key)." Not all fugues include a bridge. In classical music: An example of a bridge-passage that separates two sections of a more loosely organized work occurs in George Gershwin's An American in Paris. As Deems Taylor described it in the program notes for the first performance: Having safely eluded the taxis ... the American's itinerary becomes somewhat obscured. ... However, since what immediately ensues is technically known as a bridge-passage, one is reasonably justified in assuming that the Gershwin pen ... has perpetrated a musical pun and that ... our American has crossed the Seine, and is somewhere on the Left Bank.
**Live Clipboard** Live Clipboard: Live Clipboard is an extensible data format and set of UI technologies used to support copy/paste operations between web applications in browsers, and between web and desktop applications. Unlike the typical copy/paste experience in browsers, the Live Clipboard mechanism never needs to display a security dialog to the end user, thus delivering a more streamlined user experience. Live Clipboard is licensed under the Creative Commons Attribution-ShareAlike License (version 2.5). As of late 2009, an updated specification, JavaScript files and sample code were available online. DHTML technical introduction: The Live Clipboard DHTML provides copy/paste functionality for data associated with a web page using the Live Clipboard XML data format. It consists of the following components: UI elements for displaying the Live Clipboard icon; JavaScript objects representing the Live Clipboard object model; JavaScript that handles serialization and de-serialization of the Live Clipboard XML data; and JavaScript callback function registration for retrieving data for copy and pushing data for paste. It is designed to use standard JavaScript and CSS techniques to "bring the clipboard to the web" and to work in as many browsers as possible. Currently, it is verified to work in IE 8 and in Mozilla Firefox 3.5.2. The control does not depend on the installation of any client-side applications or browser plug-ins, and it never gains access to the contents of the clipboard without explicit user action. DHTML technical introduction: How it works The control positions a transparent (opacity = 0) input element in a containing div element with a background .png image of the clipboard icon. When the user gives focus to the input by left- or right-clicking it, tabbing, etc., the control script gets the data that should be copied by calling the OnGetLiveClipboardData function. This callback function is implemented by the page developer and returns an instance of LiveClipboardClass containing the data that should be copied to the clipboard. Next, the control script serializes this data to the Live Clipboard XML format, which it sets as the value of the input element and selects. DHTML technical introduction: At this point, if the user issues a "copy" command via the context menu, browser edit menu, Ctrl-C etc., the selected contents of the input are put on the clipboard. Alternately, if the user issues a "paste" command, the value of the input is replaced with the current data on the clipboard. In this case, the control script detects that the input value has changed, de-serializes the value from Live Clipboard XML format to an instance of LiveClipboardClass, and passes the object to the OnHandleLiveClipboardData function. DHTML technical introduction: The paste callback function is implemented by the page developer and responds to the pasted data as desired. Specifically, it might iterate through the present data formats, apply data in any recognized format(s) to the page, make an asynchronous call to the web server to persist state, set up a new feed subscription, etc. It is also valid to do nothing, such as when none of the formats in the pasted data are valid for the associated data. DHTML technical introduction: There are mechanisms to use keyboard events to trigger copy/paste. This is accomplished by calling the InitiateKeyboardCopyToLiveClipboard and InitiateKeyboardPasteFromLiveClipboard functions.
**Applied Radiation and Isotopes** Applied Radiation and Isotopes: Applied Radiation and Isotopes is a peer-reviewed scientific journal published by Elsevier. It was established in 1993 and its scope covers applications of ionizing radiation and radionuclides. The current editors-in-chief are Richard P. Hugtenburg (Swansea University) and Denis Bergeron (National Institute of Standards and Technology). Abstracting and indexing: Applied Radiation and Isotopes is indexed in Chemical Abstracts Service (CASSI), PubMed, and Web of Science. It has a 2020 impact factor of 1.513, according to the Journal Citation Reports. Former titles history: The history of the journal is as follows: The International Journal of Applied Radiation and Isotopes (1956-1985); International Journal of Radiation Applications and Instrumentation. Part A. Applied Radiation and Isotopes (1986-1992); Applied Radiation and Isotopes (1992–present).
**Living history** Living history: Living history is an activity that incorporates historical tools, activities and dress into an interactive presentation that seeks to give observers and participants a sense of stepping back in time. Although it does not necessarily seek to reenact a specific event in history, living history is similar to, and sometimes incorporates, historical reenactment. Living history is an educational medium used by living history museums, historic sites, heritage interpreters, schools and historical reenactment groups to educate the public or their own members in particular areas of history, such as clothing styles, pastimes and handicrafts, or to simply convey a sense of the everyday life of a certain period in history. Background: Living history's approach to gaining authenticity is less about replaying a certain event according to a planned script, as in other reenactment fields, and more about immersing players in a certain era, to catch, in Walter Benjamin's sense, the 'spiritual message expressed in every monument's and every site's own "trace" and "aura"', even in the Age of Mechanical Reproduction. An early example of the spiritual and futuristic side of living history can be found in Guido von List's book Der Wiederaufbau von Carnuntum (1900), which suggested rebuilding the Roman Carnuntum military camp in Vienna's neighborhood as a sort of amusement park (compare Westworld). List, himself a right-wing neopagan, asked his staff of landlords, waiters and rangers to be dressed in historical gear. He also asked for visitors to be re-dressed in costumes, and described rituals to signify "in-game" and "out-game" status to enhance the immersion experience. The role of the garment, for example, remains of interest today. The term 'living history' describes the performance of bringing history to life for the general public in a rather freewheeling manner. The players are less confined in their actions, but often have to stay at a certain place or building. Historical presentation includes a continuum from well-researched attempts to recreate a known historical event for educational purposes, through representations with theatrical elements, to competitive events for purposes of entertainment. The line between amateur and professional presentations at living history museums can be blurred, as can the border with live action role-playing games. Background: While professional sites routinely use museum professionals and trained interpreters to help convey the story of history to the public, some museums and historic sites employ living history groups with high standards of authenticity for the same role at special events. Such events do not necessarily include a mock battle but aim at portraying the life, and more importantly the lifestyle, of people of the period. This often includes both military and civilian impressions. Occasionally, storytelling or acting sketches take place to involve or explain the everyday life or military activity to the viewing public. More common are craft and cooking demonstrations, song and leisure activities, and lectures. Combat training or duels can also be encountered even when larger combat demonstrations are not present. Background: In the United States, on National Park Service land, NPS policy "does not allow for battle reenactments (simulated combat with opposing lines and casualties) on NPS property." There are exceptions, such as Saylors Creek and Gettysburg.
These are highly controlled, with exacting safety standards as well as exacting standards of historical accuracy. Background: In Germany, medieval reenactment is usually associated with living history and renaissance fairs and festivals, which are found in many cities. Examples include the Peter and Paul festival in Bretten, the Landshut Wedding and the Schloss Kaltenberg knights tournament. The majority of combat reenactment groups are battlefield reenactment groups, some of which have become isolated due to a strong focus on authenticity. Background: Events with the professional reenactment group Ulfhednar led to a controversy in German archaeology. The German-Polish living history group was supported by large museums and scholars, and since 2000 has largely shaped the image of early history in Germany and worldwide. Among others, a paper with the programmatic title Under the crocheted Swastika, Germanic Living History and right-wing affects started the dispute in 2009. The communist East German authorities, on the other hand, had problems accepting "Indianistic" living history reenactors, a variety widespread in Eastern Germany whose practitioners were closely monitored by security forces. Background: That sort of 'second-hand' living history is also part of western German folklore and attempts a high level of authenticity. Activities: Activities may be confined to wearing period dress and explaining relevant historical information, either in role (also called first-person interpretation) or out of character (also called third-person interpretation). While many museums allow their staff to move in and out of character to better answer visitor questions, some encourage their staff to stay in role at all times. Living history portrayal often involves demonstrating everyday activities such as cooking, cleaning, medical care, or particular skills and handicrafts. Depending on the historical period portrayed, these might include spinning, sewing, loom weaving, tablet weaving, inkle weaving or tapestry weaving, cloth dyeing, basket weaving, rope making, leather-working, shoemaking, metalworking, glassblowing, woodworking or other crafts. Considerable research is often applied to identifying authentic techniques and to recreating replica tools and equipment. Presentation: Historical reenactment groups often attempt to organize such displays in an encampment or display area at an event, and have a separate area for combat reenactment activities. While some such exhibits may be conducted in character as a representation of typical everyday life, others are specifically organized to inform the public and so might include an emphasis on handicrafts or other day-to-day activities, which are convenient to stage and interesting to watch, and may be explained out of character. During the 1990s, reenactment groups, primarily American Civil War groups, began to show interest in this style of interpretation and began using it at their reenactments. In education: As David Thelen has written, many Americans use the past in their daily lives, while simultaneously viewing the place where they often encounter history – the school – with varying levels of distrust and disconnectedness. Living history can be a tool used to bridge the gap between school and daily life to educate people on historical topics. Living history is not solely an objective retelling of historical facts. Its importance lies more in presenting visitors with a sense of a way of life than in recreating exact events, accurate in every detail.
In education: Many factors contribute to creating a setting in which visitors to living history sites can become active participants in their historical education. Two of the most important are the material culture and the interpreters. Material culture both grounds the audience in the time and place being portrayed, and provides a jumping-off point for conversation. "Interpreters" are the individuals who embody historical figures at living history sites. It is their responsibility to take the historical research that has been done on the sites and decide what meaning it has. These meanings are often a melding of fact and folklore. In education: Folklore is an important aspect of living histories because it provides stories which visitors relate to. Whether it is an interpreter embodying a past individual's personal story or discussing a superstition of the time, these accounts allow the audience to see these past figures not as names on a page, but as actual people. However, folklore is also more than stories. Objects, such as dolls or handmade clothing among others, are considered "folk artifacts," which are grouped under the heading of "material culture." Individuals can participate in living histories as a type of experiential learning in which they make discoveries firsthand, rather than reading about the experience of others. Living history can also be used to supplement and extend formal education. Collaborations between professional historians who work at living history sites and teachers can lead to greater enthusiasm about studying history at all grade levels. Many living history sites profess a dedication to education within their mission statements. For instance, the motto of Colonial Williamsburg, "That the Future May Learn from the Past," proclaims the site's commitment to public edification, as does the portion of the website created for the sole purpose of aiding teachers in instruction on the village. Certain educators, such as James Percoco in his Springfield, Virginia high school class, have chosen to integrate public history into their curricula. Since 1991, Percoco has led a class entitled "Applied History," in which his students have contributed over 20,000 hours of service to various public history institutions. Formal education can help visitors interpret what they see at living history sites. By providing a structured way of looking at living histories, as well as questions to think about during visits, formal education can enrich the experience, just as living histories can enrich learning in the classroom. In education: Some museums, such as Middelaldercentret in Denmark or the Netherlands Open Air Museum in the Netherlands, provide living history for school children as part of their education.
**Seismic data acquisition** Seismic data acquisition: Seismic data acquisition is the first of the three distinct stages of seismic exploration, the other two being seismic data processing and seismic interpretation. Seismic acquisition requires the use of a seismic source at specified locations for a seismic survey, and the energy that travels within the subsurface as seismic waves generated by the source is recorded at specified locations on the surface by receivers (geophones or hydrophones). Before seismic data can be acquired, a seismic survey needs to be planned, a process commonly referred to as survey design. This involves planning the various survey parameters, e.g. source type, receiver type, source spacing, receiver spacing, number of source shots, number of receivers in a receiver array (i.e. a group of receivers), number of receiver channels in a receiver spread, sampling rate, record length (the specified time for which the receiver actively records the seismic signal), etc. With the designed survey, seismic data can be recorded in the form of seismic traces, also known as seismograms, which directly represent the "response of the elastic wavefield to velocity and density contrasts across interfaces of layers of rock or sediments as energy travels from a source through the subsurface to a receiver or receiver array." Survey parameters: Source types for land acquisition For land acquisition, different types of sources may be used depending on the acquisition settings. Explosive sources such as dynamite are the preferred seismic sources in rough terrain, in areas with high topographic variability, or in environmentally sensitive areas, e.g. marshes, farming fields, mountainous regions, etc. Such sources need to be buried (coupled) in the ground in order to maximize the amount of seismic energy transferred into the subsurface and to minimize safety hazards during detonation. An advantage of explosive sources is that the seismic signal (known as the seismic wavelet) is minimum-phase, i.e. most of the wavelet's energy is focused at its onset; therefore, during seismic processing, the wavelet has an inverse that is stable and causal and hence can be used in attempts to remove (deconvolve) the original wavelet. A significant disadvantage of using explosive sources is that the source wavelet is not exactly known or reproducible, and therefore the vertical stacking of seismograms or traces from these individual shots can lead to sub-optimal results (i.e. the signal-to-noise ratio is not as high as desired). Additionally, the seismic wavelet cannot be precisely removed to yield spikes or impulses (the ideal aim is the Dirac delta function) corresponding to reflections on seismograms. A factor that contributes to the varying nature of the seismic wavelets from explosive sources is that each explosion alters the physical properties of the subsurface near the source; this in turn changes the seismic wavelet as it passes through these regions. Survey parameters: Vibratory sources (also known as Vibroseis) are the most commonly used seismic sources in the oil and gas industry. An aspect that sets this type of source apart from explosives and other sources is that it offers direct control over the seismic signal transmitted into the subsurface, i.e.
energy can be transmitted into the subsurface over a known range of frequencies during a specified period of time. Vibratory sources are typically trucks mounted with heavy baseplates that are pressed against the ground and vibrated to transmit seismic energy into the subsurface; one such vibrator is the Nomad 90. Vibratory sources are often employed where vast areas need to be explored and where the acquisition region does not feature densely populated or densely vegetated areas; highly varying topography also inhibits the employment of vibratory sources. Additionally, wet regions are suboptimal for vibratory source use, since these trucks are extremely heavy and hence tend to damage property in wet terrain. Weight-drop sources, such as the hammer source, are simpler seismic sources that are typically employed for near-surface seismic refraction surveys. This type of source often involves only a weight (e.g. a hammer) and a plate (alongside a trigger to initiate recording on receivers) and hence is logistically feasible at most locations. Its use mainly in near-surface surveys is due to the smaller amplitudes generated, and hence smaller penetration depths, compared to vibratory and explosive sources. As with explosive sources, weight-drop sources also have an unknown source wavelet, which complicates optimal vertical stacking and deconvolution. Survey parameters: Source types for marine acquisition The air-gun has been the most commonly used seismic source in marine seismic acquisition since the 1970s. The air-gun is a chamber that is filled with highly pressurized, compressed air which is rapidly released into the water to generate an acoustic pulse (signal). The factors contributing to its common use include the fact that the pulses generated are predictable, controllable and hence repeatable. Additionally, it uses air to generate the signal, which is readily available and free of cost. Lastly, it also has a relatively small environmental impact on marine life compared to other marine seismic sources; this consideration deters the use of vibratory sources for marine acquisition. Air-guns are typically used in groups or arrays (i.e. multiple air-guns of different volumes) to maximise the signal-to-noise ratio and to minimise the appearance of bubble pulses or oscillations on the traces. Survey parameters: Receiver type Hydrophone A hydrophone is a seismic receiver that is typically used in marine seismic acquisition, and it is sensitive to changes in pressure caused by acoustic pulses in its surrounding environment. Typical hydrophones utilise piezoelectric transducers that, when subjected to changes in pressure, produce an electric potential which is directly indicative of the pressure changes. As with air-guns, hydrophones are often also employed in groups or arrays, consisting of multiple hydrophones wired together to ensure maximum signal-to-noise ratio. Survey parameters: Geophone A geophone is a seismic receiver that is often chosen in land acquisition to monitor the particle velocity in a certain orientation. A geophone can either be a single-component geophone, designed to record P-waves (compressional waves), or a multi-component geophone, designed to record both P-waves and S-waves (shear waves). Geophones require sufficiently strong coupling with the ground to record the true ground motion initiated by the seismic signal.
This is of considerable importance for the higher-frequency components of the seismic signal, which can be altered substantially in phase and amplitude by poor coupling. The conical spike on a geophone is dug into the ground to provide this coupling. As with hydrophones, geophones are often arranged in arrays as well, to maximise the signal-to-noise ratio and to minimise the influence of surface waves on recorded data. Survey parameters: Sampling interval and Nyquist criterion The seismic signal that needs to be recorded by the receivers is inherently continuous and hence needs to be discretised. The time step at which this continuous signal is discretised is referred to as the sampling interval; its reciprocal is the sampling rate (see Sampling (signal processing) for more details). According to the Nyquist criterion, the frequency at which the seismic signal is sampled should be equal to or greater than twice the maximum frequency component of the signal, i.e. f_sample ≥ 2 f_max,signal. The challenge is that the highest frequency component is usually not known during acquisition, so the sampling rate cannot be determined exactly in advance. Therefore, estimates need to be made of the highest frequencies likely to be contained within the signal; usually, sampling rates higher than these estimates are preferred to ensure that temporal aliasing does not occur. Survey parameters: Record length Despite the term length, the record length refers to the time duration (typically listed in milliseconds) over which the receivers are active, recording and storing the seismic response of the subsurface. This recording time should usually start slightly before the source is initiated, to ensure that the direct waves are received as the first arrivals on the near-offset receivers. Additionally, the record length should be long enough to ensure that the latest expected arrivals are recorded. Typically, for deeper exploration surveys, the record length is on the order of multiple seconds (6 seconds is common); 15 to 20 seconds is common for deep crustal exploration. Since the recorded traces can always be trimmed of later arrivals during data processing, the record length is normally chosen longer than necessary rather than shorter.
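As a worked illustration of the Nyquist discussion above (the frequency estimate and safety margin are assumed figures, not field-derived values):

```python
def sampling_interval_ms(f_max_estimate_hz: float, margin: float = 2.0) -> float:
    """Largest sampling interval honouring f_sample >= 2 * f_max, with headroom."""
    f_sample_hz = margin * 2.0 * f_max_estimate_hz  # Nyquist rate times a safety margin
    return 1000.0 / f_sample_hz

# An estimated 125 Hz maximum signal frequency with a 2x margin implies a
# 500 Hz sampling rate, i.e. a 2 ms sampling interval:
print(sampling_interval_ms(125.0))  # 2.0
```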
**Micro hydro** Micro hydro: Micro hydro is a type of hydroelectric power that typically produces from 5 kW to 100 kW of electricity using the natural flow of water. Installations below 5 kW are called pico hydro. These installations can provide power to an isolated home or small community, or are sometimes connected to electric power networks, particularly where net metering is offered. Micro hydro: There are many of these installations around the world, particularly in developing nations, as they can provide an economical source of energy without the purchase of fuel. Micro hydro systems complement solar PV power systems because in many areas water flow, and thus available hydro power, is highest in the winter when solar energy is at a minimum. Micro hydro is frequently accomplished with a Pelton wheel for high-head, low-flow water supply. The installation is often just a small dammed pool at the top of a waterfall, with several hundred feet of pipe leading to a small generator housing. In low-head sites, water wheels and Archimedes' screws are generally used. Construction: Construction details of a microhydro plant are site-specific. Sometimes an existing mill-pond or other artificial reservoir is available and can be adapted for power production. In general, microhydro systems are made up of a number of components. The most important is the intake, where water is diverted from the natural stream, river, or perhaps a waterfall. An intake structure such as a catch box is required to screen out floating debris and fish, using a screen or an array of bars to keep out large objects. In temperate climates, this structure must resist ice as well. The intake may have a gate to allow the system to be dewatered for inspection and maintenance. Construction: Water from the intake is then conveyed through a canal to a forebay, which is used for settling out sediment. At the bottom of the system, the water is carried through a pipeline (penstock) to the powerhouse building containing a turbine. The descending water builds up pressure in the penstock. In mountainous areas, access to the route of the penstock may present considerable challenges. If the water source and turbine are far apart, the construction of the penstock may be the largest part of the costs of construction. At the turbine, a controlling valve is installed to regulate the flow and the speed of the turbine. The turbine converts the flow and pressure of the water to mechanical energy; the water emerging from the turbine returns to the natural watercourse along a tailrace channel. The turbine turns a generator, which is then connected to electrical loads; this might be directly connected to the power system of a single building in very small installations, or may be connected to a community distribution system for several homes or buildings. Usually, microhydro installations do not have a dam and reservoir like large hydroelectric plants, relying instead on a minimal flow of water being available year-round. Head and flow characteristics: Microhydro systems are typically set up in areas capable of producing up to 100 kilowatts of electricity. This can be enough to power a home or small business facility. This production range is calculated in terms of "head" and "flow". The higher each of these is, the more power is available. Hydraulic head is the pressure of water falling in a pipe, expressed as a function of the vertical distance the water falls. This change in elevation is usually measured in feet or meters.
A drop of at least 2 feet is required, or the system may not be feasible. When quantifying head, both gross and net head must be considered. Gross head approximates power availability through the vertical distance measurement alone, whereas net head subtracts pressure losses due to friction in piping from the gross head. "Flow" is the actual quantity of water falling from a site and is usually measured in gallons per minute, cubic feet per second, or liters per second. Low-flow/high-head installations in steep terrain have significant pipe costs. A long penstock starts with low-pressure pipe at the top and progressively higher-pressure pipe closer to the turbine in order to reduce pipe costs. Head and flow characteristics: The available power, in kilowatts, from such a system can be calculated by the equation P = Q*H/k, where Q is the flow rate in gallons per minute, H is the static head in feet, and k is a constant of 5,310 gal·ft/(min·kW). For instance, for a system with a flow of 500 gallons per minute and a static head of 60 feet, the theoretical maximum power output is 5.65 kW. Real-world losses prevent the system from reaching 100% efficiency (from obtaining all 5.65 kW): turbine inefficiency, friction in the pipe, and losses in the conversion from potential to kinetic energy. Turbine efficiency is generally between 50% and 80%, and pipe friction is accounted for using the Hazen–Williams equation. (A worked sketch of this estimate appears at the end of this article.) Regulation and operation: Typically, an automatic controller operates the turbine inlet valve to maintain constant speed (and frequency) when the load changes on the generator. In a system connected to a grid with multiple sources, the turbine control ensures that power always flows out from the generator to the system. The frequency of the alternating current generated needs to match the local standard utility frequency. In some systems, if the useful load on the generator is not high enough, a load bank may be automatically connected to the generator to dissipate energy not required by the load; while this wastes energy, it may be required if it is not possible to control the water flow through the turbine. Regulation and operation: An induction generator always operates at the grid frequency irrespective of its rotation speed; all that is necessary is to ensure that it is driven by the turbine faster than the synchronous speed so that it generates power rather than consuming it. Other types of generator can use speed control systems for frequency matching. Regulation and operation: With the availability of modern power electronics, it is often easier to operate the generator at an arbitrary frequency and feed its output through an inverter which produces output at grid frequency. Power electronics now allow the use of permanent magnet alternators that produce "wild" (variable-frequency) AC, which is then stabilised. This approach allows low-speed/low-head water turbines to be competitive; they can run at the best speed for extraction of energy, and the power frequency is controlled by the electronics instead of the generator. Regulation and operation: Very small installations (pico hydro), a few kilowatts or smaller, may generate direct current and charge batteries for peak use times. Turbine types: Several types of water turbines can be used in micro hydro installations, selection depending on the head of water, the volume of flow, and such factors as the availability of local maintenance and transport of equipment to the site. For hilly regions where a waterfall of 50 meters or more may be available, a Pelton wheel can be used.
For low-head installations, Francis or propeller-type turbines are used. Very low-head installations of only a few meters may use propeller-type turbines in a pit, or water wheels and Archimedes' screws. Small micro hydro installations may successfully use industrial centrifugal pumps run in reverse as prime movers; while the efficiency may not be as high as a purpose-built runner, the relatively low cost makes the projects economically feasible. Turbine types: In low-head installations, maintenance and mechanism costs can be relatively high. A low-head system moves larger amounts of water and is more likely to encounter surface debris. For this reason a Banki turbine, also called an Ossberger turbine, a pressurized self-cleaning crossflow waterwheel, is often preferred for low-head micro hydro systems. Though less efficient, its simpler structure is less expensive than other low-head turbines of the same capacity. Since the water flows in, then out of it, it cleans itself and is less prone to jamming with debris. Turbine types: Screw turbine (reverse Archimedes' screw): two low-head schemes in England, Settle Hydro and Torrs Hydro, use an Archimedes' screw, which is another debris-tolerant design; efficiency is 85%. Gorlov: the Gorlov helical turbine operates in free stream or constrained flow, with or without a dam, as do Francis and propeller turbines. Kaplan turbine: a high-flow, low-head, propeller-type turbine. An alternative to the traditional Kaplan turbine is a large-diameter, slow-turning, permanent-magnet, sloped open-flow VLH turbine with efficiencies of 90%. Water wheel: advanced hydraulic water wheels and hydraulic wheel-part reaction turbines can have hydraulic efficiencies of 67% and 85% respectively. The overshot water wheel's maximum (hydraulic) efficiency is 85%. Undershot water wheels can operate with very low head, but also have efficiencies below 30%. Gravitation water vortex power plant: part of the river flow at a weir or natural waterfall is diverted into a round basin with a central bottom exit that creates a vortex. A simple rotor (and connected generator) is moved by the kinetic energy, with efficiencies of 83% down to 64% at one-third of full flow. Use: Microhydro systems are very flexible and can be deployed in a number of different environments. They are dependent on how much water flow the source (creek, river, stream) has and on the velocity of the flow of water. Energy can be stored in battery banks at sites that are far from a facility, or used in addition to a directly connected system so that in times of high demand there is additional reserve energy available. These systems can be designed to minimize the community and environmental impact regularly caused by large dams or other mass hydroelectric generation sites. Use: Potential for rural development In relation to rural development, the simplicity and low relative cost of micro hydro systems open up new opportunities for some isolated communities in need of electricity. With only a small stream needed, remote areas can access lighting and communications for homes, medical clinics, schools, and other facilities. Microhydro can even run a certain level of machinery supporting small businesses. Regions along the Andes mountains and in Sri Lanka and China already have similar, active programs. One seemingly unexpected use of such systems in some areas is to keep young community members from moving into more urban regions, in order to spur economic growth.
Also, as the possibility of financial incentives for less carbon-intensive processes grows, the future of microhydro systems may become more appealing. Use: Micro-hydro installations can also serve multiple uses. For instance, micro-hydro projects in rural Asia have incorporated agro-processing facilities such as rice mills, alongside standard electrification, into the project design. Cost: The cost of a micro hydro plant can be between 1,000 and 5,000 U.S. dollars per kW installed. Advantages and disadvantages: Advantages Microhydro power is generated through a process that utilizes the natural flow of water. This power is most commonly converted into electricity. With no direct emissions resulting from this conversion process, there are little to no harmful effects on the environment if the system is planned well, thus supplying power from a renewable source and in a sustainable manner. Microhydro is considered a "run-of-river" system, meaning that water diverted from the stream or river is redirected back into the same watercourse. Further potential economic benefits of microhydro are its efficiency, reliability, and cost-effectiveness. Advantages and disadvantages: Disadvantages Microhydro systems are limited mainly by the characteristics of the site. The most direct limitation comes from small sources with minuscule flow. Likewise, flow can fluctuate seasonally in some areas. Lastly, and perhaps foremost, is the distance from the power source to the site in need of energy. This distribution issue, as well as the others, is key when considering the use of a micro-hydro system.
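As promised in the head-and-flow section, a worked sketch of the P = Q*H/k estimate; the 65% efficiency figure is an assumption drawn from the quoted 50-80% range.

```python
def micro_hydro_kw(flow_gpm: float, head_ft: float, efficiency: float = 1.0) -> float:
    """Available power from P = Q*H/k with k = 5,310 gal*ft/(min*kW)."""
    K = 5310.0
    return efficiency * flow_gpm * head_ft / K

print(micro_hydro_kw(500.0, 60.0))        # ~5.65 kW theoretical maximum
print(micro_hydro_kw(500.0, 60.0, 0.65))  # ~3.67 kW at an assumed 65% turbine efficiency
```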
**Ades (drinking water)** Ades (drinking water): AdeS is a plant/soy-based beverage brand offering a mixture of seeds with fruit juices, vitamins, and minerals. In Japan, the drink is also known as I-Lohas. The name comes from the Spanish acronym "Alimentos de Semillas", meaning "food from seeds". The brand currently has a presence in Brazil, Mexico, Argentina, Uruguay, Paraguay, Bolivia, Chile and Colombia. History: The product was created in Latin America in 1988. The Coca-Cola Company entered into an agreement on June 1, 2016, to acquire AdeS. It is currently made by PT Coca-Cola Bottling Indonesia in Bekasi, West Java, which also produces Coca-Cola, Fanta and Sprite.
**Studio transmitter link** Studio transmitter link: A studio transmitter link (or STL) sends a radio station's or television station's audio and video from the broadcast studio or origination facility to a radio transmitter, television transmitter or uplink facility in another location. This is accomplished with terrestrial microwave links, or with fiber-optic or other telecommunication connections to the transmitter site. Studio transmitter link: This is often necessary because the best location for an antenna may be on top of a mountain, where a much shorter radio tower is required but where locating a studio may be impractical. Even in flat regions, the center of the station's allowed coverage area may not be near the studio location, or may lie within a populated area where a transmitter would be frowned upon by the community, so the antenna must be placed at a distance from the studio. Studio transmitter link: Depending on the locations that must be connected, a station may choose either a point-to-point (PTP) link on another special radio frequency, or a newer all-digital wired link via a dedicated data-transmission circuit. Radio links can be digital, the older analog type, or a hybrid of the two. Even on older all-analog systems, multiple audio and data channels can be sent using subcarriers. Studio transmitter link: Stations that employ an STL usually also have a transmitter/studio link (TSL) to return telemetry information. Both the STL and TSL are considered broadcast auxiliary services (BAS). Transmitter/studio link: The transmitter/studio link (or TSL) of a radio station or television station is a return link which sends telemetry data from the remotely located radio transmitter or television transmitter back to the studio for monitoring purposes. The TSL may return the same way as the STL, or it can be embedded in the station's regular broadcast signal as a subcarrier (for analog stations) or a separate data channel (for digital stations). Analog or digital data such as transmitter power, temperature, VSWR, voltage, modulation level, and other status information are returned so that broadcast engineering staff can correct any problems as soon as possible. These data may be attended to by an automated transmission system.
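The telemetry monitoring described above amounts to comparing each returned reading against engineering limits. The following is a minimal, hypothetical sketch of such a check; the parameter names and limits are assumptions for illustration, not part of any real broadcast-automation API.

```python
# Hypothetical sketch of a TSL telemetry range check; parameter names and
# limits are illustrative assumptions, not a real broadcast-automation API.

TELEMETRY_LIMITS = {
    "transmitter_power_kw": (9.0, 10.5),  # licensed power, +/- tolerance
    "vswr": (1.0, 1.5),                   # standing-wave ratio on the feed line
    "pa_temperature_c": (0.0, 70.0),      # power-amplifier temperature
    "supply_voltage_v": (45.0, 52.0),
}

def check_telemetry(sample: dict) -> list[str]:
    """Return an alarm string for every reading outside its limits."""
    alarms = []
    for name, (low, high) in TELEMETRY_LIMITS.items():
        value = sample.get(name)
        if value is not None and not (low <= value <= high):
            alarms.append(f"{name}={value} outside [{low}, {high}]")
    return alarms

# Example sample with an elevated VSWR (e.g. ice on the antenna):
print(check_telemetry({"transmitter_power_kw": 9.8, "vswr": 1.9,
                       "pa_temperature_c": 41.0, "supply_voltage_v": 48.2}))
```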
**Field-emission microscopy** Field-emission microscopy: Field-emission microscopy (FEM) is an analytical technique that is used in materials science to study the surfaces of needle apexes. The FEM was invented by Erwin Wilhelm Müller in 1936, and it was one of the first surface-analysis instruments that could approach near-atomic resolution. Introduction: Microscopy techniques are utilized to generate magnified real-space images of the surface of a tip apex. Typically, microscopy information pertains to the surface crystallography (i.e., how the atoms are arranged at the surface) and surface morphology (i.e., the shape and size of topographic features making up the surface). Field-emission microscopy (FEM) was invented by Erwin Müller in 1936. In FEM, the phenomenon of field electron emission is used to obtain an image on the detector based on the difference in work function of the various crystallographic planes on the surface. Setup and working principle: A field-emission microscope consists of a metallic sample shaped like a sharp tip and a fluorescent screen enclosed within an ultrahigh vacuum chamber. Typically, the tip radius used in this microscope is on the order of 100 nm, and it is made of a metal with a high melting point, such as tungsten. The sample is held at a large negative potential (1–10 kV) relative to the fluorescent screen, which generates an electric field near the tip apex of 2–7 × 10⁹ V/m. This electric field drives field emission of electrons. Setup and working principle: The field-emitted electrons travel along the field lines and produce bright and dark patches on the fluorescent screen, exhibiting a one-to-one correspondence with the crystal planes of the hemispherical emitter. The emission current strongly varies with the local work function, following the Fowler–Nordheim equation. Therefore, the FEM image reflects the projected work function map of the emitter surface. Generally, atomically rough surfaces have lower work functions than closely packed surfaces, resulting in bright areas in the image. In short, the intensity variations of the screen correspond to the work function map of the surface of the tip apex. Setup and working principle: The magnification is given by the ratio M = L/R, where R is the tip apex radius and L is the tip–screen distance. Linear magnifications of the order of 10⁵ are attained. The FEM technique has a spatial resolution of around 1–2 nm. Nonetheless, if a particle with a size of 1 nm is placed on a tip apex, the magnification can increase by a factor of 20, and the spatial resolution is enhanced to approximately 0.3 nm. This situation can be achieved by utilizing single-molecule electron emitters, and it is possible to observe molecular orbitals in single fullerene molecules using FEM. Application of FEM is limited by the materials that can be fabricated in the shape of a sharp tip and can tolerate high electrostatic fields. For these reasons, refractory metals with high melting temperatures (e.g., W, Mo, Pt, Ir) are conventional objects for FEM experiments. In addition, the FEM has also been used to study adsorption and surface diffusion processes, making use of the work-function change associated with the adsorption process.
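The two quantitative points in this section, the projection magnification M = L/R and the strong work-function dependence of the Fowler–Nordheim emission current, can be illustrated numerically. The sketch below uses the standard elementary Fowler–Nordheim constants and assumed example values for tip radius, screen distance, field, and work functions.

```python
import math

# Minimal sketch (not from the source): FEM magnification M = L / R and the
# elementary Fowler-Nordheim current density J = (A*E^2/phi) * exp(-B*phi^1.5/E).
# Geometry, field and work-function values below are assumed examples.

A_FN = 1.54e-6  # A eV V^-2, first Fowler-Nordheim constant
B_FN = 6.83e9   # eV^-3/2 V m^-1, second Fowler-Nordheim constant

def magnification(screen_distance_m: float, tip_radius_m: float) -> float:
    return screen_distance_m / tip_radius_m

def fn_current_density(field_v_m: float, work_function_ev: float) -> float:
    """Current density in A/m^2, image-force corrections neglected."""
    return (A_FN * field_v_m**2 / work_function_ev) * math.exp(
        -B_FN * work_function_ev**1.5 / field_v_m)

# 100 nm tungsten tip viewed on a screen 5 cm away:
print(f"M = {magnification(0.05, 100e-9):.0e}")  # 5e+05, of order 10^5

# A facet with a 0.2 eV lower work function emits roughly 3x more brightly:
ratio = fn_current_density(4e9, 4.3) / fn_current_density(4e9, 4.5)
print(f"brightness ratio = {ratio:.1f}")
```

The exponential in the Fowler–Nordheim expression is what makes small work-function differences between crystal planes show up as strong bright/dark contrast on the screen.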
**Social technology** Social technology: Social technology is a way of using human, intellectual and digital resources in order to influence social processes. For example, one might use social technology to ease social procedures via social software and social hardware, which might include the use of computers and information technology for governmental procedures or business practices. It has historically referred to two meanings: as a term related to social engineering, a meaning that began in the 19th century, and as a description of social software, a meaning that began in the early 21st century. Social technology is also split between human-oriented technologies and artifact-oriented technologies. History: The term "social technology" was first used at the University of Chicago by Albion Woodbury Small and Charles Richmond Henderson around the end of the 19th century. At a seminar in 1898, Small described social technology as the use of knowledge of the facts and laws of social life to bring about rational social aims. In 1895 Henderson had coined the term "social art" for the methods by which improvements to society are introduced; according to Henderson, social art gives directions. In 1901, Henderson published an article titled "The Scope of Social Technology" in which he renamed this social art 'social technology', describing it as "a system of conscious and purposeful organization of persons in which every actual, natural social organization finds its true place, and all factors in harmony cooperate to realize an increasing aggregate and better proportions of the 'health, wealth, beauty, knowledge, sociability, and rightness' desires." In 1923, the term social technology was given a wider meaning in the works of Ernest Burgess and Thomas D. Eliot, who expanded the definition of social technology to include the application, particularly in social work, of techniques developed by psychology and other social sciences. History: In 1928, Luther Lee Bernard defined applied science as the observation and measurement of norms or standards which control our relationship with the universe. He then separated this definition from that of social technology by explaining that social technology also "includes administration as well as the determination of the norms which are to be applied in the administration". In 1935, he wrote an article called "The Place of Social Sciences in Modern Education", in which he wrote about the nature of an effective education in the social sciences for the willing masses. It would be of three types: firstly, "a description of present conditions and trends in society"; secondly, "the teaching of desirable social ends and ideals necessary to correct such social maladjustments as we now have"; and thirdly, "a system of social technology which, if applied, might be expected to remedy existing maladjustments and realize valid social ends". Bernard explained that the aspects of social technology which lag behind are the technologies involved in the "less material forms of human welfare". These are the applied sciences of "the control of crime, abolition of poverty, the raising of every normal person to economic, political, and personal competency, the art of good government, or city, rural, and national planning". On the other hand, "the best developed social technologies, such as advertising, finance, and 'practical' politics, are used in the main for antisocial rather than for proper humanitarian ends".
History: After the Second World War, the term 'social technology' continued to be used intermittently, for example by the social psychologist Dorwin Cartwright for techniques developed in the science of group dynamics, such as 'buzz groups' and role playing, and by Olaf Helmer to refer to the Delphi technique for creating a consensus opinion in a panel of experts. More recent examples are Human Rights & Social Technology by Rainer Knopff and Tom Flanagan, which addresses both human rights and the government policies that ensure them, and Theodore Caplow's Perverse Incentives: The Neglect of Social Technology in the Public Sector, which discusses a wide range of topics, including use of the death penalty to discourage crime and the welfare system to provide for the needy. History: At the current stage of social technology research, two main directions of usage of this term have emerged: (a) human-oriented technologies and (b) artifact-oriented technologies. According to the goal of social technology adoption, technologies oriented toward humans consist of: technologies of power; fundamental legal regulations; systems of signs and symbols; participation technologies; group behavior pattern creation; information transfer mediation; eugenics; individual behavior pattern creation; legal norms; and technologies of the self. Technologies oriented toward artifacts consist of: social interaction technologies; relation creation and sustainment technologies; co-operation technologies; knowledge development technologies; information aggregation technologies; resource compilation technologies; and expertise location technologies. As "social engineering": Closely related to social technology is the term social engineering. Thorstein Veblen used 'social engineering' in 1891, but suggested that it had been used earlier. In the 1930s both 'social engineering' and 'social technology' became associated with the large-scale socio-economic policies of the Soviet Union. The Soviet economist Yevgeni Preobrazhensky wrote a book in which he defined social technology as "the science of organized production, organized labour, of organized systems of production relations, where the legality of economic existence is expressed in new forms" (p. 55 in the translation of 1963). Karl Popper discusses social technology and social engineering in his book The Open Society and Its Enemies and in the article series "The Poverty of Historicism", in which he criticized the Soviet political system and the Marxist theory on which it was based. He eventually combined "The Poverty of Historicism" series into a book of the same title, which he wrote "in memory of the countless men and women of all creeds or nations or races who fell victim to the fascist and communist belief in Inexorable Laws of Historical Destiny". In The Open Society and Its Enemies, Popper distinguished two kinds of social engineering, with corresponding social technologies. Utopian engineering strives to reach "an ideal state, using a blueprint of society as a whole, is one which demands a strong centralized rule of a few, and which therefore is likely to lead to a dictatorship" (p. 159); communism is an example of utopian social technology. On the other hand, there is the piecemeal engineer with its corresponding social technology, which adopts "the method of searching for, and fighting against, the greatest and most urgent evils of society, rather than searching for, and fighting for, its greatest ultimate good" (p. 158).
The use of piecemeal social technology is crucial for democratic social reconstruction. As "social software": "Social technology" has also been used as a synonym for "social software", such as in the book Groundswell: Winning in a World Transformed by Social Technologies, by Charlene Li and Josh Bernoff. Social networking service: A social networking service is a platform to build social networks or social relations among people who, for example, share interests, activities, backgrounds, or real-life connections. Corporate social media: Corporate social media is the use of social media platforms, social media communications, and social media marketing techniques by and within corporations, ranging from small businesses and tiny entrepreneurial startups to mid-size businesses and huge multinational firms. Within the definition of social media, there are different ways corporations utilize it. Although there is no systematic way in which social media applications can be categorized, there are various methods and approaches to having a strong social media presence. Social media can now be crucial to the success of a growing number of activities in a company's value chain. Enterprise social software: Of particular interest in the realm of social computing is social software for the enterprise. Sometimes referred to as "Enterprise 2.0", a term derived from Web 2.0, this generally refers to the use of social computing in corporate intranets and in other medium and large-scale business environments. "Social technology" is also used to refer to the organization and management of private companies, and is sometimes taught under the auspices of university business schools. One book with this orientation is The Social Technology of Organization Development, by Warner and Hornstein. As "social software": Social technology changes the way that people communicate; for instance, it enables people across the world to collaborate. This technology shapes society and thus could be considered a disruptive technology. Chief Strategy Officer at Jive Software, Christopher Morace, explains that "social technology is changing the way businesses operate and how successful companies are leveraging it to their advantage." Some of the key drivers of a business provided by the use of social technology are collaboration, open communication, and a large network. In addition, business professionals must maintain digital literacy in order to understand the capabilities of social technologies and incorporate them into daily work. Other uses: Social technology can provide opportunities for digital activism. It eliminates geographic boundaries, potentially enabling protests and revolutions to spread through social technologies. It can also be argued that digital activism through social technology does not produce concrete results, as people might lose sight of what drives the social movement and ultimately participate in "clicktivism". Due to technological advances, social technology could potentially redefine what it means to be an activist. Social technology is also a prevalent influence in the realm of e-commerce. "The development and rapid growth of mobile computing and smartphones have also facilitated social commerce." Marketing strategies have evolved over the years to conform and align with social technology. In 1985, MacKenzie published a book titled The Social Shaping of Technology.
It argued that technological change is often seen as something that follows its own logic, examined the relation of technology to society, and treated different types of technology: the technology of production, domestic and reproductive technology, and military technology. It moves on to the technologies of the household and biological reproduction, and it also asks what shapes the most frightening technology of all, the technology of weaponry, especially nuclear weapons. Other uses: In 2011, Bettina Leibetseder published the article "A Critical Review on the Concept of Social Technology". She points out that social technology provides social-science knowledge for a purpose. Such a notion allows an in-depth debate about the meaning of social order in modern societies. Social technology forms the basis of governmental decisions; it allows for a use of social theories and methods for a purpose in politics and introduces a specific conception of power between the individual and public powers. Concerns: Social technologies, as they are technologies dealing with social behaviors or interactions, have caused concerns among philosophers. As Vladislav A. Lektorsky pointed out in his journal article, "The Russian philosopher Viacheslav Stëpin calls modern European civilization 'technogenic'." Initially, this meant the pursuit of technologies for the control of natural phenomena; then projects began to be put forward for social technologies for the control of social processes. On this view, impacts that social technology might have on man, like forcible collectivization or the deportation of ethnic groups, are recognized because, according to Lektorsky, social technology blunts the individual's capacity for critical reflection, though it "presents a different possibility which [can] be used to develop man's creative capacities, to expand his realm of freedom and his social and interpersonal ties." Similarly, social technology also poses potential threats to human rights. These concerns are based on the notion that humans are a product of their environment: "Social technology assumes that it is possible to know the societal or 'systematic' determinants of human 'behavior' in a way that permits them to be manipulated and controlled." Technology can also overcome certain social forces. Social technologies have also caused concern among social scientists. According to a study published by Cambridge University Press, it is possible for social technologies to manipulate social processes, including relationship development and group dynamics. Variables such as gender and social status can influence a person's behavior, and these behavior changes can translate to interactions through technology. Social technologies also relate to the theory of technological determinism, which states that "technology has universal effects on social processes." As the online presence of the general population grows, the popularity of social technology increases, which creates a culture of sharing. Internet users develop more connections online due to the global activity on the internet, and as services make it possible to upload content, they likewise facilitate widespread distribution of information. As opinions circulate online, concerns over new problems arise.
Other similar phrases: In general, social technology covers many other terms in social science; authors have used "social technique", "social pedagogy", "administrative technique", "technocracy", "socio-technique", "political science engineering", "planned society", "efficiency engineer", and "social (economic) planning" in similar senses.
**Infinite switch** Infinite switch: An infinite switch, simmerstat, energy regulator or infinite controller is a type of switch that allows variable power output of a heating element of an electric stove. It is called "infinite" because its average output is infinitely variable rather than being limited to a few switched levels. It uses a bimetallic strip to make a conductive connection across its terminals, a connection that breaks as temperature rises. As current passes through the bimetal connection, the strip heats and deforms, breaking the connection and turning off the power; after a short time, the bimetal cools and reconnects. Infinite switches vary the average power delivered to a device by switching frequently between on and off states. They may be used in situations that are not sensitive to such cycling, such as the resistive heating elements in electric stoves and kilns. Disadvantages of frequently cycled mechanical contacts include erosion of the switch contacts: if the contacts move slowly, by design or due to contamination, the power from electrical arcing between closely spaced contacts has time to accumulate and overheat or melt the contact surface. Operating the device contacts at low current, to control a relay and so preserve the contacts from thermal damage, instead causes rapid failure as condensed volatile organic compounds, more familiarly known as kitchen grease, accumulate and insulate the contacts. The pulse of radio-frequency noise emitted by any electrical arc is contained within the metal enclosure that a safe design requires in the event of a component or wire-insulation failure. The mechanical-contact infinite switch is unsuitable for resistive loads of more than 15 A at 240 V, or 3.6 kW. The contacts have a reduced operating life when used for inductive loads such as electric motors or induction heating. The infinite switch is comparable to a slow pulse-width modulation device, but both the duty cycle and the cycle rate vary. The device is also conceptually similar to a bimetallic switching thermostat, except that the infinite switch responds to the temperature of its internal heater as a model for the temperature of the "burner" being controlled, instead of sensing the controlled temperature directly. A rotary hand control turns a shaft that is connected within the infinite switch to a cam and follower. At the off position both power-line connections are interrupted. At all on positions the manual contacts connect one of the line terminals to the pilot-lamp terminal and one of the terminals for the heating element being controlled. At the maximum-heat position the cam follower applies sufficient force for the cycling contacts to remain closed at all times. At the other positions the cam follower applies less force, allowing the cycling contacts to cycle. Initially, the cycling contacts close in all positions and a permanent magnet pulls them to remain closed. One of the two cycling contacts energizes the controlled heating element and the other energizes a very small electric heating element attached to a bimetallic strip. The force from the heated bimetallic strip increases as its temperature rises until it overcomes the combined force from the cam follower, an unheated bimetallic strip that compensates for ambient temperature, and the permanent magnet, allowing the cycling contacts to open, which interrupts the electric current to both heating elements. Both the bimetallic strip and the "burner" then begin to cool.
Eventually the heated bimetallic strip no longer exerts enough force to overcome the force from the cam follower and the unheated bimetallic strip. As the contacts approach each other the permanent magnet attracts the contact beam, rapidly closing the contacts, and the cycle repeats. Controls for domestic cooking generally have a 5% on-time at the minimum setting. Controls for commercial cooking generally have a 22.5% on-time at the minimum setting. Controls for warming trays, kilns and other applications may have a higher minimum setting to provide fine control over a range useful for the application. An important variation of the control is the series, or current-sensitive, type as opposed to the parallel, or voltage-sensitive, type. The difference between the types is whether the internal heater within the infinite switch is connected in series with the controlled heating element (that is, to one power-line terminal and the controlled heating element's terminal) or across the two power-line terminals. The preceding discussion, referring to two cycling contacts, presumes the parallel or voltage-sensitive type. The current-sensitive type has the effect of compensating the control for the temperature of the controlled heating element, because the resistance of nickel-chromium heating elements changes significantly with temperature. When cool, the heating element has a lower resistance and dissipates more power than it does when hot. This shrinks the range of power delivered by the voltage-sensitive type when the heating element is cooled, by a vessel containing water for example. The current-sensitive type makes it easier to control the rate at which a pot boils. The current-sensitive type is specific to the nominal current of the heating element being controlled and cannot be used for applications that allow the power rating of the heating element to be changed for different cooking tasks.
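Since the switch behaves like slow pulse-width modulation, the average power at a given setting is simply the element rating scaled by the on-time fraction. The sketch below is a minimal illustration using the minimum on-time figures quoted above; the 1,500 W element rating is an assumed example.

```python
# Minimal sketch (not from the source): an infinite switch modeled as slow
# pulse-width modulation of a resistive element. The 1500 W rating is an
# assumed example; the minimum on-times are the figures quoted above.

ELEMENT_POWER_W = 1500.0  # assumed "burner" rating when fully on

def average_power_w(on_time_fraction: float) -> float:
    """Mean power of a cycled resistive element (on_time_fraction in 0..1)."""
    return ELEMENT_POWER_W * on_time_fraction

for setting, duty in [("domestic minimum", 0.05),
                      ("commercial minimum", 0.225),
                      ("maximum heat", 1.0)]:
    print(f"{setting:18s} {duty:6.1%} -> {average_power_w(duty):6.0f} W")
```

Note that this ignores the current-sensitive compensation discussed above: a voltage-sensitive control holds the duty cycle, not the delivered power, constant as the element's resistance changes with temperature.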
**Kelly drive** Kelly drive: A kelly drive is a type of well drilling device on an oil or gas drilling rig that employs a section of pipe with a polygonal (three-, four-, six-, or eight-sided) or splined outer surface, which passes through the matching polygonal or splined kelly (mating) bushing and rotary table. This bushing is rotated via the rotary table, and thus the pipe and the attached drill string turn, while the polygonal pipe is free to slide vertically in the bushing as the bit digs the well deeper. When drilling, the drill bit is attached at the end of the drill string, and thus the kelly drive provides the means to turn the bit (assuming that a downhole motor is not being used). Kelly drive: The kelly is the polygonal tubing and the kelly bushing is the mechanical device that turns the kelly when rotated by the rotary table. Together they are referred to as a kelly drive. The upper end of the kelly is screwed into the swivel, using a left-hand thread to preclude loosening from the right-hand torque applied below. The kelly typically is about 10 ft (3 m) longer than the drill pipe segments, thus leaving a portion of newly drilled hole open below the bit after a new length of pipe has been added ("making a connection") and the drill string has been lowered until the kelly bushing engages again in the rotary table. Kelly drive: The kelly hose is the flexible, high-pressure hose connected from the standpipe to a gooseneck pipe on a swivel above the kelly; it allows the free vertical movement of the kelly while facilitating the flow of the drilling fluid down the drill string. It generally is of steel-reinforced rubber construction, but assemblies of Chiksan steel pipe and swivels are also used. Kelly drive: The kelly is below the swivel. It is a pipe with either four or six flat sides. A rotary bushing fits around the flat sides to provide the torque needed to turn the kelly and the drill string. Rollers in the bushing permit the kelly to move freely in the vertical direction while rotating. Since kelly threads would be difficult to replace, the lower end of the kelly normally carries a saver sub, a short piece of pipe that can be refurbished more cheaply than the kelly. Usually, a ball valve called the lower kelly cock is positioned between the kelly and the kelly saver sub. This valve is used for well control if the surface pressure becomes too high for the rotary hose or surface conditions. Kelly drive: According to the "Dictionary of Petroleum Exploration, Drilling and Production", "[The] kelly was named after Michael J. (King) Kelly, a Chicago baseball player (1880-1887) who was known for his base running and long slides." As steel was not of as good a quality then as it is today, the kelly was always hanging up in the kelly bushing. A popular song of the time about the famous baseball player was called "Slide, Kelly, Slide!"; "sliding" is the act of a kelly passing through the kelly bushing as it drills. Kelly drive: Alternatively, some say the name kelly is derived from the machine shop in which the first kelly was made, located in a town formerly named Kellysburg, Pennsylvania.
**Familial encephalopathy with neuroserpin inclusion bodies** Familial encephalopathy with neuroserpin inclusion bodies: Familial encephalopathy with neuroserpin inclusion bodies (FENIB) is a progressive disorder of the nervous system that is characterized by a loss of intellectual functioning (dementia) and seizures. At first, affected individuals may have difficulty sustaining attention and concentrating. Their judgment, insight, and memory become impaired as the condition progresses. Over time, they lose the ability to perform the activities of daily living, and most people with this condition eventually require comprehensive care. The signs and symptoms of familial encephalopathy with neuroserpin inclusion bodies vary in their severity and age of onset. In severe cases, the condition causes seizures and episodes of sudden, involuntary muscle jerking or twitching (myoclonus) in addition to dementia. These signs can appear as early as a person's teens. Less severe cases are characterized by a progressive decline in intellectual functioning beginning in a person's forties or fifties. Mutations in the SERPINI1 gene cause familial encephalopathy with neuroserpin inclusion bodies. The SERPINI1 gene provides instructions for making a protein called neuroserpin. This protein is found in nerve cells, where it plays a role in the development and function of the nervous system. Neuroserpin helps control the growth of nerve cells and their connections with one another, which suggests that this protein may be important for learning and memory. Mutations in the gene result in the production of an abnormally shaped, unstable version of neuroserpin. Abnormal neuroserpin proteins can attach to one another and form neuroserpin inclusion bodies, or Collins bodies, within nerve cells. Collins bodies form in cortical and subcortical neurons, where they disrupt the cells' normal functioning and ultimately lead to cell death. Progressive dementia results from this gradual loss of nerve cells in certain parts of the brain. Researchers believe that a buildup of related, potentially toxic substances in nerve cells may also contribute to the signs and symptoms of this condition. This condition is inherited in an autosomal dominant pattern, which means one copy of the altered gene in each cell is sufficient to cause the disorder. In many cases, an affected person has a parent with the condition.
**Inferior anal nerves** Inferior anal nerves: The inferior rectal nerves (inferior anal nerves, inferior hemorrhoidal nerves) usually branch from the pudendal nerve but occasionally arise directly from the sacral plexus; they cross the ischiorectal fossa along with the inferior rectal artery and veins, toward the anal canal and the lower end of the rectum, and are distributed to the sphincter ani externus (external anal sphincter, EAS) and to the integument (skin) around the anus. Inferior anal nerves: Branches of this nerve communicate with the perineal branch of the posterior femoral cutaneous nerve and with the posterior scrotal nerves at the forepart of the perineum. Supplies: Cutaneous innervation below the pectinate line and the external anal sphincter.
**Carbonic acid** Carbonic acid: In chemistry, carbonic acid is a chemical compound with the formula H2CO3. The molecule rapidly converts to water and carbon dioxide in the presence of water. However, in the absence of water, it is (contrary to popular belief) quite stable at room temperature. The interconversion of carbon dioxide and carbonic acid is related to the breathing cycle of animals and the acidification of natural waters. In biochemistry and physiology, the name "carbonic acid" is sometimes incorrectly applied to aqueous solutions of carbon dioxide. These chemical species play an important role in the bicarbonate buffer system, used to maintain acid–base homeostasis. Terminology in biochemical literature: In chemistry, the term "carbonic acid" strictly refers to the chemical compound with the formula H2CO3. Some biochemistry literature effaces the distinction between carbonic acid and carbon dioxide dissolved in extracellular fluid. In physiology, carbon dioxide excreted by the lungs may be called volatile acid or respiratory acid. Anhydrous carbonic acid: At ambient temperatures, pure carbonic acid is a stable gas. There are two main methods to produce anhydrous carbonic acid: reaction of hydrogen chloride and potassium bicarbonate at 100 K in methanol, and proton irradiation of pure solid carbon dioxide. Chemically, it behaves as a diprotic Brønsted acid. Carbonic acid monomers exhibit three conformational isomers: cis–cis, cis–trans, and trans–trans. At low temperatures and atmospheric pressure, solid carbonic acid is amorphous and lacks Bragg peaks in X-ray diffraction. But at high pressure, carbonic acid crystallizes, and modern analytical spectroscopy can measure its geometry. According to neutron diffraction of dideuterated carbonic acid (D2CO3) in a hybrid clamped cell (Russian alloy/copper-beryllium) at 1.85 GPa, the molecules are planar and form dimers joined by pairs of hydrogen bonds. All three C–O bonds are nearly equidistant at 1.34 Å, intermediate between typical C–O and C=O distances (respectively 1.43 and 1.23 Å). The unusual C–O bond lengths are attributed to delocalized π bonding in the molecule's center and extraordinarily strong hydrogen bonds. The same effects also induce a very short O–O separation (2.13 Å), through the 136° O–H–O angle imposed by the doubly hydrogen-bonded 8-membered rings. Longer O–O distances are observed in strong intramolecular hydrogen bonds, e.g. in oxalic acid, where the distances exceed 2.4 Å. In aqueous solution: In even a slight presence of water, carbonic acid dehydrates to carbon dioxide and water, which then catalyzes further decomposition. For this reason, carbon dioxide can be considered the anhydride of carbonic acid. The hydration equilibrium constant at 25 °C is [H2CO3]/[CO2] ≈ 1.7×10⁻³ in pure water and ≈ 1.2×10⁻³ in seawater. Hence the majority of carbon dioxide at geophysical or biological air–water interfaces does not convert to carbonic acid, remaining dissolved CO2 gas. However, the uncatalyzed equilibrium is reached quite slowly: the rate constants are 0.039 s⁻¹ for hydration and 23 s⁻¹ for dehydration. In aqueous solution: In biological solutions, in the presence of the enzyme carbonic anhydrase, equilibrium is instead reached rapidly, and the reaction HCO3⁻ + H⁺ ⇌ CO2 + H2O takes precedence. When the created carbon dioxide exceeds its solubility, gas evolves and a third equilibrium must also be taken into consideration. The equilibrium constant for this reaction is defined by Henry's law.
The two reactions can be combined for the equilibrium in solution. When Henry's law is used to calculate the denominator, care is needed with regard to units, since the Henry's-law constant is commonly expressed in eight different dimensionalities. In aqueous solution: Under high CO2 partial pressure. In the beverage industry, sparkling or "fizzy" water is usually referred to as carbonated water. It is made by dissolving carbon dioxide under a small positive pressure in water. Many soft drinks treated the same way effervesce. In aqueous solution: Significant amounts of molecular H2CO3 exist in aqueous solutions subjected to pressures of multiple gigapascals (tens of thousands of atmospheres) in planetary interiors. Pressures of 0.6–1.6 GPa at 100 K, and 0.75–1.75 GPa at 300 K, are attained in the cores of large icy satellites such as Ganymede, Callisto, and Titan, where water and carbon dioxide are present. Pure carbonic acid, being denser, is expected to have sunk under the ice layers and to separate them from the rocky cores of these moons. Relationship to bicarbonate and carbonate: Carbonic acid is the formal Brønsted–Lowry conjugate acid of the bicarbonate anion, stable in alkaline solution. The protonation constants have been measured to great precision, but depend on overall ionic strength I. The two equilibria most easily measured are the successive protonations of carbonate (to bicarbonate, and on to dissolved CO2), where brackets indicate the concentration of a species. At 25 °C, these equilibria empirically satisfy fitted functions of I; log(β1) decreases with increasing I, as does log(β2). In a solution absent other ions (e.g. I = 0), these curves imply the stepwise dissociation constants; direct values for these constants in the literature include pK1 = 6.35 and pK2 − pK1 = 3.49. To interpret these numbers, note that two chemical species in an acid equilibrium are equiconcentrated when pK = pH. In particular, the cytosol in biological systems exhibits pH ≈ 7.2, almost a full unit above pK1, so that at equilibrium the acid is largely dissociated to bicarbonate. Relationship to bicarbonate and carbonate: Ocean acidification. The Bjerrum plot shows typical equilibrium concentrations, in solution, in seawater, of carbon dioxide and the various species derived from it, as a function of pH. As human industrialization has increased the proportion of carbon dioxide in Earth's atmosphere, the proportion of carbon dioxide dissolved in sea- and freshwater as carbonic acid is also expected to increase. This rise in dissolved acid is also expected to acidify those waters, generating a decrease in pH. It has been estimated that the increase in dissolved carbon dioxide has already caused the ocean's average surface pH to decrease by about 0.1 from pre-industrial levels.
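The speciation behind a Bjerrum plot follows directly from the two stepwise constants. The sketch below computes the equilibrium fractions of dissolved CO2, bicarbonate, and carbonate as a function of pH, using the zero-ionic-strength values quoted above (pK1 = 6.35, pK2 = pK1 + 3.49); it is a minimal illustration, not a seawater model.

```python
# Minimal sketch (not from the source): carbonate speciation fractions vs pH,
# the quantities behind a Bjerrum plot, using the zero-ionic-strength values
# quoted above: pK1 = 6.35 and pK2 = pK1 + 3.49 = 9.84.

PK1 = 6.35
PK2 = PK1 + 3.49

def speciation(ph: float) -> tuple[float, float, float]:
    """Fractions of (dissolved CO2 + H2CO3, HCO3-, CO3 2-) at the given pH."""
    h = 10.0 ** -ph
    k1, k2 = 10.0 ** -PK1, 10.0 ** -PK2
    denom = h * h + k1 * h + k1 * k2
    return h * h / denom, k1 * h / denom, k1 * k2 / denom

# At pH = pK1 the CO2 and bicarbonate fractions are equal ("equiconcentrated
# when pK = pH"); at physiological pH 7.4, bicarbonate dominates.
for ph in (4.0, 6.35, 7.4, 9.84, 11.0):
    co2, hco3, co3 = speciation(ph)
    print(f"pH {ph:5.2f}: CO2* {co2:.2f}  HCO3- {hco3:.2f}  CO3(2-) {co3:.2f}")
```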
**Journal of the ACM** Journal of the ACM: The Journal of the ACM is a peer-reviewed scientific journal covering computer science in general, especially theoretical aspects. It is an official journal of the Association for Computing Machinery. Its current editor-in-chief is Venkatesan Guruswami. The journal was established in 1954 and "computer scientists universally hold the Journal of the ACM in high esteem".
**Diaphragm (optics)** Diaphragm (optics): In optics, a diaphragm is a thin opaque structure with an opening (aperture) at its center. The role of the diaphragm is to stop the passage of light, except for the light passing through the aperture. Thus it is also called a stop (an aperture stop, if it limits the brightness of light reaching the focal plane, or a field stop or flare stop for other uses of diaphragms in lenses). The diaphragm is placed in the light path of a lens or objective, and the size of the aperture regulates the amount of light that passes through the lens. The centre of the diaphragm's aperture coincides with the optical axis of the lens system. Most modern cameras use a type of adjustable diaphragm known as an iris diaphragm, often referred to simply as an iris. Diaphragm (optics): See the articles on aperture and f-number for the photographic effect and the system of quantification of varying the opening in the diaphragm. Iris diaphragms versus other types: A natural optical system that has a diaphragm and an aperture is the human eye. The iris is the diaphragm; the pupil is the aperture. In the human eye, the iris can both constrict and dilate, which varies the size of the pupil. Unsurprisingly, a photographic lens with the ability to continuously vary the size of its aperture (the hole in the middle of the annular structure) is known as an iris diaphragm. Iris diaphragms versus other types: An iris diaphragm can reduce the amount of light that hits a detector by decreasing the aperture, usually with "leaves" or "blades" that form a circle. In the early years of photography, a lens could be fitted with one of a set of interchangeable diaphragms, often as brass strips known as Waterhouse stops or Waterhouse diaphragms. The iris diaphragm in most modern still and video cameras is adjusted by movable blades, simulating the iris of the eye. The diaphragm has two to twenty blades (with most lenses today featuring between five and ten), depending on the price and quality of the device in which it is used. Straight blades result in a polygonal shape of the diaphragm opening, while curved blades improve the roundness of the iris opening. In a photograph, the number of blades that the iris diaphragm has can be guessed by counting the number of diffraction spikes converging from a light source or bright reflection. For an odd number of blades, there are twice as many spikes as there are blades. In the case of an even number of blades, the two spikes per blade overlap each other, so the number of spikes visible is the number of blades in the diaphragm used. This is most apparent in pictures taken in the dark with small bright spots, for example night cityscapes. Some cameras, such as the Olympus XA, or lenses such as the MC Zenitar-ME1, however, use a two-bladed diaphragm with right-angle blades creating a square aperture. Iris diaphragms versus other types: Similarly, out-of-focus points of light (circles of confusion) appear as polygons with the same number of sides as the aperture has blades. If the blurred light is circular, it can be inferred that the aperture is either round or that the image was shot "wide open" (with the blades recessed into the sides of the lens, allowing the interior edge of the lens barrel to effectively become the iris). Iris diaphragms versus other types: The shape of the iris opening has a direct relation to the appearance of the blurred out-of-focus areas in an image, called bokeh.
A rounder opening produces softer and more natural out-of-focus areas. Some lenses utilize specially shaped diaphragms in order to create certain effects. This includes the diffusion discs or sieve aperture of the Rodenstock Tiefenbildner-Imagon, Fuji and Sima soft-focus lenses, the sector aperture of Seibold's Dreamagon, and the circular apodization filter in the Minolta/Sony Smooth Trans Focus or Fujifilm APD lenses. Iris diaphragms versus other types: Some modern automatic point-and-shoot cameras do not have a diaphragm at all, and simulate aperture changes by using an automatic ND filter. Unlike a real diaphragm, this has no effect on depth of field. A real diaphragm, when closed down, causes the depth of field to increase (i.e., the background and the subject both appear more in focus at the same time); if the diaphragm is opened up again, the depth of field decreases (i.e., the background and foreground share less and less of the same focal plane). History: In his 1567 work La Pratica della Perspettiva, the Venetian nobleman Daniele Barbaro (1514–1570) described using a camera obscura with a biconvex lens as a drawing aid, and pointed out that the picture is more vivid if the lens is covered so as to leave only a small circle open in the middle. In 1762, Leonhard Euler said with respect to telescopes that "it is necessary likewise to furnish the inside of the tube with one or more diaphragms, perforated with a small circular aperture, the better to exclude all extraneous light." In 1867, Désiré van Monckhoven, in one of the earliest books on photographic optics, drew a distinction between stops and diaphragms in photography, but not in optics, saying: "Let us see what takes place when the stop is removed from the lens to a proper distance. In this case the stop becomes a diaphragm. History: In optics, stop and diaphragm are synonyms. But in photographic optics they are only so by an unfortunate confusion of language. The stop reduces the lens to its central aperture; the diaphragm, on the contrary, allows all the segments of the lens to act, but only on the different radiating points placed symmetrically and concentrically in relation to the axis of the lens, or of the system of lenses (of which the axis is, besides, in every case common)." This distinction was maintained in Wall's 1889 Dictionary of Photography (see figure), but disappeared after Ernst Abbe's theory of stops unified these concepts. History: According to Rudolf Kingslake, the inventor of the iris diaphragm is unknown. Others credit Joseph Nicéphore Niépce with this device, around 1820. J. H. Brown, a member of the Royal Microscopical Society, appears to have invented a popular improved iris diaphragm by 1867. Kingslake has more definite histories for some other diaphragm types, such as M. Noton's adjustable cat-eye diaphragm of two sliding squares in 1856, and the Waterhouse stops of John Waterhouse in 1858. History: The Hamburg Observatory's Bergedorf location had a Great Refractor of 60 cm (~23.6 inch) aperture by Repsold, with lenses by Steinheil. One unique feature of the Hamburg Great Refractor is an iris diaphragm that allows the aperture to be adjusted from 5 to 60 cm. This telescope entered service in the early 1910s.
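The blade-count rule for diffraction spikes described earlier (an odd blade count gives twice as many spikes; with an even count, opposite spikes overlap pairwise) reduces to a one-line function; the sketch below is a minimal illustration.

```python
# Minimal sketch (not from the source): the blade-count rule described above.
# An odd number of blades yields twice as many diffraction spikes; with an
# even count, opposite spikes overlap pairwise.

def visible_spikes(blades: int) -> int:
    return blades if blades % 2 == 0 else 2 * blades

for n in (5, 6, 7, 8, 9):
    print(f"{n} blades -> {visible_spikes(n)} spikes")
# 5 -> 10, 6 -> 6, 7 -> 14, 8 -> 8, 9 -> 18
```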
**Hamman's sign** Hamman's sign: Hamman's sign (rarely, Hammond's sign or Hammond's crunch) is a crunching, rasping sound, synchronous with the heartbeat, heard over the precordium in spontaneous mediastinal emphysema. It is felt to result from the heart beating against air-filled tissues. It is named after Johns Hopkins clinician Louis Hamman, M.D. The sound is heard best with the patient in the left lateral position. It has been described as a series of precordial crackles that correlate with the heartbeat rather than with respiration. Causes: Hamman's crunch is caused by pneumomediastinum or pneumopericardium, and is associated with tracheobronchial injury due to trauma, medical procedures (e.g., bronchoscopy) or rupture of a proximal pulmonary bleb. It can be seen with Boerhaave syndrome.
**Model-based systems engineering** Model-based systems engineering: Model-based systems engineering (MBSE), according to the International Council on Systems Engineering (INCOSE), is the formalized application of modeling to support system requirements, design, analysis, verification and validation activities, beginning in the conceptual design phase and continuing throughout development and later life-cycle phases. MBSE is a technical approach to systems engineering that focuses on creating and exploiting domain models as the primary means of information exchange, rather than on document-based information exchange. MBSE technical approaches are commonly applied to a wide range of industries with complex systems, such as aerospace, defense, rail, automotive, and manufacturing. History: The first known prominent public usage of the term "Model-Based Systems Engineering" is a book of the same name by A. Wayne Wymore. The MBSE term was also commonly used among the SysML Partners consortium during the formative years of their Systems Modeling Language (SysML) open-source specification project during 2003–2005, so they could distinguish SysML from its parent language UML v2, where the latter was software-centric and associated with the term Model-Driven Development (MDD). The standardization of SysML in 2006 resulted in widespread modeling-tool support for it and for associated MBSE processes that emphasized SysML as their lingua franca. In September 2007, the MBSE approach was further generalized and popularized when INCOSE introduced its "MBSE 2020 Vision", which was not restricted to SysML and supported other competing modeling-language standards, such as AP233, HLA, and Modelica. According to the MBSE 2020 Vision: "MBSE is expected to replace the document-centric approach that has been practiced by systems engineers in the past and to influence the future practice of systems engineering by being fully integrated into the definition of systems engineering processes." From around 2014, the scope of MBSE started to cover more modeling and simulation topics, in an attempt to bridge the gap between system model specifications and related system software simulations. As a consequence, the term "modeling and simulation-based systems engineering" has also been increasingly associated with MBSE. According to the INCOSE SEBoK (Systems Engineering Body of Knowledge), MBSE may be considered a "subset of digital engineering". INCOSE hosts an annual meeting on MBSE, as well as MBSE working groups.
**SPTBN1** SPTBN1: Spectrin beta chain, brain 1 is a protein that in humans is encoded by the SPTBN1 gene. Function: Spectrin is an actin-crosslinking and molecular scaffold protein that links the plasma membrane to the actin cytoskeleton and functions in the determination of cell shape, arrangement of transmembrane proteins, and organization of organelles. It is composed of two antiparallel dimers of alpha- and beta-subunits. This gene is one member of a family of beta-spectrin genes. The encoded protein contains an N-terminal actin-binding domain and 17 spectrin repeats that are involved in dimer formation. Multiple transcript variants encoding different isoforms have been found for this gene. Interactions: SPTBN1 has been shown to interact with Merlin. Model organisms: Model organisms have been used in the study of spectrin function. A conditional knockout mouse line, called Spnb2tm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-seven tests were carried out on mutant mice and four significant abnormalities were observed. Few homozygous mutant embryos were identified during gestation, and those that were present displayed oedema. None survived until weaning. The remaining tests were carried out on heterozygous mutant adult mice. These animals had a decreased length of long bones, while males also displayed hypoalbuminemia.
**Differences between Shinjitai and Simplified characters** Differences between Shinjitai and Simplified characters: Differences between Shinjitai and Simplified characters in the Japanese and Chinese languages exist. List of different simplifications: The old and new forms of the Kyōiku Kanji and their Hànzì equivalents are listed below.In the following lists, the characters are sorted by the radicals of the Japanese kanji. The two Kokuji 働 and 畑 in the Kyōiku Kanji List, which have no Chinese equivalents, are not listed here; in Japanese, neither character was affected by the simplifications. List of different simplifications: No simplification in either language(The following characters were simplified neither in Japanese nor in Chinese.) 一 丁 下 三 不 天 五 民 正 平 可 再 百 否 武 夏 中 内 出 本 世 申 由 史 冊 央 向 曲 印 州 表 果 半 必 永 求 九 丸 千 久 少 夫 午 失 末 未 包 年 危 后 兵 我 束 卵 承 垂 刷 重 省 看 勉 七 乳 才 予 事 二 元 亡 六 主 市 交 忘 夜 育 京 卒 商 率 就 人 化 今 仁 付 代 仕 他 令 以 合 全 任 休 件 仲 作 何 位 住 余 低 似 命 使 念 例 供 信 保 便 値 修 借 候 倍 俳 俵 健 停 働 像 先 入 八 分 公 共 弟 並 典 前 益 善 尊 同 周 次 兆 冷 弱 刀 切 別 判 制 券 刻 副 割 力 加 助 努 勇 勤 句 北 疑 十 古 孝 直 南 真 裁 博 上 反 灰 厚 原 台 能 友 収 口 司 右 兄 吸 告 君 味 呼 品 唱 器 四 回 因 困 固 土 去 地 在 寺 均 志 坂 幸 型 城 基 域 喜 境 士 冬 各 夕 外 名 多 大 太 奏 女 好 始 妻 姉 妹 姿 子 存 安 字 守 宅 宇 完 定 官 宙 宗 室 客 宣 家 害 案 容 宮 寄 密 宿 寒 富 察 寸 小 光 常 堂 尺 局 居 屋 展 山 岸 岩 炭 川 工 左 功 己 改 布 希 干 刊 幼 序 店 底 府 度 座 席 庭 康 延 建 式 弓 引 強 形 役 往 径 待 律 徒 得 街 心 快 性 忠 急 恩 情 感 想 成 戸 所 手 打 投 折 技 批 招 持 指 拾 接 推 探 授 提 操 支 政 故 教 救 散 敬 文 新 方 放 旅 族 旗 日 早 明 易 昔 春 星 昨 映 昭 最 量 景 晴 暗 暖 暴 曜 月 木 札 材 村 板 林 松 枚 枝 相 査 染 柱 格 校 根 株 械 植 棒 森 模 歌 止 整 死 列 段 母 毒 比 毛 氏 水 池 汽 法 治 波 油 注 河 泣 沿 泳 洋 活 派 洗 流 消 酒 浴 深 混 清 液 港 測 湖 源 演 潮 激 火 然 照 熟 燃 受 父 片 版 牛 物 牧 特 犬 犯 王 玉 班 理 球 望 生 用 田 男 町 思 界 胃 留 略 病 痛 登 白 的 皇 泉 皮 皿 盛 盟 目 具 眼 矢 知 短 石 砂 破 磁 示 祭 禁 利 私 和 委 季 科 秋 秒 移 税 程 穴 究 空 立 童 竹 笑 第 笛 等 答 策 筋 算 管 箱 米 料 粉 精 糖 素 置 罪 羊 美 差 着 群 羽 翌 老 考 耕 耳 取 有 肉 服 肥 背 肺 胸 期 朝 腹 臣 自 息 至 舌 航 船 良 色 花 苦 若 英 芽 草 茶 荷 菜 落 幕 墓 蒸 暮 血 行 衣 初 西 要 票 角 解 言 警 谷 欲 豆 象 赤 走 起 足 路 身 射 返 近 述 送 追 退 逆 迷 通 速 造 道 郡 部 配 酸 番 里 野 防 限 院 降 除 陛 障 集 雨 雪 青 非 悲 面 革 音 章 意 食 首 骨 高 Same simplification in both languages(Order: Kyūjitai / Traditional Chinese form - Shinjitai / Simplified Chinese form) 萬-万, 畫-画, 晝-昼, 蠶-蚕, 舊-旧, 臺-台, 爭-争, 來-来, 寫-写, 區-区, 醫-医, 點-点, 參-参, 號-号, 國-国, 聲-声, 條-条, 學-学, 寶-宝, 當-当, 黨-党, 屆-届, 屬-属, 擔-担, 數-数, 斷-断, 暑-暑, 橫-横, 殘-残, 淺-浅, 溫-温, 燈-灯, 狀-状, 將-将, 獨-独, 硏-研, 禮-礼, 社-社, 神-神, 祖-祖, 祝-祝, 福-福, 祕-秘, 署-署, 者-者, 朗-朗, 亂-乱, 辭-辞, 蟲-虫, 都-都, 靜-静, 麥-麦, 黃-黄, 會-会, 體-体, 裝-装 About 30 % of the simplified Chinese characters match the Japanese shinjitai. 
Simplification in Japan only(Order: Kyūjitai / Traditional - Shinjitai)豫-予, 冰-氷, 罐-缶, 圍-囲, 巢-巣, 乘-乗, 佛-仏, 假-仮, 舍-舎, 效-効, 增-増, 卷-巻, 德-徳, 拜-拝, 濱-浜, 藏-蔵, 黑-黒, 窗-窓, 缺-欠, 步-歩, 每-毎, 辨/瓣/辯-弁, 稻-稲 Simplification in PRC only (not exhaustive)(Order: Kyūjitai / Traditional Chinese form / Modern Japanese form - Simplified Chinese form) 業-业, 東-东, 島-岛, 劇-剧, 願-愿, 裏-里, 係-系, 個-个, 倉-仓, 側-侧, 備-备, 傷-伤, 億-亿, 優-优, 貧-贫, 興-兴, 軍-军, 創-创, 動-动, 勢-势, 協-协, 準-准, 幹-干, 員-员, 鳴-鸣, 園-园, 場-场, 報-报, 執-执, 奮-奋, 婦-妇, 孫-孙, 憲-宪, 導-导, 層-层, 災-灾, 順-顺, 帳-帐, 庫-库, 張-张, 後-后, 術-术, 復-复, 衛-卫, 態-态, 慣-惯, 採-采, 捨-舍, 揮-挥, 損-损, 漢-汉, 敵-敌, 時-时, 題-题, 極-极, 構-构, 標-标, 機-机, 樹-树, 橋-桥, 決-决, 減-减, 測-测, 湯-汤, 漁-渔, 潔-洁, 無-无, 熱-热, 愛-爱, 現-现, 節-节, 聖-圣, 穀-谷, 異-异, 務-务, 確-确, 種-种, 積-积, 殺-杀, 競-竞, 筆-笔, 築-筑, 簡-简, 約-约, 級-级, 紅-红, 紀-纪, 紙-纸, 納-纳, 純-纯, 組-组, 終-终, 細-细, 結-结, 給-给, 統-统, 絹-绢, 綿-绵, 線-线, 網-网, 緯-纬, 編-编, 縮-缩, 績-绩, 織-织, 買-买, 義-义, 養-养, 習-习, 職-职, 書-书, 脈-脉, 勝-胜, 腸-肠, 臨-临, 葉-叶, 夢-梦, 衆-众, 補-补, 製-制, 複-复, 見-见, 規-规, 親-亲, 計-计, 記-记, 討-讨, 訓-训, 設-设, 許-许, 訪-访, 評-评, 詞-词, 話-话, 試-试, 詩-诗, 誠-诚, 語-语, 認-认, 誤-误, 誌-志, 調-调, 論-论, 談-谈, 課-课, 誕-诞, 講-讲, 謝-谢, 識-识, 議-议, 護-护, 頭-头, 貝-贝, 負-负, 則-则, 財-财, 敗-败, 責-责, 貨-货, 費-费, 貸-贷, 視-视, 貴-贵, 貯-贮, 賀-贺, 貿-贸, 資-资, 賃-赁, 質-质, 車-车, 輪-轮, 輸-输, 農-农, 連-连, 進-进, 週-周, 過-过, 運-运, 達-达, 遊-游, 遠-远, 適-适, 選-选, 遺-遗, 郵-邮, 針-针, 銀-银, 銅-铜, 鋼-钢, 鏡-镜, 長-长, 門-门, 問-问, 閉-闭, 間-间, 開-开, 聞-闻, 閣-阁, 陸-陆, 隊-队, 階-阶, 陽-阳, 際-际, 難-难, 雲-云, 電-电, 頂-顶, 類-类, 預-预, 領-领, 額-额, 風-风, 飛-飞, 飲-饮, 飯-饭, 飼-饲, 館-馆, 馬-马, 魚-鱼, 鳥-鸟, 蕓-芸, 滬-沪 Different simplifications in both languages(Order: Kyūjitai / Traditional Chinese - Simplified Chinese - Shinjitai) 兩-两-両, 惡-恶-悪, 單-单-単, 嚴-严-厳, 傳-传-伝, 價-价-価, 兒-儿-児, 變-变-変, 圓-圆-円, 勞-劳-労, 壓-压-圧, 營-营-営, 團-团-団, 圖-图-図, 圍-围-囲, 賣-卖-売, 鹽-盐-塩, 處-处-処, 據-据-拠, 實-实-実, 專-专-専, 縣-县-県, 廣-广-広, 應-应-応, 歸-归-帰, 戰-战-戦, 擴-扩-拡, 擧-举-挙, 從-从-従, 戲-戏-戯, 對-对-対, 榮-荣-栄, 櫻-樱-桜, 檢-检-検, 樂-乐-楽, 樣-样-様, 權-权-権, 產-产-産, 氣-气-気, 濟-济-済, 齋-斋-斎, 滿-满-満, 帶-带-帯, 殼-壳-殻, 歷-历-歴, 莊-庄-荘, 歲-岁-歳, 肅-肃-粛, 龍-龙-竜, 龜-龟-亀, 靈-灵-霊, 麵-面-麺, 燒-烧-焼, 發-发-発, 顯-显-顕, 絲-丝-糸, 經-经-経, 繪-绘-絵, 續-续-続, 總-总-総, 練-练-練, 綠-绿-緑, 緣-缘-縁, 繩-绳-縄, 壞-坏-壊, 絕-绝-絶, 繼-继-継, 縱-纵-縦, 纖-纤-繊, 腦-脑-脳, 臟-脏-臓, 著-着-著, 藥-药-薬, 覺-觉-覚, 覽-览-覧, 頰-颊-頬, 觀-观-観, 譯-译-訳, 證-证-証, 讀-读-読, 說-说-説, 讓-让-譲, 豐-丰-豊, 贊-赞-賛, 轉-转-転, 輕-轻-軽, 邊-边-辺, 遞-递-逓, 遲-迟-遅, 鄕-乡-郷, 鐵-铁-鉄, 鑛/礦-矿-鉱, 錢-钱-銭, 鑒-鉴-鑑, 銳-锐-鋭, 錄-录-録, 藝-艺-芸, 鑄-铸-鋳, 鍊-炼-錬, 關-关-関, 險-险-険, 隱-隐-隠, 雜-杂-雑, 顏-颜-顔, 驛-驿-駅, 驅-驱-駆, 驗-验-験, 齒-齿-歯, 聽-听-聴, 廳-厅-庁, 擊-击-撃, 辯-辩-弁, 澀-涩-渋, 濾-滤-沪 Traditional characters that may cause problems displaying: Some of the traditional Kanji are not included in the Japanese font of Windows XP/2000, and only rectangles are shown. Downloading the Meiryo font from the Microsoft website (VistaFont_JPN.EXE) and installing it will solve this problem. Traditional characters that may cause problems displaying: Note that within the Jōyō Kanji there are 62 characters the old forms of which may cause problems displaying: Kyōiku Kanji (26): Grade 2 (2 Kanji): 海 社 Grade 3 (8 Kanji): 勉 暑 漢 神 福 練 者 都 Grade 4 (6 Kanji): 器 殺 祝 節 梅 類 Grade 5 (1 Kanji): 祖 Grade 6 (9 Kanji): 勤 穀 視 署 層 著 諸 難 朗Secondary-School Kanji (36): 欄 廊 虜 隆 塚 祥 侮 僧 免 卑 喝 嘆 塀 墨 悔 慨 憎 懲 敏 既 煮 碑 祉 祈 禍 突 繁 臭 褐 謁 謹 賓 贈 逸 響 頻These characters are Unicode CJK Unified Ideographs for which the old form (kyūjitai) and the new form (shinjitai) have been unified under the Unicode standard. Although the old and new forms are distinguished under the JIS X 0213 standard, the old forms map to Unicode CJK Compatibility Ideographs which are considered by Unicode to be canonically equivalent to the new forms and may not be distinguished by user agents. 
Therefore, depending on the user environment, it may not be possible to see the distinction between old and new forms of the characters. In particular, all Unicode normalization methods merge the old characters with the new ones. Different stroke orders in Chinese and Japanese: Some characters, whether simplified or not, look the same in Chinese and Japanese, but have different stroke orders. For example, in Japan, 必 is written with the top dot first, while the Traditional stroke order writes the 丿 first. In the characters 王 and 玉, the vertical stroke is the third stroke in Chinese, but the second stroke in Japanese. Different stroke orders in Chinese and Japanese: Taiwan, Hong Kong and Macau use Traditional characters, though with an altered stroke order.
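The unification problem described above is easy to demonstrate. The short Python sketch below uses the kyūjitai form of 神 (U+FA19, a CJK Compatibility Ideograph) and shows that every Unicode normalization form collapses it to the unified new-form code point U+795E; the choice of 神 is simply one convenient example from the problematic Grade 3 characters listed above:

```python
import unicodedata

kyujitai = "\uFA19"   # CJK COMPATIBILITY IDEOGRAPH-FA19, old form of 神
shinjitai = "\u795E"  # CJK UNIFIED IDEOGRAPH-795E, new form 神

for form in ("NFC", "NFD", "NFKC", "NFKD"):
    # Compatibility ideographs carry canonical decompositions to the
    # unified code points, so all four forms erase the old/new distinction.
    print(form, unicodedata.normalize(form, kyujitai) == shinjitai)  # True
```

This is why any pipeline that normalizes its input, as most search engines and databases do, will silently lose the kyūjitai/shinjitai distinction for these 62 characters.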
**Ligelizumab** Ligelizumab: Ligelizumab (INN; development code QGE031) is a humanized IgG1 monoclonal antibody designed for the treatment of severe asthma and chronic spontaneous urticaria. It is an anti-IgE antibody that binds to immunoglobulin E (the product of the IGHE gene) and acts as an immunomodulator. The drug was developed by Novartis Pharma AG. Research funded by Novartis Pharma concluded that ligelizumab was more effective in treating chronic spontaneous urticaria than omalizumab or placebo.
**Feminist psychology** Feminist psychology: Feminist psychology is a form of psychology centered on social structures and gender. Feminist psychology critiques historical psychological research as done from a male perspective, with the view that males are the norm. Feminist psychology is oriented around the values and principles of feminism. Feminist psychology: Gender issues can be broken down into many different categories and can be rather controversial. They can include the way people identify their gender (for example: male, female, genderqueer, transgender or cisgender), how they have been affected by societal structures relating to gender (gender hierarchy), the role of gender in the individual's life (such as stereotypical gender roles), and any other gender-related issues. Feminist psychology: The main objective behind this field of study is to understand the individual within the larger social and political aspects of society. Feminist psychology places a strong emphasis on women's rights. Psychoanalysis took shape as a clinical or therapeutic method, feminism as a political strategy. History: Feminist psychoanalysis The term feminist psychology was originally coined by Karen Horney. In her book Feminine Psychology, a collection of articles Horney wrote on the subject from 1922 to 1937, she addresses previously held beliefs about women, relationships, and the effect of society on female psychology. History: Functionalism, Darwinism and the psychology of women Early psychological research offers very little in the way of a psychology of women. Many women did not fight against oppression because they did not realize they were oppressed in the first place. Once the functionalist movement came about in the United States, academic psychology's study of sex differences and a prototypical psychology of woman were developed. https://psycnet.apa.org/journals/amp/30/7/739.pdf Anti-feminism after WWII In 1942 Edward Strecker made "mom-ism" an official pathological syndrome under the APA. Strecker believed that the country was under threat because mothers were not emotionally disconnecting from their children at a young enough age, and that the matriarchy was making young men weak and losing their "man power". This fueled the anti-feminist movement: women were seen as needing psychotherapy to treat their supposed mental illness and to prevent the further spread of maternalism. On this view, the psychological damage to the family would be severe if a woman chose a career to satisfy her own needs rather than the feminine domestic role assigned to her by society. A woman's happiness was considered unimportant; she was expected to follow her role. Women having independent thoughts and a desire to explore their options was seen as a grave threat to gender norms, supposedly producing masculine women and feminized men, confounding the nation's youth and dooming their future. Constantinople and Bem both argued that men and women possess both masculinity and femininity, and that possessing both, known as psychological androgyny, is not in itself a condition to be psychologically fixed or treated. Gender research in the 1960s and 1970s: Esther Greenglass states that in 1972 the field of psychology was still male-dominated and women were largely excluded. The use of the word women in conjunction with psychology was effectively forbidden, because men refused to be excluded from the narrative. In her experience teaching classes and working as an assistant professor, such courses had to be framed as being about human beings or gender.
Unger's paper "Toward a Redefinition of Sex and Gender" argued that the use of the term gender marks the separation of biological sex from psychological sex. The psychology of women is feminist in that it holds that women are different from men and that women's behavior cannot be understood outside of its context. Feminists in turn compelled psychoanalysts to consider the implications of one of Freud's own, most uncompromising propositions: "that human beings consist of men and women and that this distinction is the most significant one that exists." In Liberating Minds: Consciousness-Raising as a Bridge Between Feminism and Psychology in 1970s Canada, Nora Ruck leads with, "U.S. radical feminist Irene Peslikis warned that equating women's liberation with individual therapy prevented women from truly understanding and fighting the roots of their oppression." Canada was one of the few countries with an academic category within psychology for feminism. Canadian feminists relied on CR (consciousness-raising) groups to build their movement. Ruck describes the process of these CR groups as "bridging the tensions" between the personal and the political. The development of CR as a political method in its own right is widely attributed to the New York-based radical feminist collective "Redstockings" (Echols, 1989). CR is also closely tied to radical feminism, which aims to weed out discrimination and segregation based on sex and, through a grassroots movement like socialist feminism, maintains that women's oppression is not a by-product of capitalist oppression but a "primary cause" (Koedt, 1968). Joining the workforce: Women were excluded from Freud's definition of mental health (the ability to love and to work) because women wanting jobs was attributed to a masculinity complex or envy of men. Between 1970 and 1980, the percentage of women working outside the home rose from 43% to 51% in the United States. Although women reported having difficulty juggling the roles of mother and provider, they found ways to be fulfilled apart from childbearing. Joining the workforce: Women continue to be a large percentage of the workforce in psychological positions. In 2013, 68.3 percent of the psychological positions in the United States were held by women, and as of 2019 it was 70 percent. This amounts to roughly 2.1 women in the workforce for every man, a drastic shift from Freud's earlier school of thought on women in the workforce (APA, 2013). These figures include semi-retired psychologists as well; women still outnumber men among active psychologists, though they make up a smaller percentage than men among semi-retired and retired psychologists (APA, 2013). Joining the workforce: The Committee on Women in Psychology (CWP) was founded in 1973 with the mission " 'to advance psychology as a science and a profession...' — by ensuring that women in all their diversity achieve equality within the psychological community and in the larger society..." (APA, 2017). There are also journals that focus on women in psychology, such as those published by SAGE, which is recognized by the APA (SAGE, 2017). SAGE journals publish articles about the mental health of women in the workforce and about what life is like for single mothers, all common topics in feminism (SAGE, 2017). These movements show a clear cultural shift away from Freud's original philosophy of mental health: women are not only included, but are part of every aspect of the psychology workforce.
The APA Leadership Institute for Women in Psychology emerged to support and empower women in psychological fields. Women such as Cynthia de las Fuentes are not only pushing for feminist psychology to become a more prominent topic, but are also researching why some might be moving away from feminism and, by extension, feminist psychology (APA, 2006). Organizations: Association for Women in Psychology (AWP) The Association for Women in Psychology (AWP) was created in 1969 in response to the American Psychological Association's apparent lack of involvement in the Women's Liberation Movement. The organization formed with the purpose of fighting for and raising awareness of feminist issues within the field of psychology. The association focused its efforts toward feminist representation in the APA and finally succeeded in 1973 with the establishment of APA Division 35 (the Society for the Psychology of Women). Organizations: Society for the Psychology of Women APA Division 35, the Society for the Psychology of Women, was established in 1973. It was created to provide a place for all people interested in the psychology of women to access information and resources in the field. The Society for the Psychology of Women works to incorporate feminist concerns into the teaching and practice of psychology. Division 35 also runs a number of committees, projects, and programs. Organizations: Section on Women and Psychology (SWAP) The Canadian Psychological Association (CPA) has a Section on Women and Psychology (SWAP), which is meant "to advance the status of women in psychology, promote equity for women in general, and to educate psychologists and the public on topics relevant to women and girls." SWAP supports projects such as Psychology's Feminist Voices. The Journal of Diversity in Higher Education reports that female psychologists are often considered less efficient because of their lower measured scientific productivity. Hence, women tend to be concentrated in lower-level positions compared with their male counterparts, even when they hold doctoral degrees. Organizations: "They did not show any acknowledgement or appreciation that there was a difference and that there was a need for it, and that was around the time that we were giving a course here interdisciplinary, not in psychology. I still didn't have a course here because they wouldn't let me do it. And the men pretty well called the shots when they told you, you can't do it, you just, you don't do it." (Greenglass, 2005). Organizations: The Psychology of Women Section (BPS) The Psychology of Women Section of the British Psychological Society was created in 1988 to draw together everyone with an interest in the psychology of women, to provide a forum to support research, teaching and professional practice, and to raise awareness of gender issues and gender inequality in psychology as a profession and as a practice. POWS is open to all members of the British Psychological Society. Current research: Emotion A major topic of study within feminist psychology is that of gender differences in emotion. In general, feminist psychologists view emotion as culturally controlled and state that the differences lie in the expression of emotion rather than in the actual experience. The way a person shows his or her emotions is defined by socially enforced display rules, which guide the acceptable forms of expression for particular people and feelings. Stereotypes of emotion view women as the more emotional sex.
However, feminist psychologists point out that women are only viewed as experiencing passive emotions such as sadness, happiness, fear, and surprise more strongly. Conversely, men are viewed as more likely to express emotions of a more dominant nature, such as anger. Feminist psychologists believe that men and women are socialized throughout their lifetimes to view and express emotions differently. From infancy, mothers use more facial expression when speaking to female babies and use more emotion words in conversation with them as they get older. Girls and boys are further socialized by peers: girls are rewarded for being sensitive and emotional, while boys are rewarded for dominance and a lack of most emotional expression. Psychologists have also found that women, overall, are more skilled at decoding emotion using non-verbal cues. These signals include facial expression, tone of voice, and posture. Studies have shown gender differences in decoding ability beginning as early as age 3½. The book Man and Woman, Boy and Girl looks at intersex patients in explaining why social factors are more important than biological factors in gender identity and gender roles, and brought nature-versus-nurture issues back into the spotlight (Money & Ehrhardt, 1972). Current research: Leadership Social scientists in many disciplines study aspects of the "glass ceiling effect", the invisible yet powerful barriers that prevent many women from moving beyond a certain level in the workplace and other public institutions. According to the U.S. Department of Labor, women in the United States comprised 47% of the workforce in 2010. However, only a small number of women hold high-level positions in corporations. Women constitute only 5% of Fortune 500 CEOs (in 2014), 19% of board members of S&P 500 companies (in 2014), and 26% of college presidents. In 2017, women comprised 19.1% of U.S. Representatives, 21% of U.S. Senators, and 8% of state governors, with similarly low percentages among state elected officials. Women of color have lower representation than white women. The U.S. lags behind other countries in gender parity in government representation; according to the Global Gender Gap Report of 2014, the U.S. ranked 33rd out of 49 so-called "high-income" countries, and 83rd out of the 137 countries surveyed. "Women affiliated with the American Academy of Psychoanalysis were among the first to pursue such subjects as women's fear of success and inclinations toward neurotic dependency. They acknowledged the cultural forces inhibiting women's progress in non-domestic realms, particularly the pressures inherent in a male-dominated society." Much scholarship focuses on structural features inhibiting women's progress in public spheres, rather than locating the source of the issue in women themselves. Current research: In addition, women experience a "sticky floor effect", which occurs when women have no job path or ladder to higher positions. When women have children, they encounter a roadblock called the maternal wall: women receive fewer desirable assignments and fewer opportunities for advancement after they have a child. The patriarchy labels women as "nourishing facilitators", supposedly not mentally strong enough to take part in the aggressive, male-dominated workforce without taking psychological and emotional hits.
When women begin working at a company, their advancement can be limited by not having a senior-level employee take an active role in the development and career planning of junior employees. There is a lack of female mentors to assist new female employees because there are fewer women than men in higher-level company positions. A woman with a male mentor may find it harder to build rapport and receive advice through out-of-work experiences, because men often bond over basketball or golf and typically exclude women from these endeavors. Other factors limiting leadership for women are cultural differences, stereotypes, and perceived threats. If women show a small amount of sensitivity, they are stereotyped as being overly emotional. Generally, employers do not accept sensitive, soft people as being able to tackle tough decisions or handle leadership roles. However, if a woman displays male traits she is portrayed as mean, butch, and aggressive. Women are viewed as less competent when they showcase "non-feminine" traits and are not taken seriously. Such women tend not to brag about their accomplishments and may feel guilty for going beyond stereotypes of feminine emotion and thought and adopting masculine traits in their jobs, just to be successful or to be treated as men's equals. Career women, whose professional status depends on the appropriation of masculine traits, frequently suffer from depression. Recent research has connected the concept of stereotype threat with girls' motivations to avoid success as an individual difference: girls might avoid participation in certain male-dominated fields due to real and perceived obstacles to success in those fields, although little has been conclusively proven (e.g., Spencer et al., 1999). Current research: Another factor leading to discrimination and stress is cultural difference between managers and workers. For example, if a manager is white and has an employee of color, stress may be created if they do not understand or respect each other. Without trust and respect, advancement is unlikely. Our depiction of gender identity is white and middle class. White women are described as intelligent, manipulative, and privileged by Black women, who are described as strong, determined, and having attitude (Burack, 2002). "There it is, White fear of Black anger", was written in Ladies Home Journal (Edwards 1998: 77). Regarding perceived threats at work, it is not a matter of sexual harassment or harassment in general; the threat is the possibility that women could take over. The more women there are at a place of employment, the more threatened a man may feel about his job security. In a study of 126 male managers who were asked to estimate the number of women working at their place of employment and whether they felt men were disadvantaged, the men who believed there were many women felt threatened about the security of their jobs (Beaton et al., 1996). Alice Eagly and Blair Johnson (1990) found small differences between men's and women's styles of leadership: women in power were seen as interpersonal and more democratic, whereas men were seen as task-oriented and more autocratic. In reality, men and women are equally effective in their styles of leadership. A study by Alice Eagly (Eagly, Karau, & Makhijani, 1995) found no overall differences in the effectiveness of male and female leaders in facilitating accomplishment of their group goals.
Current research: Violence Feminists argue that gender-based violence occurs frequently in the forms of domestic violence, sexual harassment, childhood sexual abuse, sexual assault, and rape. Violence towards women can be physical or psychological and is not limited by race, economic status, age, ethnicity, or location. Women can be abused by strangers, but most often the abuser is someone the woman knows. Violence can have both short- and long-term effects on women, and they react to the abuse in various ways. Some women express emotions such as fear, anxiety, and anger. Others choose to deny it occurred and conceal their feelings. Often, women blame themselves for what happened and try to justify that they somehow deserved it. Among victims of violence, psychological disorders such as post-traumatic stress disorder and depression are common. In addition to the psychological ramifications, many women also sustain physical injuries from the violence that require medical attention. Current research: Relational-cultural theory Relational-cultural theory is based on the work of Jean Baker Miller, whose book Toward a New Psychology of Women proposes that "growth-fostering relationships are a central human necessity and that disconnections are the source of psychological problems." Inspired by Betty Friedan's The Feminine Mystique and other feminist classics from the 1960s, relational-cultural theory proposes that "isolation is one of the most damaging human experiences and is best treated by reconnecting with other people", and that therapists should "foster an atmosphere of empathy and acceptance for the patient, even at the cost of the therapist's neutrality". The theory is based on clinical observations and sought to prove that "there was nothing wrong with women, but rather with the way modern culture viewed them". Current research: Transnational Feminist Psychology In 2008, Arnett pointed out that most articles in American Psychological Association journals were about US populations when U.S. citizens are only 5% of the world's population. He complained that psychologists had no basis for assuming psychological processes to be universal and generalizing research findings to the rest of the global population. In 2010, Henrich, Heine, and Norenzayan reported a systemic bias in conducting psychology studies with participants from WEIRD ("western, educated, industrialized, rich and democratic") societies. Although only one in eight people worldwide lives in a region that falls into the WEIRD classification, the researchers claimed that 60–90% of psychology studies are performed on participants from these areas. Arnett (2008), Altmaier and Hall (2008), and Morgan-Consoli et al. (2018) saw the Western bias in research and theory as a serious problem, considering that psychologists are increasingly applying psychological principles developed in W.E.I.R.D. regions in their research, clinical work, and consultation with populations around the world. Kurtis, Adams, Grabe, and Else-Quest coined the term transnational feminist psychology (also called transnational psychology). The term refers to an approach that applies the principles of transnational feminism, developed through interdisciplinary work in postcolonial and feminist studies, to the field of psychology to study, understand, and address the impact of colonization, imperialism, migration, and globalization on women around the world.
Kurtis and Adams proposed using these principles and a context-sensitive cultural psychology lens to reconsider, de-naturalize, and de-universalize psychological science. Grabe and Else-Quest also proposed the concept of "transnational intersectionality", which expands current conceptions of intersectionality by adding global forces to the analysis of how oppressive institutions are interconnected. Kurtis and Adams emphasized that people in the non-Western, "Majority World" (areas where the majority of the world's population lives) are important resources who can help counter Western biases and revise current theory to develop a more pluralistic psychological science. In 2015 a Summit was organized by Machizawa, Collins, and Rice to further develop "transnational psychology." Participants applied transnational psychological perspectives to research, assessment, interventions, migration, domestic violence, education, career, human trafficking, sexuality, pedagogy, and other topics in psychology. Feminist therapy: Feminist therapy is a type of therapy based on viewing individuals within their sociocultural context. The main idea behind this therapy is that the psychological problems of women and minorities are often a symptom of larger problems in the social structure in which they live. There is general agreement that women are more frequently diagnosed with internalizing disorders such as depression, anxiety, and eating disorders than men. Feminist therapists dispute earlier theories that this is a result of psychological weakness in women and instead view it as a result of encountering more stress because of sexist practices in our culture. A common misconception is that feminist therapists are only concerned with the mental health of women. While this is certainly a central component of feminist theory, feminist therapists are also sensitive to the impact of gender roles on individuals regardless of sex. Feminist therapy: Goldman located the connection between psychoanalysis and feminism in the recognition of sexuality as preeminent in the makeup of women as well as men. Freud found that men's ideology was forced onto women in order to sexually repress them, connecting the public and private spheres for the subjugation of women. The goal of feminist therapy is the empowerment of the client. Generally, therapists avoid giving specific diagnoses or labels and instead focus on problems within the context of living in a sexist culture. Clients are sometimes trained to be more assertive and encouraged to understand their problems with the intent of changing or challenging their circumstances. Feminist therapists view lack of power as a major issue in the psychology of women and minorities. Accordingly, the client-therapist relationship is meant to be as egalitarian as possible, with both sides communicating on equal ground and sharing experiences. Feminist therapy is different from other types of therapy in that it goes beyond the idea that men and women should be treated equally in the therapeutic relationship. Feminist therapy incorporates political values to a greater extent than many other types of therapy. Also, feminist therapy encourages social change as well as personal change in order to improve the psychological state of the client and society. Feminist therapy: Issues with traditional therapies Gender biases Many traditional therapies assume that women should follow sex roles in order to be mentally healthy.
They believe gender differences are biologically based and encourage female clients to be submissive, expressive, and nurturant in order to achieve fulfillment. Psychotherapy is a male-dominated practice that supports women's adjustment to stereotypical gender roles instead of women's liberation. This may be done unconsciously by the therapist – for example, they may encourage a female client to be a nurse when they would have encouraged a male client of the same abilities to be a doctor – and there is the risk that the goals and outcomes of therapy will be evaluated differently in accordance with the therapist's beliefs and values. Inequality between the sexes and restrictions on sex roles are perpetuated by evolutionary psychology, but we could understand the role of gender in scientific communities by using feminist research strategies and admitting to gender bias (Fehr, 2012). Feminist therapy: Androcentrism Traditional therapies are based on the assumption that being male is the norm. Male traits are seen as the default, and stereotypical male traits are seen as more highly valued. Men are considered the standard of comparison when comparing gender differences, with feminine traits viewed as a deviation from the norm and a deficiency on the part of women. Psychological theories of female development were written by men who were completely uninformed about women's actual experiences and the conditions under which they lived. Feminist therapy: Intrapsychic assumptions Traditional therapies place little emphasis on sociopolitical influences, focusing instead on the client's internal functioning. This can lead therapists to blame clients for their symptoms, even if the client may in fact be coping admirably in a difficult and oppressive situation. Another possible issue can arise if therapists pathologize normal responses to oppressive environments. Feminist therapy: Principles of empowerment The personal is political This principle stems from the belief that psychological symptoms are caused by the environment. The goal of the therapist is to separate the external from the internal so the client can become aware of the socialization and oppression they have experienced, and attribute their problems to the appropriate causes. The feminist stance is largely marginalized and seen as standing outside mainstream psychiatry, and there is a power-based distribution of knowledge that gives therapists the ability to label women's disorders without knowing their lived experiences. Therapists do not view their clients' cognitions or behaviors as maladaptive – indeed, symptoms of depression or post-traumatic stress disorder are often considered to be the normal, rational response to oppression and discrimination. Feminist therapy: Egalitarian relationships Feminist therapists consider power inequalities to be a major contributing factor to the struggles of women, and as such criticize the traditional therapist role as an authority figure. Feminist therapists believe interpersonal relationships should be based in equality, and view the client as the "expert" in their own experiences.
Therapists emphasize collaboration, and use techniques such as self-disclosure to reduce the power differential. Feminist therapy: Value the female perspective The goal of feminist therapy is to re-value feminine characteristics and perspectives. Often, women are criticized for breaking gender norms while simultaneously being devalued for acting feminine. In order to break this double bind, therapists encourage women to value the female perspective and to define themselves and their roles for themselves. In doing so, clients can value their own characteristics, bond with other women, and embrace traits that had previously been discouraged. Feminist therapy: Techniques Sex role analysis One component of feminist therapy involves a critique of the cultural conditioning that produces and maintains socially biased structures. From birth, women are taught which behaviors are appropriate, and face sanctions if they fail to conform to these standards. These gender stereotypes are taught explicitly or implicitly by the family, media, school, and the workplace, and lead to gender-related belief systems and self-imposed expectations. Before women can be free of these expectations, they need to gain an understanding of the social systems that molded and encouraged these gender stereotypes, and of how this system impacted their mental health. First, women work to identify the gendered messages they have received, as well as the consequences. Then, women explore how these messages have been internalized, and decide which rules they would like to follow and which behaviors they would prefer to change. Feminist therapy: Power analysis Power systems are organized groups with legitimized status, sanctioned by custom or law, that have the power to set the standards for society. In Western society, women are expected to conform to power systems that place them as submissive and inferior to men. Types of power include the legal, physical, financial, and institutional ability to exert change. Often, men control direct power via concrete resources, while women are left to use indirect means and interpersonal resources. Also, sex roles and institutionalized sexism play a role in limiting the power women have. Power analysis is the technique used to examine the power differential between women and men, and to empower women to challenge the interpersonal and institutional inequalities they face. Feminist therapy: Assertiveness training Assertiveness has traditionally been associated with masculinity, which may have led women to feel the need to be more passive in their interactions with others. Feminist therapists work to help women distinguish assertive behaviors from passive or aggressive ones, overcome beliefs that tell women they cannot be assertive, and rehearse assertiveness skills through role play. Studies on the effects of assertiveness training on women have shown increases in self-esteem and confidence after training was complete. Feminist therapy: Application to other theories Cognitive-behavioral therapy The biggest feminist critique of cognitive-behavioral therapy is that the theory fails to focus on how behaviors are learned from society (NetCE, 2014). Often, the focus is on encouraging women to change their "maladaptive" responses and conform to normative standards.
By putting the onus on the woman to change her thoughts and behaviors, instead of changing the environmental factors that give rise to the problems, the theory fails to question the social norms that condone the oppression of women. Despite this, feminist therapists do use cognitive-behavioral techniques to help women change their beliefs and behaviors, in particular techniques such as sex-role analysis or assertiveness training (NetCE, 2014). Feminist therapy: Psychoanalytic therapy Many psychoanalytic concepts are considered by feminist therapists to be sexist and culturally bound (NetCE, 2014). However, feminist psychoanalysis adapts many of the ideas of traditional psychotherapy, including the focus on early childhood experiences and the idea of transference. Specifically, therapists serve as a mother figure and help clients connect emotionally with others while maintaining an individuated sense of self (NetCE, 2014). Feminist therapy: Family systems therapy The main critique of family systems therapy is its endorsement of power imbalances and traditional gender roles. Family systems therapists often respond to men and women differently, for example by placing more importance on the man's career or placing the responsibility for childcare and housework on the mother (Braverman, 1988). Feminist therapists strive to make the discussion of gender roles explicit in therapy, as well as focusing on the needs of, and empowering, the woman in her relationship (Braverman, 1988). Therapists help couples examine how gender-role beliefs and power dynamics lead to conflict. The focus is on encouraging more egalitarian relationships and affirming the woman's experiences (NetCE, 2014). Feminist therapy: Core issues covered in therapy Rape/domestic violence A feminist approach to dealing with rape or domestic abuse is focused on empowerment. Therapists help clients analyze societal messages about rape or domestic abuse that encourage a victim-blaming attitude, and try to help clients get past shame, guilt, and self-blame. Often, women do not know the true definitions of abuse or rape, and do not immediately identify themselves as victims. Survivors often face negative reactions from others that lead to re-victimization when trying to seek help, so therapists can help the woman navigate the medical and legal services if she wishes. At all times, although safety is the main concern, the therapist empowers the woman to explore her options and make her own decisions (for example, to leave the relationship or stay following an attack). It is emphasized that any symptoms are in fact normal responses to the traumatic event, and the woman is not pathologized. Rape and domestic violence are not viewed as things one simply recovers from, but as experiences that one can integrate into one's life story as one rebuilds one's self-esteem and self-confidence. Feminist therapy: Career counseling Occupational choice is a main theme in feminist counseling. Women are more likely to earn less than men, and are overrepresented in lower-status occupations. Several factors influence this career trajectory, including gender-role stereotyping of which jobs are appropriate for men and women. Women are often pointed towards nurturing jobs, while leadership jobs are reserved for men. Institutionalized sexism in the educational system often encourages girls to study traditionally feminine subjects while discouraging them from studying math and science.
Discriminatory hiring practices also reflect the attitude that men should be the breadwinners and that women are a riskier choice because their work will be disrupted once they have children. These societal messages often lead to internalized negative messages, including lower self-confidence and self-esteem, lower levels of assertiveness and willingness to negotiate, and the impostor syndrome, in which women believe they do not deserve success and are merely lucky. When women do seek nontraditional employment, they are placed in a double bind, expected to be competent at their jobs while simultaneously being feminine. Especially in male-dominated fields, trying to be both competent and successful as a woman is difficult. Feminist therapists: Feminist therapists work with women seeking counseling, as well as with men, to help alleviate a variety of mental health concerns. Feminist therapists have an interest in gender and in how multiple social identities can impact an individual's functioning. Psychologists or therapists who identify with feminism (the belief that women and men are equals) and/or with feminist psychological theory may call themselves feminist therapists. Currently, there are not many postdoctoral training programs in feminist psychology, but models for this training are being developed and modified so that institutions can start offering them. Most of this training is modeled around gender-fair counseling techniques.
**Thermal ionization** Thermal ionization: Thermal ionization, also known as surface ionization or contact ionization, is a physical process whereby atoms are desorbed from a hot surface and ionized in the process. Thermal ionization is used to make simple ion sources, for mass spectrometry and for generating ion beams. Thermal ionization has seen extensive use in determining atomic weights, in addition to being used in many geological and nuclear applications. Physics: The likelihood of ionization is a function of the filament temperature, the work function of the filament substrate and the ionization energy of the element. This is summarised in the Saha-Langmuir equation:

$$\frac{n_+}{n_0} = \frac{g_+}{g_0}\exp\left(\frac{W - \Delta E_I}{kT}\right)$$

where n_+/n_0 is the ratio of ion number density to neutral number density, g_+/g_0 is the ratio of the statistical weights (degeneracies) of the ionic (g_+) and neutral (g_0) states, W is the work function of the surface, ΔE_I is the ionization energy of the desorbed element, k is Boltzmann's constant, and T is the surface temperature. Negative ionization can also occur, for elements with a large electron affinity ΔE_A against a surface of low work function. Thermal ionization mass spectrometry: One application of thermal ionization is thermal ionization mass spectrometry (TIMS). In thermal ionization mass spectrometry, a chemically purified material is placed onto a filament, which is then heated to high temperatures to cause some of the material to be ionized as it is thermally desorbed (boiled off) from the hot filament. Filaments are generally flat pieces of metal around 1–2 mm (0.039–0.079 in) wide and 0.1 mm (0.0039 in) thick, bent into an upside-down U shape and attached to two contacts that supply a current. Thermal ionization mass spectrometry: This method is widely used in radiometric dating, where the sample is ionized under vacuum. The ions produced at the filament are focused into an ion beam and then passed through a magnetic field to separate them by mass. The relative abundances of different isotopes can then be measured, yielding isotope ratios. Thermal ionization mass spectrometry: When these isotope ratios are measured by TIMS, mass-dependent fractionation occurs as species are emitted by the hot filament. Fractionation occurs due to the excitation of the sample and must therefore be corrected for accurate measurement of the isotope ratio. There are several advantages of the TIMS method: it has a simple design, is less expensive than other mass spectrometers, and produces stable ion emissions. It requires a stable power supply, and is suitable for species with a low ionization energy, such as strontium and lead. Thermal ionization mass spectrometry: The disadvantages of this method stem from the maximum temperature achievable in thermal ionization. The hot filament reaches a temperature of less than 2,500 °C (2,770 K; 4,530 °F), leading to the inability to create atomic ions of species with a high ionization energy, such as osmium and tungsten. Although the TIMS method can create molecular ions instead in this case, species with high ionization energy can be analyzed more effectively with MC-ICP-MS.
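As a rough numerical illustration of the Saha-Langmuir equation, the sketch below evaluates the ion-to-neutral ratio for the textbook case of cesium desorbing from a hot tungsten filament. The inputs (a tungsten work function of roughly 4.5 eV, a cesium first ionization energy of 3.89 eV, a statistical-weight ratio g+/g0 of 1/2, and the chosen temperature) are illustrative values, not figures taken from this article:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def saha_langmuir(work_function_eV, ionization_energy_eV, g_ratio, temp_K):
    """Ion-to-neutral ratio n+/n0 for positive surface ionization."""
    return g_ratio * math.exp((work_function_eV - ionization_energy_eV) / (K_B * temp_K))

# Cesium on tungsten: W ~ 4.5 eV exceeds E_I = 3.89 eV, so ionization is efficient.
ratio = saha_langmuir(4.5, 3.89, 0.5, 1500.0)
print(f"n+/n0 at 1500 K: {ratio:.0f}")  # on the order of tens of ions per neutral
```

Because W exceeds ΔE_I here, the exponent is positive and most desorbed cesium atoms leave the surface as ions; for a high-ionization-energy element such as tungsten or osmium, the same formula yields a vanishingly small ratio, which is exactly the TIMS limitation noted above.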
**Offset dish antenna** Offset dish antenna: An offset dish antenna or off-axis dish antenna is a type of parabolic antenna. It is so called because the antenna feed is offset to the side of the reflector, in contrast to the common "front-feed" parabolic antenna where the feed antenna is suspended in front of the dish, on its axis. As in a front-fed parabolic dish, the feed is located at the focal point of the reflector, but the reflector is an asymmetric segment of a paraboloid, so the focus is located to the side. Offset dish antenna: The purpose of this design is to move the feed antenna and its supports out of the path of the incoming radio waves. In an ordinary front-fed dish antenna, the feed structure and its supports are located in the path of the incoming beam of radio waves, partially obstructing them, casting a "shadow" on the dish, reducing the radio power received. In technical terms this reduces the aperture efficiency of the antenna, reducing its gain. In the offset design, the feed is positioned outside the area of the beam, usually below it on a boom sticking out from the bottom edge of the dish. The beam axis of the antenna, the axis of the incoming or outgoing radio waves, is skewed at an angle to the plane of the dish mouth. The design is most widely used for small parabolic antennas or "mini-dishes", such as common Ku band home satellite television dishes, where the feed structure is large enough in relation to the dish to block a significant proportion of the signal. Another application is on satellites, particularly the direct broadcast satellites which use parabolic dishes to beam television signals to homes on Earth. Because of the limited transmitter power provided by their solar cells, satellite antennas must function as efficiently as possible. The offset design is also widely used in radar antennas. These must collect as much signal as possible in order to detect faint return signals from faraway targets. Offset dish antenna: Offset dish antennas are more difficult to design than front-fed antennas because the dish is an asymmetric segment of a paraboloid with different curvatures in the two axes. Before the 1970s offset designs were mostly limited to radar antennas, which required asymmetric reflectors anyway to create shaped beams. The advent in the 1970s of computer design tools which could easily calculate the radiation pattern of offset dishes has removed this limitation, and efficient offset designs are being used more and more widely in recent years.
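To make the geometry concrete, the following sketch treats the reflector as a section of a parent paraboloid z = y²/(4F) with its focus at (0, F), and computes the angle between the dish's beam axis (parallel to the paraboloid's axis) and the direction from the center of the offset section to the feed. The focal length and rim heights are invented numbers chosen only for illustration, not taken from any real antenna design:

```python
import math

def offset_feed_angle_deg(focal_length, y_low, y_high):
    """Angle between the beam axis and the feed direction, as seen from the
    center of an offset section of the parent paraboloid z = y**2 / (4*F)."""
    y_c = (y_low + y_high) / 2            # height of the section's center
    z_c = y_c ** 2 / (4 * focal_length)   # depth of the surface at that height
    # The feed sits at the parent focus (y=0, z=F); the beam axis is parallel to z.
    return math.degrees(math.atan2(y_c, focal_length - z_c))

# Hypothetical 60 cm offset dish cut from a paraboloid with F = 0.45 m,
# spanning 0.05 m to 0.65 m above the parent axis:
print(f"{offset_feed_angle_deg(0.45, 0.05, 0.65):.1f} degrees")  # ~42.5
```

The nonzero result is the skew described above: because the section is cut entirely to one side of the parent axis, the feed ends up below the beam rather than in it.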
**Sparkle 3 Genesis** Sparkle 3 Genesis: Sparkle 3 Genesis (stylized as The Sparkle³ genesis) is an arcade video game. It is the second title in Forever Entertainment's Sparkle series of video games, the successor of the 2011 video game Sparkle 2 Evo and the predecessor of the 2016 video game Sparkle Zero. Development: Sparkle 3 Genesis was developed by Madman Theory Games in collaboration with Plastic Games and published by Polish video game development studio Forever Entertainment on April 24, 2015 for Microsoft Windows, macOS, and Linux. The game was ported to Nintendo Switch and released on March 15, 2018 in the West and on September 26, 2019 in Japan. Gameplay: The gameplay of Sparkle 3 Genesis is very similar to that of its predecessor. In the game, the player controls the title creature, a microorganism, as it swims through a fluid environment. It can eat food, which influences how it will evolve, and attack enemies with its teeth. The more it eats, the further it evolves. The goal of the game is to evolve as far as possible. New compared to its predecessor are missions the player can undertake. Reception: Sparkle 3 Genesis received average reviews, with critics faulting its simplicity and praising its graphics. German online video game magazine ntower gave the game 5 out of 10 points, and wrote: "[...] Das Gameplay ist extrem simpel gehalten, im Gegensatz zum Vorgänger gibt es jedoch immerhin kleinere Missionen. Visuell ist Sparkle 3 Genesis recht ansprechend und dient vor allem Dingen einem Zweck: Sich zu entspannen. [...]" ("[...] The gameplay is kept extremely simple; in contrast to the predecessor, however, there are at least small missions. Visually, Sparkle 3 Genesis is quite appealing and serves one purpose above all: to relax. [...]"). Online video game magazine switchplayer.net gave the game 2.5 out of 5 stars, and wrote: "[The game] is a kind of contemplative experience that puts you in control of an ever-evolving creature. Unfortunately, it lacks variety and sounds more like a mini-game stretched out to become longer and, consequently, tedious."
**Food Creatures** Food Creatures: Wrigley's Food Creatures are cute little characters that appear in Wrigley's Orbit, Extra, Freedent and Excel advertisements. The advertising campaign started in 2007. List: The 24 Food Creatures are: Doughnut, Garlic, Onion, Coffee, Cigarette, Banana, Pizza, Cookie, Soft Drink, Pop-Corn, Chocolate, Raspberry, Sushi, Sandwich, Chip Bag, Wine, Toast, Tea, Salsa, Cereal Box, Broccoli, Chicken Leg, Pepper, Sausage.
**Link 4** Link 4: Link 4 is a non-secure data link used for providing vector commands to USAF and other NATO fighter aircraft. It is a netted, time-division link operating in the UHF band at 5,000 bits per second. There are two separate "Link 4s": Link 4A and Link 4C. Link 4A (TADIL C) is one of several Tactical Data Links now in operation in the United States Armed Services and the forces of the North Atlantic Treaty Organization (NATO). Link 4A plays an important role by providing digital surface-to-air, air-to-surface, and air-to-air tactical communications. Originally designated Link 4, this link was designed to replace voice communications for the control of tactical aircraft. The use of Link 4 has since been expanded to include communication of digital data between surface and airborne platforms. First installed in the late 1950s, Link 4A has achieved a reputation for being reliable. But Link 4A's transmissions are not secure, nor are they jam-resistant. However, Link 4A is easy to operate and maintain without serious or long-term connectivity problems. Link 4C is a fighter-to-fighter data link which is intended to complement Link 4A, although the two links do not communicate directly with each other. Link 4C uses F-series messages and provides some measure of ECM resistance. Link 4C is fitted to the F-14 only, and the F-14 cannot communicate on Link 4A and 4C simultaneously. Up to four fighters may participate in a single Link 4C net. It is planned that Link 16 will assume Link 4A's role in AIC and ATC operations and Link 4C's role in fighter-to-fighter operations. However, Link 16 is not currently capable of replacing Link 4A's ACLS function, and it is likely that controlled aircraft will remain equipped with Link 4A to perform carrier landings. Message standards are defined in STANAG 5504, while standard operating procedures are laid down in ADatP 4.
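The one hard number given above, the 5,000 bit/s channel rate, already allows a back-of-the-envelope estimate of message airtime. In the sketch below the 70-bit message size is a made-up placeholder; actual Link 4A message formats are defined in STANAG 5504 and are not reproduced here:

```python
LINK_4A_RATE_BPS = 5000  # channel rate quoted for Link 4A

def airtime_ms(message_bits: int) -> float:
    """Time on air for one message, ignoring guard and turnaround times."""
    return 1000.0 * message_bits / LINK_4A_RATE_BPS

# Hypothetical 70-bit control message:
print(f"{airtime_ms(70):.0f} ms per message")  # 14 ms
```

Short airtimes like this are what make a netted, time-division scheme workable: many controlled aircraft can share the same UHF channel within each second.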
**Cliff stabilization** Cliff stabilization: Cliff stabilization is a coastal management erosion-control technique, most suitable for softer or less stable cliffs. Generally speaking, cliffs are stabilized through dewatering (drainage of excess rainwater to reduce waterlogging) or anchoring (the use of terracing, planting, wiring or concrete supports to hold the cliff in place).
**Vocal harmony** Vocal harmony: Vocal harmony is a style of vocal music in which a consonant note or notes are sung simultaneously with a main melody in a predominantly homophonic texture. Vocal harmonies are used in many subgenres of European art music, including Classical choral music and opera, and in popular styles from many Western cultures, ranging from folk songs and musical theater pieces to rock ballads. In the simplest style of vocal harmony, the main vocal melody is supported by a single backup vocal line, at a pitch either above or below the main vocal line, often in thirds or sixths that fit in with the chord progression used in the song. In more complex vocal harmony arrangements, different backup singers may sing two or even three other notes at the same time as each of the main melody notes, mostly consonant, pleasing-sounding thirds, sixths, and fifths (although dissonant notes may be used as short passing notes). In art music: Vocal harmonies have been an important part of Western art music since the Renaissance-era introduction of Mass melodies harmonized in sweet thirds and sixths. With the rise of the Lutheran church's chorale hymn-singing style, congregations sang hymns arranged in four- or five-part vocal harmony. In the Romantic era of music during the 1800s, vocal harmonization became more complex, and arrangers began including more dissonant harmonies. Operas and choral music from the Romantic era used tense-sounding vocal harmonies with augmented and diminished intervals as an important tool for underscoring the drama of the music. With contemporary music from the 1900s and 2000s, composers made increasingly difficult demands on choirs singing in vocal harmony, such as instructions to sing microtonal notes or make percussive sounds. In popular music: To sing vocal harmony in a pop or rock context, backup singers need to be able to adjust the pitch of their notes so that they are in tune with the pitch of the lead vocalist and the band's instruments. As well, the rhythm of the backup harmony parts has to be in time with the lead singer and the rhythm section. While some bands use relatively simple harmony vocals, with long, slow-moving vocal harmony notes supporting the vocal lead during the chorus sections, other bands make the backup singers into more equal partners of the main vocalist. In more vocally oriented bands, backup singers may have to sing complex parts which demand a vocal agility and sensitivity equal to that of the main vocal line. Usually, pop and rock bands use harmony vocals while the rest of the band is playing; however, as an effect, some rock and pop harmony vocals are done a cappella, without instrumental accompaniment. This device became widely used in the end-chorus sections of 1980s- and 1990s-era hard rock and heavy metal ballads, as well as in horror punk (which cites influence from both heavy metal and doo-wop). In popular music: Other roles While some bands use backup singers who only sing when they are on stage, it is common for backup singers to have other roles while they are on stage. In many rock and metal bands, the musicians doing backup vocals also play instruments, such as keyboards, rhythm guitar or drums. In Latin or Afro-Cuban groups, backup singers may play percussion instruments or shakers while singing. In some pop and hip-hop groups and in musical theater, the backup singers may be required to perform elaborately choreographed dance routines while they sing through headset microphones.
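The "harmony in thirds" idea from the opening paragraph can be mechanized in a few lines. The sketch below is a toy illustration (the melody and key are invented for the example): each note of a C-major melody is shifted up two steps of the diatonic scale, which produces a major or minor third depending on the scale degree:

```python
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def diatonic_third_above(melody):
    """Harmonize each note two scale steps higher (a diatonic third)."""
    return [C_MAJOR[(C_MAJOR.index(note) + 2) % 7] for note in melody]

melody = ["E", "D", "C", "D", "E", "E", "E"]  # opening of "Mary Had a Little Lamb"
print(diatonic_third_above(melody))           # ['G', 'F', 'E', 'F', 'G', 'G', 'G']
```

Note the wrap-around at the top of the scale: a harmony note that passes B lands in the next octave, and a real arranger would further adjust individual notes so that each one fits the underlying chord progression, as the paragraph above describes.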
In popular music: Barbershop quartets One of the more complex styles of vocal harmony is the barbershop quartet style, in which the melody is harmonized in four parts. In a barbershop quartet arrangement, each voice has its own role: generally, the lead sings the melody, the tenor harmonizes above the melody, the bass sings the lowest harmonizing notes, and the baritone completes the chord, usually below the lead. The melody is not usually sung by the tenor or bass. Barbershop quartets are more likely than pop or rock bands to use dissonant, "tense"-sounding dominant seventh chords. In popular music: Doo-wop groups Doo-wop is a style of vocal-based rhythm and blues music which developed in African-American communities in the 1940s and achieved mainstream popularity in the US from the 1950s to the early 1960s. It used smooth, consonant vocal harmonies, with a number of singers imitating instruments while singing nonsense syllables. For example, in The Ravens' song "Count Every Star" (1950), the singers imitate the "doomph", "doomph" plucking sound of a double bass. Well-known hits include "In the Still of the Night (I Remember)" by The Five Satins and "Get a Job" by The Silhouettes, a hit in 1958. Doo-wop remained popular until just before the British Invasion of 1964.
**Pseudoneglect** Pseudoneglect: The term pseudoneglect refers to the natural tendency to shift spatial attention to the left. The concept was introduced and evidenced by experimental findings on the line bisection task. In this task, participants are asked to mark the middle of a horizontal line. On average, their deviations from the actual center of the line tend to be more to the left than to the right. In other visuo-spatial tasks, a similar bias toward the left hemifield is apparent. Pseudoneglect shows similarities to the impairments seen in patients with a medical condition called hemispatial neglect. However, the effects of pseudoneglect are marginal and mainly restricted to experimental settings in scientific labs. In 2020, archaeologists of the Collaborative Research Center CRC 1266 at Kiel University succeeded in demonstrating this behavior in prehistory. In the Linear Pottery culture, so-called longhouses were built. Near Vráble, in the southwest of Slovakia, three comparatively large settlement agglomerations with hundreds of such longhouses lie in direct proximity to each other. When a new house was built next to an old one, it was oriented to the existing house. The settlements in Vráble existed for about 300 years, as determined by the 14C data. During this period, a progressive counterclockwise rotation of the orientation of the houses from about 32° to 4° was observed. An examination of published and dated village plans from other Linear Pottery regions confirms that this counterclockwise rotation is a general Central European trend. This shows that whenever houses were to be oriented in a certain direction and parallel to each other, there was a perceptual error that caused a slight counterclockwise rotation. The researchers explain this as an unconscious but systematic leftward bias, as defined by the term pseudoneglect. Pseudoneglect: Pseudoneglect has been reported for left-handed as well as right-handed people. The bias to the left side is more pronounced in right-handers.
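As a concrete illustration of how the line bisection bias is quantified, the sketch below computes the mean signed deviation of a participant's marks from the true midpoint; the sample marks are invented for the example. Negative values indicate marks left of center, the direction pseudoneglect predicts:

```python
def bisection_bias_mm(marks_mm, line_length_mm):
    """Mean signed deviation from the midpoint (negative = leftward bias)."""
    center = line_length_mm / 2
    return sum(m - center for m in marks_mm) / len(marks_mm)

# Hypothetical marks (mm from the left end) on a 200 mm line:
marks = [97.0, 98.5, 99.0, 96.5, 100.5, 98.0]
print(f"mean bias: {bisection_bias_mm(marks, 200):+.2f} mm")  # -1.75 mm, i.e. leftward
```

Averaging over many trials in this way is what allows the small effect described above to be detected despite trial-to-trial noise.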
**Human Technology** Human Technology: Human Technology is an open-access peer-reviewed academic journal focusing on the interaction between people and technology. As of September 2021, the journal is published by the Centre of Sociological Research in Szczecin, Poland. Previously, the journal was co-published by the Agora Center and the University of Jyväskylä (2005-2016), and then by the Open Science Centre and the University of Jyväskylä (2017-2021). Initially, the journal was published biannually; it has appeared three times a year since 2018. Editors: Kristiina Korjonen-Kuusipuro (current) Adam Wojciechowski (current) Jukka Jouhki (2018-2021) Pertti Hurme (2015-2017) Päivi Häkkinen (2012-2014) Pertti Saariluoma (2005-2011), founding editor Indexing: Human Technology is listed in the Directory of Open Access Journals (Lund University Libraries), and is cited and/or abstracted in various databases, including Scopus, PsycINFO (American Psychological Association), Ebsco and ProQuest.
**World's funniest joke** World's funniest joke: The "world's funniest joke" is a term used by Richard Wiseman of the University of Hertfordshire in 2002 to summarize one of the results of his research. For his experiment, named LaughLab, he created a website where people could rate and submit jokes. Purposes of the research included discovering the joke that had the widest appeal and understanding among different cultures, demographics and countries. The History Channel eventually hosted a special on the subject. The jokes: The winning joke, which was later found to be based on a 1951 Goon Show sketch by Spike Milligan, was submitted by Gurpal Gosal of Manchester: Two hunters are out in the woods when one of them collapses. He doesn't seem to be breathing and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, "My friend is dead! What can I do?" The operator says, "Calm down. I can help. First, let's make sure he's dead." There is a silence; then a gunshot is heard. Back on the phone, the guy says, "OK, now what?" Other findings: Researchers also included five computer-generated jokes, four of which fared rather poorly, but one was rated higher than one third of the human jokes: What kind of murderer has moral fiber? A cereal killer. The joke that was submitted to LaughLab the most times was: What's brown and sticky? A stick.
**CLSTN1** CLSTN1: Calsyntenin-1 is a protein that in humans is encoded by the CLSTN1 gene. Clinical relevance: Mutations in this gene have been shown to be associated with pathogenic mechanisms of Alzheimer's disease. Interactions: CLSTN1 has been shown to interact with APBA2 and amyloid precursor protein.
**Software quality** Software quality: In the context of software engineering, software quality refers to two related but distinct notions: Software's functional quality reflects how well it complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software or how it compares to competitors in the marketplace as a worthwhile product. It is the degree to which the correct software was produced. Software quality: Software structural quality refers to how it meets non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability. It is the degree to which the software works as needed. Many aspects of structural quality can be evaluated only statically, through the analysis of the software's inner structure, its source code (see Software metrics), at the unit level and at the system level (sometimes referred to as end-to-end testing), which is in effect how its architecture adheres to sound principles of software architecture outlined in a paper on the topic by the Object Management Group (OMG). However, some structural qualities, such as usability, can be assessed only dynamically (users or others acting on their behalf interact with the software or, at least, some prototype or partial implementation; even the interaction with a mock version made in cardboard represents a dynamic test, because such a version can be considered a prototype). Other aspects, such as reliability, might involve not only the software but also the underlying hardware; therefore, they can be assessed both statically and dynamically (stress test). Functional quality is typically assessed dynamically, but it is also possible to use static tests (such as software reviews). Historically, the structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126 and the subsequent ISO/IEC 25000 standard. Based on these models (see Models), the Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics needed for a piece of software to provide business value: Reliability, Efficiency, Security, Maintainability and (adequate) Size. Software quality measurement quantifies to what extent a software program or system rates along each of these five dimensions. An aggregated measure of software quality can be computed through a qualitative or a quantitative scoring scheme, or a mix of both, and then a weighting system reflecting the priorities. This view of software quality positioned on a linear continuum is supplemented by the analysis of "critical programming errors" that, under specific circumstances, can lead to catastrophic outages or performance degradations that make a given system unsuitable for use regardless of its rating based on aggregated measurements. Such programming errors found at the system level represent up to 90 percent of production issues, whilst at the unit level, even if far more numerous, programming errors account for less than 10 percent of production issues (see also Ninety–ninety rule). As a consequence, code quality without the context of the whole system, as W. Edwards Deming described it, has limited value.
To view, explore, analyze, and communicate software quality measurements, concepts and techniques of information visualization provide visual, interactive means that are useful, in particular, if several software quality measures have to be related to each other or to components of a software system. For example, software maps represent a specialized approach that "can express and combine information about software development, software quality, and system dynamics". Software quality also plays a role in the release phase of a software project. Specifically, the quality and establishment of the release processes (also patch processes) and configuration management are important parts of an overall software engineering process. Motivation: Software quality is motivated by at least two main perspectives: Risk management: Software failure has caused more than inconvenience. Software errors can cause human fatalities (see for example: List of software bugs). The causes have ranged from poorly designed user interfaces to direct programming errors; see for example the Boeing 737 case, the unintended acceleration cases, or the Therac-25 cases. This resulted in requirements for the development of some types of software, particularly and historically for software embedded in medical and other devices that regulate critical infrastructures: "[Engineers who write embedded software] see Java programs stalling for one third of a second to perform garbage collection and update the user interface, and they envision airplanes falling out of the sky." In the United States, within the Federal Aviation Administration (FAA), the FAA Aircraft Certification Service provides software programs, policy, guidance and training, with a focus on software and complex electronic hardware that has an effect on the airborne product (a "product" is an aircraft, an engine, or a propeller). Certification standards such as DO-178C, ISO 26262, IEC 62304, etc. provide guidance. Motivation: Cost management: As in any other field of engineering, a software product or service governed by good software quality costs less to maintain, is easier to understand, and can be changed more cost-effectively in response to pressing business needs. Industry data demonstrate that poor application structural quality in core business applications (such as enterprise resource planning (ERP), customer relationship management (CRM) or large transaction processing systems in financial services) results in cost and schedule overruns and creates waste in the form of rework (see Muda (Japanese term)). Moreover, poor structural quality is strongly correlated with high-impact business disruptions due to corrupted data, application outages, security breaches, and performance problems. CISQ reports estimate the cost of poor software quality at $2.08 trillion in 2020 and $2.84 trillion in 2018, while IBM's Cost of a Data Breach Report 2020 estimates the average global cost of a data breach at $3.86 million. Definitions: ISO defines software quality as the "capability of a software product to conform to requirements," while for others it can be synonymous with customer or value creation, or even defect level. ASQ: ASQ uses the following definition: Software quality describes the desirable attributes of software products. Two main approaches exist: defect management and quality attributes.
Definitions: NIST Software Assurance (SA) covers both the property and the process to achieve it: [Justifiable] confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner The planned and systematic set of activities that ensure that software life cycle processes and products conform to requirements, standards, and procedures PMI The Project Management Institute's PMBOK Guide "Software Extension" defines not "software quality" itself, but Software Quality Assurance (SQA) as "a continuous process that audits other software processes to ensure that those processes are being followed (includes for example a software quality management plan)," whereas Software Quality Control (SQC) means "taking care of applying methods, tools, techniques to ensure satisfaction of the work products towards quality requirements for a software under development or modification." Other general and historic The first definition of quality that history remembers is from Shewhart at the beginning of the 20th century: "There are two common aspects of quality: one of them has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality." Kitchenham and Pfleeger, further reporting the teachings of David Garvin, identify five different perspectives on quality: The transcendental perspective deals with the metaphysical aspect of quality. In this view of quality, it is "something toward which we strive as an ideal, but may never implement completely". It can hardly be defined, but is similar to what a federal judge once commented about obscenity: "I know it when I see it". Definitions: The user perspective is concerned with the appropriateness of the product for a given context of use. Whereas the transcendental view is ethereal, the user view is more concrete, grounded in the product characteristics that meet the user's needs. The manufacturing perspective represents quality as conformance to requirements. This aspect of quality is stressed by standards such as ISO 9001, which defines quality as "the degree to which a set of inherent characteristics fulfills requirements" (ISO/IEC 9001). The product perspective implies that quality can be appreciated by measuring the inherent characteristics of the product. Definitions: The final perspective of quality is value-based. This perspective recognizes that the different perspectives of quality may have different importance, or value, to various stakeholders. The problems inherent in attempts to define the quality of a product, almost any product, were stated by the master Walter A. Shewhart. The difficulty in defining quality is to translate future needs of the user into measurable characteristics, so that a product can be designed and turned out to give satisfaction at a price that the user will pay. This is not easy, and as soon as one feels fairly successful in the endeavor, he finds that the needs of the consumer have changed, competitors have moved in, etc. Definitions: Quality is a customer determination, not an engineer's determination, not a marketing determination, nor a general management determination.
It is based on the customer's actual experience with the product or service, measured against his or her requirements -- stated or unstated, conscious or merely sensed, technically operational or entirely subjective -- and always representing a moving target in a competitive market. Definitions: The word quality has multiple meanings. Two of these meanings dominate the use of the word: 1. Quality consists of those product features which meet the need of customers and thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies. Nevertheless, in a handbook such as this it is convenient to standardize on a short definition of the word quality as "fitness for use". Definitions: Tom DeMarco has proposed that "a product's quality is a function of how much it changes the world for the better." This can be interpreted as meaning that functional quality and user satisfaction are more important than structural quality in determining software quality. Definitions: Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." This definition stresses that quality is inherently subjective—different people will experience the quality of the same software differently. One strength of this definition is the questions it invites software teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?". Definitions: Other meanings and controversies One of the challenges in defining quality is that "everyone feels they understand it", and other definitions of software quality could be based on extending the various descriptions of the concept of quality used in business. Software quality also often gets mixed up with Quality Assurance, Problem Resolution Management, Quality Control, or DevOps. It overlaps with the aforementioned areas (see also PMI definitions), but is distinctive as it does not solely focus on testing but also on processes, management, improvements, assessments, etc. Measurement: Although the concepts presented in this section are applicable to both structural and functional software quality, measurement of the latter is essentially performed through testing [see main article: Software testing]. However, testing isn't enough: according to a study, individual programmers are less than 50% efficient at finding bugs in their own software, and most forms of testing are only 35% efficient. This makes it difficult to determine [software] quality. Measurement: Introduction Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means, or a mix of both. In both cases, for each desirable characteristic, there is a set of measurable attributes whose existence in a piece of software or system tends to be correlated and associated with this characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the Software Quality definition above. A minimal sketch of such an attribute count is given below.
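To make the portability attribute concrete, the sketch below counts uses of platform-specific names in Python source as a crude stand-in for "target-dependent statements". The particular set of names, and the choice of Python's ast module as the parser, are assumptions made for this illustration; they are not part of ISO 9126 or any other quality model.

```python
import ast

# Names treated as platform-specific; this list is an assumption made for
# the illustration, not taken from any quality standard.
TARGET_DEPENDENT = {"sys.platform", "os.name", "winreg", "msvcrt", "platform.system"}

def dotted_name(node: ast.AST) -> str:
    """Render Name/Attribute chains such as sys.platform as dotted strings."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        return f"{dotted_name(node.value)}.{node.attr}"
    return ""

def count_target_dependent(source: str) -> int:
    """Count occurrences of platform-specific names: a crude portability attribute."""
    tree = ast.parse(source)
    return sum(1 for node in ast.walk(tree) if dotted_name(node) in TARGET_DEPENDENT)

sample = "import sys\nif sys.platform == 'win32':\n    path = 'C:\\\\temp'\n"
print(count_target_dependent(sample))  # -> 1
```

A real portability metric would be defined per technology and calibrated against the quality model in use rather than a hand-picked name list.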
Measurement: The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions. Measurement: The dependence tree between software quality characteristics and their measurable attributes is represented in the diagram on the right, where each of the five characteristics that matter for the user (right) or owner of the business system depends on measurable attributes (left): Application Architecture Practices, Coding Practices, Application Complexity, Documentation, Portability, and Technical and Functional Volume. Correlations between programming errors and production defects reveal that basic code errors account for 92 percent of the total errors in the source code. These numerous code-level issues eventually account for only 10 percent of the defects in production. Bad software engineering practices at the architecture level account for only 8 percent of total defects, but consume over half the effort spent on fixing problems, and lead to 90 percent of the serious reliability, security, and efficiency issues in production. Measurement: Code-based analysis Many of the existing software measures count structural elements of the application that result from parsing the source code, such as individual instructions, tokens, control structures (complexity), and objects. Software quality measurement is about quantifying to what extent a system or software rates along these dimensions. The analysis can be performed using a qualitative or quantitative approach, or a mix of both, to provide an aggregate view [using for example weighted average(s) that reflect the relative importance of the factors being measured]; a minimal sketch of such an aggregation follows below. Measurement: This view of software quality on a linear continuum has to be supplemented by the identification of discrete Critical Programming Errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and myriad other problems that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known example of vulnerability is the Common Weakness Enumeration, a repository of vulnerabilities in source code that leave applications exposed to security breaches. Measurement: The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation, as displayed in the picture above. Thus, each characteristic is affected by attributes at numerous levels of abstraction in the application, and all of them must be included in calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. The layered approach to calculating characteristic measures displayed in the figure above was first proposed by Boehm and his colleagues at TRW (Boehm, 1978) and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application.
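The following sketch illustrates both ideas at once: it parses source code to count a few structural elements (functions, branch points, docstrings) and aggregates the normalized attribute scores into one weighted figure. The attribute set, normalizations, and weights are invented for the example; a real assessment would take them from a quality model such as ISO 25000 and use a full static analyzer rather than this toy parser.

```python
import ast

# Illustrative attribute weights reflecting priorities; the numbers are
# invented for this sketch, not taken from CISQ or ISO.
WEIGHTS = {"complexity": 0.5, "documentation": 0.3, "size": 0.2}

def measure(source: str) -> dict:
    """Parse the source and count a few structural elements."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    branches = [n for n in ast.walk(tree)
                if isinstance(n, (ast.If, ast.For, ast.While, ast.Try))]
    documented = [f for f in funcs if ast.get_docstring(f)]
    return {
        "complexity": len(branches) / max(len(funcs), 1),       # branch points per function
        "documentation": len(documented) / max(len(funcs), 1),  # docstring coverage
        "size": sum(1 for ln in source.splitlines() if ln.strip()),  # non-blank lines
    }

def quality_score(metrics: dict) -> float:
    """Normalize each attribute to [0, 1] and aggregate with the weights."""
    normalized = {
        "complexity": 1.0 / (1.0 + metrics["complexity"]),  # less branching scores higher
        "documentation": metrics["documentation"],          # already a ratio
        "size": 1.0 / (1.0 + metrics["size"] / 1000.0),     # very large units score lower
    }
    return sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS)

sample = 'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b\n'
m = measure(sample)
print(m, round(quality_score(m), 3))
```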
Measurement: Structural quality analysis and measurement are performed through the analysis of the source code, the architecture, the software framework, and the database schema, in relationship to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed by development tools, which are mostly concerned with implementation considerations and are crucial during debugging and testing activities. Measurement: Reliability The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application's reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation. Measurement: Assessing reliability requires checks of at least the following software engineering best practices and technical attributes: depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by the above list of best practices to ensure a better assessment of the reliability of the delivered software. Measurement: Efficiency As with reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practice, which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data. Measurement: Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes: Application Architecture Practices; appropriate interactions with expensive and/or remote resources; data access performance and data management; memory, network and disk space management; and compliance with Coding Practices (best coding practices). Security Software quality includes software security. Many security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting. These are well documented in lists maintained by CWE and the SEI/Computer Emergency Response Team (CERT) at Carnegie Mellon University. Assessing security requires at least checking the following software engineering best practices and technical attributes: implementation and management of a security-aware and hardening development process, e.g. the Security Development Lifecycle (Microsoft) or IBM's Secure Engineering Framework. Measurement: Secure Application Architecture Practices; multi-layer design compliance; security best practices (input validation, SQL injection, cross-site scripting, access control, etc.); secure and good programming practices; and error and exception handling. One of these practices, SQL injection avoidance, is illustrated in the sketch below.
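The sketch contrasts an injectable query built by string concatenation with a parameterized one, using Python's standard sqlite3 module. It is a minimal illustration of the practice, not a full security check; the table, column, and payload are made up for the demonstration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload is spliced into the SQL text, so the WHERE
# clause always evaluates to true and every row is returned.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # [('admin',)] -- data leaked

# Safe: a parameterized query treats the payload as plain data,
# so no row matches the literal string and nothing is returned.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```

The design point is that the database driver, not string formatting, should be responsible for quoting user-supplied values.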
Maintainability Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations of best practices in documentation, complexity-avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code and unorganized and difficult-to-read code. Assessing maintainability requires checking the following software engineering best practices and technical attributes: maintainability is closely related to Ward Cunningham's concept of technical debt, which is an expression of the costs resulting from a lack of maintainability. Reasons why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent, and often have their origin in developers' inability, lack of time and goals, their carelessness, and discrepancies between the creation cost of and benefits from documentation and, in particular, maintainable source code. Measurement: Size Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files, etc. There are essentially two types of software size to be measured: the technical size (footprint) and the functional size. There are several software technical sizing methods that have been widely described. The most common technical sizing method is the number of Lines of Code (#LOC) per technology, number of files, functions, classes, tables, etc., from which backfiring Function Points can be computed. The most common method for measuring functional size is function point analysis. Function point analysis measures the size of the software deliverable from a user's perspective. Function point sizing is done based on user requirements and provides an accurate representation of both size for the developer/estimator and value (functionality to be delivered), and reflects the business functionality being delivered to the customer. The method includes the identification and weighting of user-recognizable inputs, outputs and data stores. The size value is then available for use in conjunction with numerous measures to quantify and to evaluate software delivery and performance (development cost per function point; delivered defects per function point; function points per staff month). The function point analysis sizing standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life cycle and it is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology agnostic and can be used for comparative analysis across organizations and across industries. Measurement: Since the inception of Function Point Analysis, several variations have evolved and the family of functional sizing techniques has broadened to include such sizing measures as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story Points. However, Function Points has a history of statistical accuracy, and has been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the "currency" by which services are delivered and performance is measured. Measurement: One common limitation of the Function Point methodology is that it is a manual process; therefore, it can be labor-intensive and costly in large-scale initiatives such as application development or outsourcing engagements. A toy illustration of the technical-size side (counting lines of code) follows below.
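As a toy illustration of technical sizing, the following sketch counts physical and non-blank, non-comment lines of code per file in a source tree. Real #LOC tooling handles many more cases (multi-line strings, per-technology comment rules); the directory name and file extension here are placeholder assumptions.

```python
from pathlib import Path

def count_loc(path: Path) -> tuple[int, int]:
    """Return (physical lines, non-blank non-comment lines) for one file."""
    physical = logical = 0
    for line in path.read_text(errors="ignore").splitlines():
        physical += 1
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):  # crude comment rule for Python
            logical += 1
    return physical, logical

def size_report(root: str, ext: str = ".py") -> None:
    """Print per-file and total counts for every matching file under root."""
    total_phys = total_logical = 0
    for path in sorted(Path(root).rglob(f"*{ext}")):
        phys, logical = count_loc(path)
        total_phys += phys
        total_logical += logical
        print(f"{path}: {phys} physical / {logical} logical")
    print(f"TOTAL: {total_phys} physical / {total_logical} logical")

if __name__ == "__main__":
    size_report("src")  # 'src' is a placeholder for the project root
```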
This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality, focused on introducing a computable metrics standard for automating the measurement of software size, while the IFPUG keeps promoting a manual approach, as most of its activity relies on FP counter certifications. Measurement: CISQ defines sizing as estimating the size of software to support cost estimating, progress tracking or other related software project management activities. Two standards are used: Automated Function Points to measure the functional size of software, and Automated Enhancement Points to measure the size of both functional and non-functional code in one measure. Measurement: Identifying critical programming errors Critical Programming Errors are specific architectural and/or coding bad practices that result in the highest immediate or long-term business disruption risk. These are quite often technology-related and depend heavily on the context, business objectives and risks. Some may consider respect for naming conventions a minor matter, while others – those preparing the ground for a knowledge transfer, for example – will consider it absolutely critical. Measurement: Critical Programming Errors can also be classified per CISQ characteristic. A basic example: Reliability: avoid software patterns that will lead to unexpected behavior (uninitialized variables, null pointers, etc.); methods, procedures and functions doing Insert, Update, Delete, Create Table or Select must include error management; multi-threaded functions should be made thread safe (for instance, servlets or Struts action classes must not have instance/non-final static fields). Efficiency: ensure centralization of client requests (incoming and data) to reduce network traffic; avoid SQL queries that don't use an index against large tables in a loop. Security: avoid fields in servlet classes that are not final static; avoid data access without including error management; check control return codes and implement error handling mechanisms; ensure input validation to avoid cross-site scripting flaws or SQL injection flaws. Maintainability: deep inheritance trees and nesting should be avoided to improve comprehensibility; modules should be loosely coupled (fan-out, intermediaries) to avoid propagation of modifications; enforce homogeneous naming conventions. Operationalized quality models Newer proposals for quality models such as Squale and Quamoco propagate a direct integration of the definition of quality attributes and measurement. By breaking down quality attributes or even defining additional layers, the complex, abstract quality attributes (such as reliability or maintainability) become more manageable and measurable. Those quality models have been applied in industrial contexts but have not received widespread adoption. Trivia: "A science is as mature as its measurement tools." "I know it when I see it." "You cannot control what you cannot measure." (Tom DeMarco) "You cannot inspect quality into a product." (W. Edwards Deming) "The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten." (Anonymous) "If you don't start with a spec, every piece of code you write is a patch." (Leslie Lamport)
**Detection** Detection: In general, detection is the action of accessing information without specific cooperation from the sender. Detection: In the history of radio communications, the term "detector" was first used for a device that detected the simple presence or absence of a radio signal, since all communications were in Morse code. The term is still in use today to describe a component that extracts a particular signal from all of the electromagnetic waves present. Detection is usually based on the frequency of the carrier signal, as in the familiar frequencies of radio broadcasting, but it may also involve filtering a faint signal from noise, as in radio astronomy, or reconstructing a hidden signal, as in steganography. Detection: In optoelectronics, "detection" means converting a received optical input to an electrical output. For example, the light signal received through an optical fiber is converted to an electrical signal in a detector such as a photodiode. Detection: In steganography, the attempt to detect hidden signals in suspected carrier material is referred to as steganalysis. Steganalysis has an interesting difference from most other types of detection, in that it can often only determine the probability that a hidden message exists; this is in contrast to the detection of signals which are simply encrypted, as the ciphertext can often be identified with certainty, even if it cannot be decoded. In the military, detection refers to the special discipline of reconnaissance with the aim of recognizing the presence of an object in a location or ambiance. Detection: Finally, the art of detection, also known as following clues, is the work of a detective in attempting to reconstruct a sequence of events by identifying the relevant information in a situation.
**FMRI lie detection** FMRI lie detection: fMRI lie detection is a field of lie detection using functional magnetic resonance imaging (fMRI). fMRI looks to the central nervous system to compare the time and topography of activity in the brain for lie detection. While a polygraph detects anxiety-induced changes in activity in the peripheral nervous system, fMRI purportedly measures blood flow to areas of the brain involved in deception. History: Psychiatrist and scientific researcher Daniel Langleben was inspired to test lie detection while he was at Stanford University studying the effects of a drug on children with attention deficit disorder (ADD). He found that these children have a more difficult time inhibiting the truth. He postulated that lying requires increased brain activity compared to telling the truth, because the truth must be suppressed, essentially creating more work for the brain. In 2001, he published his first work on lie detection using a modified form of the Guilty Knowledge Test, which is sometimes used in polygraph tests. The subjects, right-handed male college students, were given a card and a Yes/No handheld clicker. They were told to lie to a computer asking questions while they underwent a brain scan, but only when the question would reveal their card. The subjects were given $20 for participating, and told they would receive more money if they deceived the computer; however, none did. His studies showed that the inferior and superior prefrontal and anterior cingulate gyri and the parietal cortex showed increased activity during deception. In 2002, he licensed his methods for lie detection to the No Lie MRI company located in San Diego, California. Working: As "Prospects of fMRI as a Lie Detector" states, fMRI uses electromagnets to create pulse sequences in the cells of the brain. The fMRI scanner then detects the different pulses and fields that are used to distinguish tissue structures and the distinction between layers of the brain, matter type, and the ability to see growths. The functional component allows researchers to see activation in the brain over time and assess efficiency and connectivity by comparing blood use in the brain, which allows for the identification of which portions of the brain are using more oxygen, and thus being used during a specific task. This is called the blood-oxygen-level-dependent (BOLD) hemodynamic response. fMRI data have been examined through the lens of machine learning algorithms to decode whether subjects believed or disbelieved statements, ranging from mathematical and semantic to religious belief statements. In this study, independent component features were used to train the algorithms, achieving up to 90% accuracy in predicting a subject's response when prompted to indicate with a button press whether they believed or disbelieved a given assertion.
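Schematically, the decoding study described above is a standard supervised-learning pipeline: per-trial independent-component features, binary labels (believed / disbelieved), and cross-validated classification. The sketch below uses scikit-learn with random data standing in for real fMRI features; the feature count, classifier choice, and resulting accuracy are illustrative assumptions, not those of the original study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 200 trials x 40 independent-component features.
# In the real setting each row would summarize one belief/disbelief trial.
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)  # 0 = disbelieved, 1 = believed

# Scale features, then fit a linear SVM; score with 5-fold cross-validation
# so accuracy is estimated on trials the classifier has not seen.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```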
Brain activation: Activation of BA 40, the superior parietal lobe, the lateral left MRG, the striatum, and the left thalamus was unique to truth, while activation of the precuneus, posterior cingulate gyrus, prefrontal cortex, and cerebellum showed a similar network for truth and lies. The most brain activity occurs in both sides of the prefrontal cortex, which is linked to response inhibition. This indicates that deception may involve inhibition of truthful responses. Overall, bilateral activation occurs in deception in the middle frontal gyrus, parahippocampal gyrus, the precuneus, and the cerebellum. When looking into the different styles of lying, we see differentiation in the locations of activation. Spontaneous lies require retrieval from semantic and episodic memory to quickly formulate a viable situation that remains in working memory while visual images are created to further hide the truth. The areas associated with this retrieval (the ventrolateral prefrontal cortex, anterior prefrontal cortex, and precuneus) are activated, as are the dorsolateral prefrontal cortex, anterior cingulate, and posterior visual cortex. The anterior cingulate cortex is used for cross-checking and probability. For well-rehearsed, memorized, and coherent lies, episodic memory activation is needed. This creates increased activation in the right anterior prefrontal cortex, BA 10, and the precuneus. The parahippocampal cortex may be used in this process to generalize lies to situations, because no cross-checking is needed. Newer studies have considered the salience of lying in a variety of situations. If a lie is of lower salience, activation is broader and more general, while salient lies produce specific activation in regions associated with inhibition and selection. Many areas are much more active in lying than in truth-telling, possibly meaning it is harder to retrieve false information than true memories because truth has more encoded retrieval cues. Interestingly, the limbic system, which is involved in many different emotional responses including the sympathetic nervous system, is not activated in deception. Legality: Historically, fMRI lie detector tests have not been allowed into evidence in legal proceedings, the most famous attempt being Harvey Nathan's insurance fraud case in 2007. This pushback from the legal system may be based on the 1988 Employee Polygraph Protection Act, which acts to protect citizens from incriminating themselves, and the right to silence. The legal system specifically would require many more studies on the false negative rate to decide if the absence of deception proves innocence. The lack of legal support has not stopped companies like No Lie MRI and CEPHOS from offering private fMRI scans to test deception. Legality: There is potential to use fMRI evidence as a more advanced form of lie detection, particularly in identifying the regions of the brain involved in truth telling, deception, and false memories. Legality: False memories are a barrier in validating witness testimonies. Research has shown that when presented with a list of semantically related words, participants' recollection can often be unintentionally false, adding words that were not originally present. This is a normal psychological occurrence, but presents numerous problems to a jury when attempting to sort out the facts of a case. fMRI imaging is also being used to analyze brain activity during intentional lies. Findings have shown that the dorsolateral prefrontal cortex activates when subjects are pretending to know information, but that the right anterior hippocampus activates when a subject presents false recognition, in contrast to lying or accurately telling a truth. This indicates that there may be two separate neural pathways for lying and false memory recall.
However, there are limitations to how much brain imaging can distinguish between truths and deceptions, because these regions are common areas of executive control function; it is difficult to tell if the activation seen is due to the lie told or to something unrelated. Future research aims to differentiate between when someone has genuinely forgotten an experience and when someone has made an active choice to withhold or fabricate information. Developing this distinction to the point of scientific validity would help discern when defendants are being truthful about their actions and when witnesses are being truthful about their experiences. Pros and cons: While fMRI studies on deception have claimed detection accuracy as high as 90%, many have problems with implementing this style of detection. At a basic level, administering fMRIs is extremely difficult and costly. Only yes-or-no answers can be used, which allows for flexibility in the truth and style of lying. fMRI requires the participant to remain still for long periods, and small movements can create issues with the scan. Some people are unable to undergo one, such as those with medical conditions, claustrophobia, or implants. When looking at deception specifically, there is little research on non-compliant individuals. The criminal justice system interacts with many types of criminals that are not often taken into account in fMRI studies, such as addicts, juveniles, the mentally unstable, and the elderly. Studies have been done on Chinese individuals, whose language and cultural differences did not change results, as well as a study (S. Spence, 2011) of 52 schizophrenic patients, 27 of whom were experiencing delusions at the time of the study. While these studies are promising, the lack of extensive research on the populations that would be most affected by fMRIs being admitted into the legal system is a huge drawback. As well, fMRI deception tests look only at changes in activity in the brain, which, similarly to the polygraph, does not directly show that lying is occurring. If dealing with complex styles of lying or questions, the need for a control condition is critical to differentiate from other heightened emotional states unrelated to deception. Some studies, such as Ganis et al., have shown that it is possible to fool an fMRI by learning countermeasures.
**I-cell** I-cell: Fritz Heinrich Jakob Lewy, a German-American neurologist, first identified and described inclusions in the brain cells of patients with Parkinson's disease and published his findings in Lewandowsky's Handbook of Neurology in 1912. I-cells, also called inclusion cells, are abnormal fibroblasts having a large number of dark inclusions in the cytoplasm of the cell (mainly in the central area). Inclusions are metabolically inactive structures of a cell and are not enclosed by a membrane. They consist of various fats, proteins, carbohydrates, pigments, excretory products, crystals, and other insolubles. They are found in the cytoplasm of cells in both prokaryotes and eukaryotes. I-cells are seen in Mucolipidosis II and Mucolipidosis III, also called inclusion-cell or I-cell disease, where lysosomal enzyme transport and storage are affected.
**CDKN3** CDKN3: Cyclin-dependent kinase inhibitor 3 is an enzyme that in humans is encoded by the CDKN3 gene. The protein encoded by this gene belongs to the dual-specificity protein phosphatase family. It was identified as a cyclin-dependent kinase inhibitor, and has been shown to interact with and dephosphorylate CDK2 kinase, thus preventing its activation. This gene has been reported to be deleted, mutated, or overexpressed in several kinds of cancer. Interactions: CDKN3 has been shown to interact with Cyclin-dependent kinase 2, Cdk1 and MS4A3.
**Angiotensin II receptor type 2** Angiotensin II receptor type 2: Angiotensin II receptor type 2, also known as the AT2 receptor, is a protein that in humans is encoded by the AGTR2 gene. Function: Angiotensin II is a potent pressor hormone and a primary regulator of aldosterone secretion. It is an important effector controlling blood pressure and volume in the cardiovascular system. It acts through at least two types of receptors, termed AT1 and AT2. AGTR2 belongs to family 1 of G protein-coupled receptors. It is an integral membrane protein. It plays a role in the central nervous system and cardiovascular functions that are mediated by the renin–angiotensin system. This receptor mediates programmed cell death (apoptosis). In adults, it is highly expressed in the myometrium, with lower levels in the adrenal gland and fallopian tube. It is highly expressed in the fetal kidney and intestine. The human AGTR2 gene is composed of three exons and spans at least 5 kb. Exons 1 and 2 encode 5' untranslated mRNA sequence, and exon 3 harbors the entire uninterrupted open reading frame. Stimulation of AT2 by the selective agonist CGP 42112A increases mucosal nitric oxide production. Model organisms: Model organisms have been used in the study of AGTR2 function. A conditional knockout mouse line, called Agtr2tm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists — at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-one tests were carried out on mutant mice, but no significant abnormalities were observed. Gene: The angiotensin II receptor type 2 (AGTR2) gene is a protein-coding gene responsible for encoding AGTR2, an integral membrane protein and G protein-coupled receptor. AGTR2 has recently been discovered to play a role in modifying lung disease. This receptor functions to mediate signaling in lung fibrosis and regulate nitric oxide synthase expression in pulmonary endothelium. AGTR2 has recently been proposed as a target for lung inflammation therapy in cases of cystic fibrosis (CF). The X-chromosome region associated with CF lung disease is located in a non-coding region 3′ of the AGTR2 gene. The modification effect is likely due to variation in gene regulation rather than a change in the protein-coding sequence. Gene: Variants at the X-chromosome locus containing the AGTR2 gene were identified as significantly associated with lung function in patients with cystic fibrosis. Genetically modified mouse studies determined that absence of the AGTR2 gene normalized pulmonary function indicators in two independent CF mouse models. Furthermore, pharmacological antagonism of AGTR2 signaling improved lung function in CF mice to near wild-type levels. Manipulation of the angiotensin-signaling pathway to reduce AGTR2 signaling may be translatable to the treatment or prevention of CF. Interactions: Angiotensin II receptor type 2 has been shown to interact with MTUS1.
**Boruto: Naruto Next Generations** Boruto: Naruto Next Generations: Boruto is a Japanese manga series written by Ukyō Kodachi and Masashi Kishimoto, and illustrated by Mikio Ikemoto. It began monthly serialization under the title Boruto: Naruto Next Generations, with Kodachi as writer and Kishimoto as editorial supervisor, in Shueisha's shōnen manga magazine Weekly Shōnen Jump in May 2016, and was transferred to Shueisha's monthly magazine V Jump in July 2019. In November 2020, Kodachi stepped down, with Kishimoto taking over as writer. In April 2023, the series concluded the first part of the story and, following a brief hiatus, it is set to continue with a second part, titled Boruto: Two Blue Vortex, in August of the same year. Boruto is a spin-off and a sequel to Kishimoto's Naruto, which follows the exploits of Naruto Uzumaki's son, Boruto Uzumaki, and his ninja team. Boruto: Naruto Next Generations: Boruto originated from Shueisha's proposal to Kishimoto to make a sequel to Naruto. However, Kishimoto rejected this offer and proposed that his former assistant Mikio Ikemoto draw it; the writer of the film Boruto: Naruto the Movie, Ukyō Kodachi, created the plot. A 293-episode anime television series adaptation, produced by Pierrot, was broadcast on TV Tokyo from April 2017 to March 2023; a second part has been announced to be in development. Unlike the manga, which began as a retelling of the Boruto film, the anime begins as a prequel set before Boruto and his friends become ninjas in a later story arc. A series of light novels has also been written. Boruto: Naruto Next Generations: The anime series has earned praise for its use of both new and returning characters, but the narrative of the manga was noted to be more serious as it focused more on the protagonist. Shueisha had shipped a million copies of the manga series by January 2017. Plot: Opening with a teenaged Boruto Uzumaki facing a teenaged Kawaki in the ruins of Konoha, the former recounts his story. The son of Seventh Hokage Naruto Uzumaki, Boruto feels angry over his father placing the village before his family. At that time, Boruto becomes a member of a ninja team led by Naruto's protégé Konohamaru Sarutobi, alongside Sarada Uchiha, the daughter of Sasuke and Sakura Uchiha, and Mitsuki, Orochimaru's artificial son. Sasuke returns to the village to warn Naruto of an impending threat relating to the motivations of Kaguya Ōtsutsuki. Boruto asks Sasuke to train him for the upcoming Chunin exam to impress his father. During the exam, Momoshiki and Kinshiki Ōtsutsuki, the duo whom Sasuke met, abduct Naruto so they can use Kurama, a tailed beast sealed inside his body, to revitalize the dying Divine Tree from the dimension they came from. Boruto, Sasuke and the four Kages, the leaders of the other ninja villages, set out to rescue Naruto. The battle ends when Momoshiki, sacrificing Kinshiki to increase his own strength, is defeated by Boruto and Naruto with Sasuke's help; Momoshiki survives long enough to realize Boruto's full potential while warning him of future tribulations and giving him a mysterious mark called "Karma". After recovering from his fight, Boruto decides to become like Sasuke in the future, while entrusting Sarada to follow her dream of becoming the next Hokage. Plot: Naruto and the others learn there is a group called "Kara" searching for people with marks called Karma. Boruto's team meets Kara's fugitive Kawaki, a boy who also has Karma.
Kawaki becomes an adopted member of the Uzumaki family to protect him. However, while trying to protect Kawaki, Naruto and Sasuke are defeated by the leader of Kara, Jigen, who seals Naruto away while Sasuke escapes. Team 7 saves Naruto when Boruto's Karma causes him to be possessed by Momoshiki. After learning of this, Sasuke discovers all Karma users will be taken over by the Otsutsuki clan, including Jigen and Boruto. Meanwhile, a mutiny starts to form in Kara, with Koji Kashin challenging Jigen, while Amado goes to Konoha to seek asylum in exchange for information, revealing the real leader of Kara as Isshiki, who has been possessing Jigen ever since he was betrayed by Kaguya when they came to Earth millennia ago, and that Karma allows the Otsutsuki clan to resurrect via the host's body. Although Koji kills Jigen, forcing Isshiki to reincarnate imperfectly while Kawaki's Karma is removed in the process, Isshiki forces Koji to retreat and leaves to attack Konoha. Plot: At Konoha, Isshiki attempts to find Kawaki, but Naruto faces him head-on, preparing to fight. Boruto transports himself and Isshiki to another dimension away from the village, with Sasuke and Naruto following. Since Boruto is Momoshiki's vessel, Isshiki plans to feed him to his Ten-Tails in order to plant a Divine Tree. With Naruto's new power, the Baryon Mode, he kills Isshiki, but at the cost of Kurama's life. Boruto, possessed by Momoshiki, stabs and destroys Sasuke's Rinnegan. However, Sasuke and Kawaki face Momoshiki until Boruto recovers his body. After being defeated, Isshiki requests that Code, who was guarding the Ten-Tails, carry on the Otsutsuki's will by sacrificing either Boruto or Kawaki and becoming an Ōtsutsuki himself. Code vows to avenge Isshiki, and proceeds to release two of the strongest cyborgs created by Amado, Eida and Daemon, who were supposed to have been disposed of. The female cyborg Eida agrees to help Code kill Naruto if he in turn spares Kawaki for her to have a normal romance with, because her powers of seduction hinder her from experiencing proper love except with an Otsutsuki. Plot: Amado gives Kawaki a weaponized version of Isshiki's Karma, which he uses to assist Boruto in fighting Code. However, Momoshiki takes over Boruto's body, forcing Kawaki to kill him on Boruto's orders. Momoshiki then revives Boruto as an Otsutsuki, at the cost of his own reincarnation. Both Eida and Daemon are revealed to have been reprogrammed by Amado, and turn on Code, forcing him to flee. In the aftermath, Amado reveals that Eida's and Daemon's powers are shinjutsu transplanted from the corpse of Shibai Otsutsuki, an Otsutsuki who achieved godhood and transcended to another plane. He defines shinjutsu as divine abilities more powerful than ninjutsu, which can only be used by gods, including Karma. Meanwhile, Momoshiki appears in Boruto's mind, showing him a vision of his friends fighting him. Kawaki, having deduced that Boruto, being a full-fledged Otsutsuki, is likely to turn evil, sends Naruto and Hinata into another dimension, vowing to kill Boruto and all other Otsutsukis. Boruto confronts Kawaki, and in the ensuing fight, Kawaki slashes out Boruto's right eye. Sasuke arrives and tries to stop Kawaki, but the latter manages to escape. Kawaki meets with Eida, who uses her Senrigan dojutsu to rewrite everyone's memory so that Kawaki and Boruto have swapped places. Only Sumire Kakei and Sarada are immune, with the latter awakening her Mangekyou Sharingan.
Sarada is able to convince Sasuke of Boruto's innocence, with Sasuke swearing to protect Boruto. Production: When the Naruto manga ended in 2014, the company Shueisha asked Masashi Kishimoto to draw the sequel. Kishimoto rejected the idea and proposed that the artist Mikio Ikemoto, who had been working as an assistant for Kishimoto ever since Naruto's early chapters, draw it instead. A countdown website titled "Next Generation" was used to promote the new manga. In December 2015, the serialisation of Boruto: Naruto Next Generations was announced. Kishimoto said he wanted Boruto to surpass his own work. The writer of Boruto, Ukyō Kodachi, had written a light novel called Gaara Hiden (2015) and had assisted Kishimoto in writing the script for the film Boruto: Naruto the Movie. Besides writing for the series, Kodachi supervises the story of the anime. Kishimoto also acted as the supervisor of the anime for episodes 8 and 9. Kodachi explained that the series' setting, which is notable for handling more science than Naruto, was influenced by his father, a physician. In order to further combine the use of ninjutsu and technology, Kodachi was inspired by sci-fi role-playing games. Despite Kishimoto revising the manga's scenario, he advised Ikemoto to develop his own art style instead of imitating his. Ikemoto agreed and felt optimistic about his art style. While noting long-time fans might be disappointed that Kishimoto is not drawing Boruto, Ikemoto stated he would do his best in making the manga. While feeling honoured to create the art for Boruto, Ikemoto stated he is grateful the series is released monthly rather than weekly, because producing the required amount of nearly 20 pages per chapter would be stressful; however, he still finds the monthly serialisation challenging. Regular chapters of Boruto tend to exceed 40 pages; creation of the thumbnail sketches takes a week, the pages take 20 days to produce, while the rest of the time is used for colouring images and retouching the chapters. In drawing the characters, Ikemoto felt that the facial expressions of Boruto changed as the story moved on; initially giving the protagonist large eyes for the character's interactions with Tento, Ikemoto made Boruto's appearance more rebellious when he instead talked with Kawaki. Despite having a lighter tone than Naruto, the series begins by hinting at a dark future. This set-up was proposed by Kishimoto to give the manga a bigger impact and to take a different approach from that of the Boruto movie. In this scenario, Ikemoto drew an older Boruto, but he believes this design may change once the manga reaches this point. In early 2019, Ikemoto stated the relationship between Boruto and Kawaki would be the biggest focus of the plot as it progresses toward their fight in the flashforward. Ikemoto aims to give the series nearly 30 volumes to tell the story. Kodachi drew parallels between Boruto and the post-Cold War era, stating that while the new characters are living in a time of peace, something complicated might bring the world back to chaos. Although Kishimoto initially was not writing the series, he created multiple characters for the staff to use. Kishimoto did not specify whether Naruto or another important character would die, but he said he would find a situation like this interesting and added that the authors have the freedom to write the story as they wish.
In November 2020, it was announced that after 51 chapters and 13 volumes, Kodachi would step down as writer, with Kishimoto assuming full writing duties and Ikemoto continuing as illustrator, beginning with chapter 52 in V Jump magazine on 21 November 2020. Publication: Boruto: Naruto Next Generations is written by Ukyō Kodachi (vol. 1–13) and Masashi Kishimoto (vol. 14–) and illustrated by Mikio Ikemoto. It started in Shueisha's shōnen manga magazine Weekly Shōnen Jump on 9 May 2016. It ran in the magazine until 10 June 2019 and was then transferred to V Jump on 20 July of the same year. The original series' creator, Masashi Kishimoto, initially supervised the manga, which was illustrated by his former chief assistant and written by the co-writer of the Boruto: Naruto the Movie screenplay, Ukyō Kodachi. In November 2020, Kodachi stepped down, with Kishimoto taking over as writer. In order to keep the entire Naruto saga within a hundred volumes, Ikemoto hopes to complete the manga in fewer than 30 volumes. In April 2023, it was announced that the manga would go on hiatus; it is set to resume on 21 August of the same year, with a second part titled Boruto: Two Blue Vortex. Viz Media licensed the manga for English release in North America in 2017 and released the first volume alongside the English dub of Boruto: Naruto the Movie. A spin-off manga titled Boruto: Saikyo Dash Generations, written by Kenji Taira, was serialised in Saikyō Jump from the March 2017 to March 2021 issues, with its chapters collected in four volumes. Related media: Anime At the Naruto and Boruto stage event at Jump Festa on 17 December 2016, it was announced that the manga series would be adapted into an anime project, which was later confirmed to be a television series adaptation that would feature an original story. Additionally, an original video animation was previously released as a part of CyberConnect2's video game collection, Naruto Shippuden: Ultimate Ninja Storm Trilogy (2017), which depicts a new mission where Boruto's team has to stop a thief. The television anime series, story-supervised by the former manga writer Ukyō Kodachi until episode 216, is animated by Pierrot, with character designs by Tetsuya Nishio and Hirofumi Suzuki. The series premiered on TV Tokyo on 5 April 2017. The episodes are being collected on DVDs in Japan, starting with the first four episodes on 26 July 2017. The idea of choosing Pierrot and TV Tokyo again came from an editor of Weekly Shōnen Jump, who found it fitting since there was a timeslot available for a young audience. The series finished its first part with episode 293 on 26 March 2023; a second part was announced to be in development. Viz Media has licensed the series in North America. In promoting the anime, Crunchyroll started sharing free segments of the series in early 2018. On 21 July 2018, it was announced at Comic-Con International: San Diego that the English dub of the anime would premiere on Adult Swim's Toonami programming block beginning on 29 September 2018. In Australia, the anime began airing on ABC Me on 21 September 2019. On 21 April 2020, it was announced that episode 155 and onward would be delayed until 6 July 2020, due to the ongoing COVID-19 pandemic. Related media: Soundtrack The music for the series is co-composed by Yasuharu Takanashi and his musical unit, Yaiba. A CD soundtrack titled Boruto Naruto Next Generations Original Soundtrack 1 was released on 28 June 2017. The second soundtrack was released on 7 November 2018.
Related media: Novels A series of light novels based on the anime, written by Kō Shigenobu (novels 1–3 and 5) and Miwa Kiyomune (novel 4) with illustrations by Mikio Ikemoto, has also been produced. The first, titled The New Konoha Ninja Flying in the Blue Sky!, was released on 2 May 2017. A second followed on 4 July 2017 under the title A Call From the Shadows!. The third novel, Those Who Illuminate the Night of Shinobi!, was released on 4 September 2017. The fourth, School Trip Bloodwind Records!, was released on 2 November 2017. The fifth, The Last Day at the Ninja Academy!, was released on 4 January 2018.

Related media: Video games The video game Naruto to Boruto: Shinobi Striker was released on 31 August 2018 and contains characters from both the Boruto and Naruto series. In August 2018, another Boruto game was announced for PC. Titled Naruto x Boruto Borutical Generations, it will be free to play, with options to purchase in-game items, and will be available through the Yahoo! Game service. Boruto Uzumaki also appears as a playable character in the crossover fighting game Jump Force.

Reception: Manga The manga has been generally well received in Japan; its compilations have appeared as top sellers multiple times. In its release week, the first manga volume sold 183,413 copies. The series had one million copies in print by January 2017, and in 2018 the fourth volume received an initial print run of 450,000 copies. The manga's first volume also sold well in North America, and the series became the sixth-best-selling serialised manga of 2017 according to ICv2. In the fall of 2018, Boruto remained the fourth-best-selling manga in North America.

Rebecca Silverman of Anime News Network (ANN) said Boruto appealed to her despite her never having gotten into the Naruto manga. She praised how the writers dealt with Boruto's angst without it coming across as "teen whining", as well as the way Sasuke decides to train him. Amy McNulty of ANN regarded the manga as appealing to fans of the original Naruto series, adding that while Mitsuki has a small role in the story, his side-story helps to expand his origins. Nik Freeman of the same website criticised Boruto's lack of development compared with his introduction in Naruto's finale; Freeman also noted the differing reasons the young Naruto and Boruto each vandalised their village. Nevertheless, Freeman liked Mitsuki's backstory, as he did not feel it retold older stories. Reviewing the first chapter online, Chris Beveridge of The Fandom Post was more negative, complaining about the sharp focus on Naruto and Boruto's poor relationship and the retelling of elements from Boruto: Naruto the Movie; Beveridge also criticised the adaptation of Kishimoto's artwork, but praised the relationship between Naruto and Sasuke as well as the foreshadowing of a fight involving an older Boruto.

Melina Dargis of the same website reviewed the first volume; she looked forward to the development of the characters despite having already watched the Boruto movie, and was also pleased by Mitsuki's role in his own side-story. Leroy Douresseaux of Comic Book Bin recommended the series to Naruto fans, explaining how the new authors used the first volume to establish the protagonists' personalities. Dargis was impressed by the apparent message of the series, which she found was trying to connect with modern audiences through themes such as parental issues and the use of technology, in contrast to Naruto.
Douresseaux liked that Boruto's character development had already started by the second volume of the series, because it helped readers appreciate him more. The Fandom Post and Comic Book Bin noted that the manga made major developments in Boruto's story, as the plot's progress makes the flashforward more plausible and the new characters face their first death match, against Ao, in the manga rather than relying on the previous generation. In a more negative review, Manga News criticised the manga for relying on the returning characters Naruto and Sasuke to fight certain Kara villains, much as Akira Toriyama recycled the heroes Goku and Vegeta in the anime Dragon Ball Super rather than relying on a new protagonist, and hoped that Boruto and his friends would be more active in later events.

Kawaki's introduction in the series has been praised for its impact on the storyline and for the rival parallels between him and Boruto, echoing those between Naruto and Sasuke in the original manga. Game designer Hiroshi Matsuyama praised Kawaki's debut in the manga for his involvement in the narrative as well as the fight sequences he takes part in.

Reception: Anime The anime was popular with Japanese readers of Charapedia, who voted it the ninth-best anime show of Spring 2017. IGN writer Sam Stewart commended the focus on the new generation of ninjas as well as the differences between them and the previous generation. He praised the return of other characters like Toneri Otsutsuki and enjoyed the eye techniques. Stewart applauded the characterisation of both Shikadai and Metal Lee, calling their relationship and accidental fight interesting to watch, and said Boruto: Naruto Next Generations improves with each episode. Crunchyroll Brand Manager Victoria Holden joined IGN's Miranda Sanchez to discuss whether Next Generations could live up to the success of the old series while reviewing previous episodes. According to TV Tokyo, sales and gross profits of Boruto were highly positive during 2018, taking a top-five spot. In a Crunchyroll report, Boruto was cited as one of the most streamed anime series of 2018 in multiple countries, most notably in Asia. UK Anime Network listed it as one of the best anime of 2019 for its appealing original story arcs not present in the original serialisation, in contrast to the Naruto anime, whose original stories had failed to attract audiences.

In a more comical article, Geek.com writer Tim Tomas compared Boruto with the series The Legend of Korra, since both differed from their predecessors despite sharing themes with them. Sarah Nelkin considered Boruto a more lighthearted version of the Naruto series, but Amy McNulty praised its 13th episode for its focus on a subplot that had been developing since the first episode, because its revelations made the series darker. Stewart agreed with McNulty, commenting that the developers had reached the climax of the anime's first story arc; the villain's characterisation also impressed him. Allegra Frank of Polygon noted that at the start of both the manga and the anime, many fans were worried by a flashforward in which an older Boruto faces an enemy named Kawaki who implies Naruto might be dead, leaving them concerned about Naruto's fate. The series ranked 80th at the Tokyo Anime Award Festival in the Best 100 TV Anime 2017 category.

Critics also commented on Boruto's characterisation in the anime.
Beveridge applauded the series' first episode, saying he felt Boruto's portrayal was superior to that of the manga, while other writers enjoyed his heroic traits, which send more positive messages to viewers. Reviewers praised how the returning character Sasuke Uchiha had become more caring toward his daughter, Sarada, the female protagonist of the series, and felt this greatly developed both characters. Critics felt this further helped to expand the connection between the Uchiha family members (Sasuke, Sakura, and Sarada), given how their bond is portrayed during the anime's second story arc. Kawaki's fight with Garo was also the most viewed 2021 fight on Crunchyroll's YouTube channel, as measured over a 30-day period.
**Magnetic switchback** Magnetic switchback: Magnetic switchbacks are sudden reversals in the magnetic field of the solar wind. They can also be described as travelling disturbances in the solar wind that cause the magnetic field to bend back on itself. They were first observed by the NASA-ESA mission Ulysses, the first spacecraft to fly over the Sun's poles. NASA's Parker Solar Probe and the NASA/ESA Solar Orbiter have both observed switchbacks. Definition: A magnetic (or solar) switchback is a rapid polarity reversal of the radial heliospheric magnetic field. These events have been termed "switchbacks" when referring to the change in magnetic field direction, or "velocity spikes" when referring to the sharp increase in solar wind speed. Observations: The Helios 1 and 2 spacecraft observed sudden reversals of the Sun's magnetic field in the 1970s. Magnetic switchbacks were then observed by Ulysses in 1995–1996, during the solar minimum, when the spacecraft detected numerous radial magnetic field polarity inversions. Similar structures were later observed by near-Earth heliospheric spacecraft such as the Advanced Composition Explorer. Parker Solar Probe (PSP) observed its first switchback on November 6, 2018. Similar effects have been observed at distances around and below 0.3 AU, at 1 AU, and up to 2.9 AU, and, as noted by Fedorov et al., "the question of whether all such observations relate to the same phenomenon is still open."

On 27 September 2020, the ESA/NASA Solar Orbiter (SolO) sampled a solar wind stream magnetically connected to a southern-hemisphere coronal hole while it was 0.98 AU from the Sun, and observed a fast solar wind with strong fluctuations of the magnetic field. The structures observed by SolO may effectively be the surviving remains of switchbacks created near the Sun and also observed by PSP.

Given the phase of the solar cycle, if PSP was in the southern magnetic hemisphere, the solar wind magnetic field should always have had a polarity oriented inward toward the Sun. Instead, PSP observed thousands of intervals, ranging in duration from seconds to tens of minutes, where the speed of the solar wind flow suddenly jumps and the magnetic field orientation rotates by nearly 180° in the most extreme cases, before returning just as quickly to the original solar wind conditions.

SolO found compelling clues as to the origin of magnetic switchbacks during its closest pass by the Sun on 25 March 2022. Using Solar Orbiter data, Daniele Telloni, Gary Zank, and their team concluded that the theory based on Ulysses data is correct: they "proved that switchbacks occur when there is an interaction between a region of open field lines and a region of closed field lines".

Theories: One theory, based on the Ulysses data, suggests that switchbacks are the result of a clash between open and closed magnetic fields. When an open magnetic field line brushes against a closed magnetic loop, they can reconfigure in a process called interchange reconnection, an explosive rearrangement of the magnetic fields that leads to a switchback shape. The open line snaps onto the closed loop, cutting free a hot burst of plasma from the loop while "gluing" the two fields into a new configuration. That sudden snap throws an S-shaped kink into the open magnetic field line before the loop reseals. The Parker Solar Probe observed its first switchback on November 6, 2018.
The observed switchback closely matched the developed model. A second theory agrees on the importance of interchange reconnection but differs on the nature of the switchbacks themselves: instead of viewing a switchback as a kink in a magnetic field line, it suggests the signature belongs to a kind of magnetic structure called a flux rope. Another theory suggests that switchbacks form naturally as the solar wind expands into space.

The switchbacks, essentially S-shaped kinks in the magnetic field lines streaming from the Sun, seem to arise from a reconfiguration of open and looped magnetic field lines already present in the Sun's atmosphere. When an open magnetic field line encounters a closed magnetic loop, they can undergo a process called interchange reconnection. This allows the open magnetic field line to snap into the loop, and allows one side of the formerly closed loop to connect to the solar magnetic field extending outward into the solar system. This process would create an outward-flowing S-shaped kink in the newly formed open magnetic field line, a shape that tracks with the switchbacks measured by Parker Solar Probe.
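In practice, switchbacks are identified in spacecraft magnetometer time series as intervals where the radial field component deflects away from, or fully reverses against, the expected background polarity. Below is a minimal sketch of such a detector, not any mission's actual pipeline: the RTN component arrays, the background polarity value, and the threshold are illustrative assumptions.

```python
import numpy as np

def switchback_mask(b_r, b_t, b_n, background_polarity=-1.0, cos_threshold=0.0):
    """Flag candidate switchback samples in RTN magnetic field data.

    A sample is flagged when the signed cosine between the field and the
    expected background radial direction drops below cos_threshold.
    cos_threshold = 0.0 selects full polarity reversals (deflection > 90 deg);
    values closer to 1.0 also catch weaker deflections.
    """
    b_mag = np.sqrt(b_r**2 + b_t**2 + b_n**2)
    # Cosine of the angle between B and the radial axis, signed so that
    # +1 means the field matches the expected background polarity.
    cos_radial = background_polarity * b_r / b_mag
    return cos_radial < cos_threshold

# Synthetic demonstration: a steady inward (negative-polarity) radial field
# with one brief 180-degree reversal embedded in it.
n = 1000
b_r = np.full(n, -5.0)   # nT, inward background field
b_r[400:450] = 5.0       # the embedded "switchback" interval
b_t = np.full(n, 1.0)
b_n = np.full(n, 0.5)

mask = switchback_mask(b_r, b_t, b_n, background_polarity=-1.0)
print(f"flagged {mask.sum()} of {n} samples")  # expected: flagged 50 of 1000 samples
```

Published analyses typically measure the deflection relative to the local Parker-spiral direction rather than the pure radial axis, and apply duration and amplitude criteria on top of an angular cut; the sketch above only captures the basic polarity-reversal idea from the definition section.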
**Subtropical Countercurrent** Subtropical Countercurrent: The subtropical countercurrent (STCC) is a narrow eastward ocean current in the central North Pacific Ocean (20–30°N), where Sverdrup theory predicts a broad westward flow. It originates in the western North Pacific around 20°N, flows eastward against the northeast trade winds, and stretches northeastward to the north of Hawaii. Subtropical Countercurrent: It is accompanied by a subsurface temperature and density front called the subtropical front (STF), which is in thermal wind balance with the STCC. Furthermore, the STCC maintains a sea surface temperature (SST) front during winter and spring. During April and May, when the SST front is still strong, seasonal warming makes the region conducive to atmospheric convection, and surface wind stress curls turn weakly positive along the front against the background of negative curls that drive the subtropical gyre. Subtropical Countercurrent: On the weather timescale, the positive wind curls are related to subsynoptic-scale low-pressure systems energized by surface baroclinicity and latent heat release along the STF. The SST front also anchors a meridional maximum in column-integrated water vapor, indicating a deep structure in the atmospheric response.
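The thermal wind balance invoked above can be written explicitly. In the standard Boussinesq form (not specific to any one STCC study), with f the Coriolis parameter, g gravity, ρ₀ a reference density, u the zonal velocity, and y and z the meridional and vertical coordinates:

$$ f\,\frac{\partial u}{\partial z} = \frac{g}{\rho_0}\,\frac{\partial \rho}{\partial y} $$

Across the subtropical front, density increases poleward (∂ρ/∂y > 0), so with f > 0 in the Northern Hemisphere ∂u/∂z > 0: the zonal flow becomes increasingly eastward toward the surface, yielding the surface-intensified eastward STCC above the front.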