**Sixty second review** Sixty second review: The sixty second review (also known as a silent review or mental review) is a technique used by flight attendants during the critical phases of flight to focus and prepare them for a sudden emergency. Use: How the silent review is performed varies between airlines, but the principles and the desired result are the same. Just prior to take off, and from gear down to landing, flight attendants sit in their jumpseats in a semi-brace position performing their silent review. This can either be a structured set of questions that they mentally go over, or a series of suggested questions that the attendant can think about as they observe the cabin. Use: Structured silent reviews typically use mnemonics, one such being "OLDABC": Operation of exits; Location of emergency equipment; Drills (brace for impact); Able-bodied passengers, selected and used by flight attendants to assist in an evacuation, typically by remaining at the bottom of the escape slide; Brace position; Commands (such as "heads down – stay down", "undo seatbelts and come this way").
**Bonfire** Bonfire: A bonfire is a large and controlled outdoor fire, used either for informal disposal of burnable waste material or as part of a celebration. Etymology: The earliest recorded uses of the word date back to the late 15th century, with the Catholicon Anglicum spelling it as banefyre and John Mirk's Book of Festivals speaking of a communal fire in celebrations of Saint John's Eve that "was clene bones & no wode & that is callid a bone fyre". The word is thus a compound of "bone" and "fire." In 1755, Samuel Johnson misattributed the origin of the word as a compound of the French "bon" (“good”) and the English "fire" in A Dictionary of the English Language. Regional traditions: In many regions of continental Europe, bonfires are made traditionally on 24 June, the solemnity of John the Baptist, as well as on the Saturday night before Easter. Bonfires are also a feature of Walpurgis Night in central and northern Europe, and the celebrations on the eve of St. John's Day in Spain. In Sweden, bonfires are lit during Walpurgis Night celebrations on the last day of April. In Finland and Norway bonfires are a tradition on Midsummer Eve and, to a lesser degree, at Easter. Regional traditions: Alpine and Central Europe Bonfire traditions of early spring, lit on the Sunday following Ash Wednesday (Funkensonntag, otherwise called Quadragesima Sunday), are widespread throughout the Alemannic German-speaking regions of Europe and in parts of France. The burning of "winter in effigy" at the Sechseläuten in Zürich (introduced in 1902) is inspired by this Alemannic tradition. In Austria, the custom of the "Osterfeuer" or Easter fires is widespread, but also regulated in some cities, districts and countries to hold down the resulting annual peak of PM10-dust emission. There are also "Sonnwendfeuer" (solstice fires) ignited on the evening of 21 June. Regional traditions: Since 1988 "Feuer in den Alpen" (fires in the Alps) have been lit on a day in August on mountains so they can be seen from afar as an appeal for sustainable development of mountain regions. In the Czech Republic, the festival called "Burning the Witches" (also Philip and Jacob Night, Walpurgis Night, or Beltane) takes place on the night between 30 April and 1 May. This is a very old and still observed folk custom and special holiday. On that night, people gather together, light bonfires, and celebrate the coming of spring. In many places people erect maypoles. Regional traditions: The night between 30 April and 1 May was considered magical. The festival was probably originally celebrated when the moon was full closest to the day exactly between the spring equinox and summer solstice. People believed that on this night witches fly to their Sabbath, and indeed this is one of the biggest pagan holidays. People also believed, for example, in the opening of various caves in which treasures were hidden. The main purpose of this old folk custom was probably a celebration of fertility. Regional traditions: To protect themselves against witches, people lit bonfires in high places, calling these fires "Burning the Witches". Some people took to jumping over the fire in order to ensure youth and fertility. The ash from these fires supposedly had a special power to raise crops, and people also walked their cattle through the ashes to ensure fertility. Regional traditions: Australia In Australia, bonfires are rarely allowed in the warmer months due to fire danger. 
Legislation about bonfires varies between states, metropolitan and rural regions, local government areas, and property types. For example, in urban areas of Canberra bonfires may be lit around the King's Official Birthday if local fire authorities are notified; however, they are banned the rest of the year. Smaller fires such as campfires and outdoor barbecues are usually permitted outside of fire restriction periods. In the state of Queensland, the rural town of Killarney hosts an annual Bonfire night for the greater community; proceeds support the town's aged care facilities. Regional traditions: Canada Due to their historic connection to Britain and Ireland, the province of Newfoundland and Labrador has many communities that celebrate bonfire nights, particularly Guy Fawkes Night; this is one of the times when small rural communities come together. In the province of Quebec, many communities light bonfires on 24 June to celebrate Saint-Jean-Baptiste Day. France In France, the bonfire celebrates Jean le Baptiste during the Fête de la Saint-Jean ("St John's Day"), on the first Saturday after the solstice, around 24 June. As in other countries, it was originally a pagan celebration of the solstice, or midsummer, but Christianisation transformed it into a Catholic celebration. Regional traditions: India In India, particularly in Punjab, people gather around a bonfire and eat peanuts and sweets during the festival of Lohri to celebrate the winter solstice, which occurs during the Indian month of Magh. People have bonfires on communal land. If there has been a recent wedding or a newborn in the family, people will have a bonfire outside their house to celebrate this event. The festival falls in the second week of January every year. In the northeastern state of Assam, the harvest festival of Bhogali Bihu is celebrated to mark the end of the harvest season in mid-January. In southern India, particularly in Andhra Pradesh, Tamil Nadu and Mumbai, the Bhogi Festival is celebrated on the last day of Maarkali, which is also the first day of the farming festival of Pongal. People collect unwanted items from their houses and throw them into a bonfire to celebrate. During the ten days of Vijayadashami, effigies of Ravana, his brother Kumbhakarna and son Meghanad are erected and burnt by enthusiastic youths at sunset. Traditionally a bonfire on the day of Holi marks the symbolic annihilation of Holika the demoness as described above. Regional traditions: Iran Chaharshanbe Suri is a fire jumping festival celebrated by Persian people, Kurdish people and some other ethnicities. The event takes place on the eve of the last Wednesday before Nowruz. The name is loosely translated as Wednesday Light, from the word sur, which means light in Persian; more plausibly, sur is a variant of sorkh (red), referring either to the fire itself or to the ruddiness (sorkhi), meaning good health or ripeness, supposedly obtained by jumping over it. It is an ancient Iranian festival dating back to at least 1700 BCE, in the early Zoroastrian era. Also called the Festival of Fire, it is a prelude to Nowruz, which marks the arrival of spring. The words Chahar Shanbeh mean Wednesday and Suri means red. Bonfires are lit to "keep the sun alive" until early morning. The celebration usually starts in the evening, with people making bonfires in the streets and jumping over them singing "zardi-ye man az toh, sorkhi-ye toh az man". The literal translation is "my yellow is yours, your red is mine". This is a purification rite. 
Loosely translated, this means you want the fire to take your pallor, sickness, and problems and in turn give you redness, warmth, and energy. There is Zoroastrian religious significance attached to Chahārshanbeh Suri, and it serves as a cultural festival for Iranian and Iranic people. Regional traditions: Another tradition of this day is to make special Chaharshanbe Suri Ajil, or mixed nuts and berries. People wear disguises and go door to door knocking on doors, similar to trick-or-treating. Receiving the Ajeel is customary, as is receiving a bucket of water. Regional traditions: Ancient Persians celebrated the last 5 days of the year in their annual obligation feast of all souls, Hamaspathmaedaya (Farvardigan or popularly Forodigan). They believed Faravahar, the guardian angels of humans, and also the spirits of the dead would come back for a reunion. There are the seven Amesha Spenta, which are represented by the haft-sin (literally, seven S's). These spirits were entertained as honored guests in their old homes, and were bidden a formal ritual farewell at the dawn of the New Year. The festival also coincided with festivals celebrating the creation of fire and humans. In the Sassanid period the festival was divided into two distinct pentads, known as the lesser and the greater Pentad, or Panji as it is called today. Gradually the belief developed that the 'Lesser Panji' belonged to the souls of children and those who died without sin, whereas 'Greater Panji' was truly for all souls. Regional traditions: Iraq In Iraq, Assyrian Christians light bonfires to celebrate the Feast of the Cross. In addition to the bonfire, every household traditionally hangs a lighted fire in the roof of their house. Regional traditions: Ireland Throughout Ireland, bonfires are lit on the night of 31 October to celebrate Halloween or Samhain. Bonfires are also held on 30 April, particularly in Limerick, to celebrate the festival of Bealtaine, and on St. John's Eve, 23 June, to celebrate Midsummer's Eve, particularly in County Cork where it is also known as 'Bonna Night'. In Northern Ireland, bonfires are lit on Halloween, 31 October, and each 11 July, bonfires are lit by many Protestant communities to celebrate the victory of Williamite forces at the Battle of the Boyne, which took place on 12 July 1690. This is often called the "Eleventh night". Bonfires have also been lit by Catholic communities on 9 August since 1972 to protest and commemorate Internment. Regional traditions: Israel In Israel, on the eve of Lag BaOmer, bonfires are lit to commemorate the Mishnaic sage Rabbi Shimon Bar Yochai who according to tradition died on Lag BaOmer. Rabbi Shimon Bar Yochai is credited with having composed the Kabbalistic work The Zohar (literally "The Shining" - hence the custom of lighting fire to commemorate him). The main celebration takes place at Rabbi Shimon's tomb on Mount Meron in northern Israel, but all over the country bonfires are lit in open spaces. Linked by modern Jewish tradition to the Bar Kokhba Revolt against the Roman Empire (132-135 CE), Lag BaOmer is very popularly observed and celebrated as a symbol for the fighting Jewish spirit. As Lag Ba'Omer draws near, children begin collecting material for the bonfire: wood boards and planks, old doors, and anything else made of wood. On the night itself, families and friends gather round the fires and youths will burn their bonfires till daybreak. 
Regional traditions: Italy In Northeast Italy, the celebration known as Panevin (in English "bread and wine"), Foghera or Pignarûl is held on the evening of Epiphany (5 January). A straw witch dressed in old clothes is placed on a bonfire and burned to ash. The witch symbolizes the past, and the direction of the smoke indicates whether the new year is going to be good or bad. Regional traditions: The Northern Italian La vecchia ("the old lady") is a version of the wicker man bonfire effigy, which is burned once a year as part of town festivals. As depicted in the film Amarcord by Federico Fellini, it has a more pagan-Christian connotation when it is burned on Mid-Lent Thursday. In Abbadia San Salvatore, a village in the south of Tuscany, bonfires called fiaccole up to seven meters high are burned during Christmas Eve to warm up people around them waiting for midnight, following a millenary tradition. In Southern Italy, bonfires are traditionally lit in the night between 16 and 17 January, thought to be the darkest and longest night of the year. The celebration is also linked to the cult of Saint Anthony the Great. Japan Every 16 August, the ancient city of Kyoto holds the Gozan no Okuribi, a Buddhist bonfire-based spectacle, which marks the end of the O-Bon season. Regional traditions: Luxembourg The Luxembourgish town of Remich annually holds a three-day-long celebration for Carnival (called Fuesend Karneval in Luxembourgish). The Remich Fuesend Karneval celebrations conclude with the Buergbrennen, a bonfire that marks the end of winter. Such bonfires are also organised by other towns and villages throughout Luxembourg around the same time, although they only last an evening. Regional traditions: Nepal In Nepal, a bonfire is almost synonymous with a campfire. During the winter months it is quite common to have a bonfire in hotels, resorts, and residential areas, as well as on private properties. Bonfires are also lit during Siva ratri in the evening. This holiday is based on the lunar calendar and often falls during the month of February. Regional traditions: Nordic Countries In Iceland, bonfires are traditional on New Year's Eve, and on 6 January, which is the last day of the Icelandic Christmas season. In Norway and Denmark, large bonfires are lit on 23 June to celebrate Jonsok or St Hansaften, the evening before John the Baptist's birthday. As with many other traditions in Scandinavia, St. Hans is believed to have a pagan origin, the celebration of midsummer's eve. Regional traditions: In Sweden, Walpurgis Night is celebrated on 30 April, and festivities include the burning of a bonfire. In Finland, Estonia, Latvia, and Lithuania, Midsummer Eve is celebrated with large bonfires. Lithuania In Lithuania bonfires are lit to celebrate St John's Eve (also known as Rasos, the Dew Holiday) during the midsummer festival. Bonfires may be lit to keep witches and evil spirits away. Poland In Poland, bonfires are traditionally, and still enthusiastically, burned during the Feast of Saints Peter and Paul, Pentecost, and Saint John's Night as Sobótki, ognie świętojańskie (Śląsk, Małopolska, Podkarpacie), Palinocka (Warmia, Mazury, Kaszuby) or Noc Kupały (Mazowsze and Podlasie) on 23/24 June. Regional traditions: On 23 and 24 June, according to ancient custom, an immense number of Polish persons of both sexes repaired to the banks of the San, Vistula and Odra rivers to consult Fate respecting their future fortunes; jumping through a fire on the Eve of Saint John's was considered a sure way to health. 
The leaping of the youths over fire (sobótka) must be a custom derived from remote antiquity. Jan Kochanowski, who died in 1584, mentions it in a song from an ancient tradition. Varro and Ovid relate that in the Palilia, celebrated in honour of the goddess Pales, on 20 April, the anniversary of the foundation of Rome, the young Romans leaped over burning bundles of hay. In modern Italy, this kind of saltation is continued by the name of Sabatina, though Pope Sergius III prohibited it. Regional traditions: Romania In Romania, in Argeș County, a bonfire is lit on the night of 25 October every year, a tradition said to date back to the Dacians. It consists of burning a tall tree, which resembles the body of a god. It is usually done on a high peak, in order to be seen from far away. Regional traditions: Slavic Europe In Bosnia and Herzegovina, Croatia, Serbia and Slovenia, bonfires are traditionally lit on the evening before 1 May, commemorating Labour Day. Bonfires are also built on the eve of the Christian holiday of Easter, on so-called Holy Saturday, and are lit early the next morning. These bonfires are called vuzmenka, or vazmenka. The root, Vazam, is the Serbo-Croatian word for Easter. Their burning symbolizes the Resurrection of Jesus. In villages far from cities, this tradition is still active. Young men and children all gather on a plain away from the village and start building a bonfire by collecting logs of wood, or pruned branches from vineyards and orchards. Bonfires are also lit on the evening before Saint George's Day, on so-called Jurjevo (in Croatia, on 24 April according to the Gregorian calendar) or Đurđevdan (in Serbia, on 6 May according to the Julian calendar). The idea for all these bonfires was probably taken from the old Slavic tradition in which bonfires were lit to celebrate the arrival of spring. In Russia, bonfires are traditionally burned on 17 November. Regional traditions: Czech Republic and Slovakia In the Czech Republic and Slovakia, bonfires are also held on the last night of April and are called 'Phillip-Jakob's Night' (FilipoJakubská noc) or "Burning of the Witches" (pálení čarodějnic). They are considered to be historically linked with Walpurgis Night and Beltane. Turkey In Turkey, bonfires are lit on Kakava, believed to be the day nature awakens at the beginning of spring. Kakava is celebrated by the Romani people in Turkey on the night of 5-6 May. United Kingdom In the United Kingdom and some Commonwealth countries, bonfires are lit on Guy Fawkes Night, a yearly celebration held on the evening of 5 November to mark the failure of the Gunpowder Plot of 5 November 1605, in which a number of Catholic conspirators, including Guy Fawkes, attempted to destroy the House of Lords in London. Regional traditions: In Northern Ireland, bonfires are lit on Halloween, 31 October, and each 11 July, bonfires are lit by many Protestant communities to celebrate the victory of Williamite forces at the Battle of the Boyne, which took place on 12 July 1690. This is often called the "Eleventh night". Bonfires have also been lit by Catholic communities on 9 August since 1972 to protest and commemorate Internment. Historically in England, some time before 1400, fires were lit around Midsummer as a wake in the vigil for St John the Baptist. 
Folk would awake in the evening, and make three manners of fire: one with only clean bones ("bonys") and no wood called a "bonnefyre", one with clean wood and no bones called a "wakefyre", and the third with both bones and wood, called "Saynt Ionys Fyre". Apparently the original wake fell into "lechery and gluttony", so the church deemed it instead a fast. Regional traditions: The annual rock and dance music Wickerman Festival takes place in Kirkcudbrightshire, Scotland. Its main feature is the burning of a large wooden effigy on the last night. The Wickerman festival is inspired by the horror film The Wicker Man, a film itself inspired by the Roman accounts of the Celtic Druids' ritual burning of a wicker effigy. A ship is also burnt as part of the mid-winter Up Helly Aa festival. Regional traditions: In Biggar, Lanarkshire, a bonfire is lit on Hogmanay (New Year's Eve) to celebrate the end of the old year and the beginning of the New Year. The bonfire takes almost a month to build using whatever combustible materials can be found. It is lit by a senior citizen of the town who is accompanied to the bonfire site (which is by the Corn Exchange in the centre of the town) by the local pipe band and several torchbearers. The celebrations are attended by hundreds of drinking and dancing revellers. During the war years, when a bonfire wasn't allowed, a candle was lit in a biscuit tin to keep the tradition of "burnin' oot the auld year" alive. Regional traditions: United States In New England, on the night before the Fourth of July, towns competed to build towering pyramids, assembled from hogsheads, barrels and casks. They were lit at nightfall, to usher in the celebration. The highest were in Salem, Massachusetts, composed of as many as forty tiers of barrels. The practice flourished in the 19th and 20th centuries, and can still be found in some New England towns. On Christmas Eve in Southern Louisiana, bonfires are built along the Mississippi River levees to light the way for Papa Noël as he moves along the river in his pirogue (Cajun canoe) pulled by eight alligators. This tradition is an annual event in St. James Parish, Louisiana. (See Aggie Bonfire.) One of the oldest traditions at Texas A&M University involves the building of a bonfire by students to be burnt before their annual game against The University of Texas. The tradition began in 1909 as little more than a burning trash pile. Eventually students began clearing land in the area, by hand, to harvest thousands of logs needed for its construction. In 1969, Aggie Bonfire set a Guinness world record for tallest bonfire at 109 feet. In 1999, the stack collapsed during construction, killing 12 people and injuring 27 others. The accident led the university to no longer sanction the building of Bonfire. Since 2002, the student-sponsored group Student Bonfire has built an annual bonfire in the spirit of the original. Farm and garden bonfires: Bonfires are used on farms, in large gardens and allotments to dispose of waste plant material that is not readily composted. This includes woody material, pernicious weeds, diseased material and material treated with persistent pesticides and herbicides. Such bonfires may be quite small but are often designed to burn slowly for several days so that wet and green material may be reduced to ash by frequently turning the unburnt material into the centre. Such bonfires can also deal with turf and other earthy material. 
The ash from garden bonfires is a useful source of potash and may be beneficial in improving the structure of some soils, although such fires must be managed with safety in mind. Garden and farm bonfires are frequently smoky and can cause a local nuisance if poorly managed or lit in unsuitable weather conditions.
**NEXPTIME** NEXPTIME: In computational complexity theory, the complexity class NEXPTIME (sometimes called NEXP) is the set of decision problems that can be solved by a non-deterministic Turing machine using time 2^(n^O(1)). In terms of NTIME, NEXPTIME = ⋃_{k∈ℕ} NTIME(2^(n^k)). Alternatively, NEXPTIME can be defined using deterministic Turing machines as verifiers. A language L is in NEXPTIME if and only if there exist polynomials p and q, and a deterministic Turing machine M, such that: for all x and y, the machine M runs in time 2^(p(|x|)) on input (x, y); for all x in L, there exists a string y of length 2^(q(|x|)) such that M(x, y) = 1; and for all x not in L and all strings y of length 2^(q(|x|)), M(x, y) = 0. We know P ⊆ NP ⊆ EXPTIME ⊆ NEXPTIME and also, by the time hierarchy theorem, that NP ⊊ NEXPTIME. If P = NP, then NEXPTIME = EXPTIME (padding argument); more precisely, E ≠ NE if and only if there exist sparse languages in NP that are not in P. Alternative characterizations: In descriptive complexity, the sets of natural numbers that can be recognized in NEXPTIME are exactly those that form the spectrum of a sentence, the set of sizes of finite models of some logical sentence. NEXPTIME often arises in the context of interactive proof systems, where there are two major characterizations of it. The first is the MIP proof system, where we have two all-powerful provers which communicate with a randomized polynomial-time verifier (but not with each other). If the string is in the language, they must be able to convince the verifier of this with high probability. If the string is not in the language, they must not be able to collaboratively trick the verifier into accepting the string except with low probability. The fact that MIP proof systems can solve every problem in NEXPTIME is quite impressive when we consider that when only one prover is present, we can only recognize all of PSPACE; the verifier's ability to "cross-examine" the two provers gives it great power. See interactive proof system#MIP for more details. Alternative characterizations: Another interactive proof system characterizing NEXPTIME is a certain class of probabilistically checkable proofs. Recall that NP can be seen as the class of problems where an all-powerful prover gives a purported proof that a string is in the language, and a deterministic polynomial-time machine verifies that it is a valid proof. We make two changes to this setup: add randomness, the ability to flip coins, to the verifier machine. Alternative characterizations: Instead of simply giving the purported proof to the verifier on a tape, give it random access to the proof. The verifier can specify an index into the proof string and receive the corresponding bit. Since the verifier can write an index of polynomial length, it can potentially index into an exponentially long proof string. These two extensions together greatly extend the proof system's power, enabling it to recognize all languages in NEXPTIME. The class is called PCP(poly, poly). What is more, in this characterization the verifier may be limited to read only a constant number of bits, i.e. NEXPTIME = PCP(poly, 1). See probabilistically checkable proofs for more details. NEXPTIME-complete: A decision problem is NEXPTIME-complete if it is in NEXPTIME, and every problem in NEXPTIME has a polynomial-time many-one reduction to it. In other words, there is a polynomial-time algorithm that transforms instances of one to instances of the other with the same answer. 
Problems that are NEXPTIME-complete might be thought of as the hardest problems in NEXPTIME. We know that NEXPTIME-complete problems are not in NP; it has been proven that these problems cannot be verified in polynomial time, by the time hierarchy theorem. NEXPTIME-complete: An important set of NEXPTIME-complete problems relates to succinct circuits. Succinct circuits are simple machines used to describe graphs in exponentially less space. They accept two vertex numbers as input and output whether there is an edge between them. If solving a problem on a graph in a natural representation, such as an adjacency matrix, is NP-complete, then solving the same problem on a succinct circuit representation is NEXPTIME-complete, because the input is exponentially smaller (under some mild condition that the NP-completeness reduction is achieved by a "projection"). As one simple example, finding a Hamiltonian path for a graph thus encoded is NEXPTIME-complete.
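For reference, the definitions and inclusions described above can be restated compactly; this LaTeX fragment is only a summary of the text, not an additional claim:

```latex
% NEXPTIME via non-deterministic time, with the inclusions quoted above.
\[
  \mathsf{NEXPTIME} \;=\; \bigcup_{k \in \mathbb{N}} \mathsf{NTIME}\bigl(2^{\,n^{k}}\bigr),
  \qquad
  \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{EXPTIME} \subseteq \mathsf{NEXPTIME},
  \qquad
  \mathsf{NP} \subsetneq \mathsf{NEXPTIME}.
\]
% Verifier characterization: L is in NEXPTIME iff there are polynomials p, q and a
% deterministic machine M running in time 2^{p(|x|)} on input (x, y) such that
\[
  x \in L \iff \exists\, y \text{ with } |y| = 2^{q(|x|)} \text{ and } M(x, y) = 1 .
\]
```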
**Clash squeeze** Clash squeeze: A clash squeeze is a three suit bridge squeeze with a special kind of menace, referred to as clash menace. The clash menace is one that might fall under a winner in the opposite hand, because it can be covered by another card in an opponent's hand. If the clash squeeze can force the opponent to discard his guard, then the clash menace can be cashed separately from the winner opposite. For example, consider this layout of the spade suit: The ♠Q is the clash menace. If, when South plays another suit, West can be forced to discard the ♠K, then the ♠Q and the ♠A can be cashed on separate tricks. Notice the presence of the ♠2, a companion that releases the clash menace to be cashed separately from the ♠A. The ♠2 also serves as a simple menace against East, requiring West to retain his clash-menace guard to allow his partner to guard the suit.Clash squeezes were described and analyzed by Chien-Hwa Wang in Bridge Magazine, in 1956 and 1957. Examples: Here is a simple, positional clash squeeze, with the ♣8 as the clash menace: South leads the ♠6. If West discards the ♥J, the ♥10 becomes a winner. If West discards a diamond, the ♣3 is discarded and the ♦J and ♦8 are cashed. If West discards the ♣9, South discards dummy's ♦8 and cashes the ♣8. Then the ♦3 to dummy allows the ♣10 to score. Note the presence of the Vise theme. Examples: This is a secondary clash squeeze: South cashes the ♥J. If West discards a spade, South will make one of his small spades. If West discards a club, one of dummy's clubs will become a winner. And if West discards the ♦9, South cashes the clash menace, the ♦8. Here is a simultaneous, double clash squeeze: South leads the ♠J and West is clash squeezed. Discarding the ♣J gives up immediately. If West discards a heart, South cashes the ♥10 and ♥7 before crossing to the ♦J for the ♥J. If West discards a diamond, South discards the ♣10 and East is squeezed in the reds. Examples: Finally, here's a non-simultaneous double clash squeeze: This double clash squeeze consists of a clash squeeze against West, followed two tricks later by a simple squeeze against East. South leads the ♦J. West cannot throw the ♣10 because that establishes South's clash menace, and a spade sets up North's spades. So West throws the ♥J. Now South cashes the ♠10 and the ♠J, in that order, to squeeze East in hearts and clubs. Examples: There are other positions, including trump squeezes with clash menaces.
**HD 222806** HD 222806: HD 222806 (HR 8995) is a suspected astrometric binary in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.74, allowing it to be faintly seen with the naked eye. Parallax measurements place the system at a distance of 565 light years, and it is currently receding with a heliocentric radial velocity of 21 km/s. The visible component has a stellar classification of K1 III, indicating that it is a red giant. At present it has 126% of the mass of the Sun but has expanded to almost 19 times the Sun's girth. It radiates at 151 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,865 K, giving it an orange hue. HD 222806 is metal-enriched, with an iron abundance over twice that of the Sun, and is believed to be a member of the young disk population. It spins with a projected rotational velocity lower than 1 km/s.
**Stemmadenine** Stemmadenine: Stemmadenine is a terpene indole alkaloid. Stemmadenine is believed to be formed from preakuammicine by a carbon-carbon bond cleavage. Cleavage of a second carbon-carbon bond is thought to form dehydrosecodine. The enzymes forming stemmadenine and using it as a substrate remain unknown to date. It is thought to be an intermediate compound in many different biosynthetic pathways, such as in Aspidosperma species. Many alkaloids are proposed to be produced through the intermediate stemmadenine. Some of them are: Catharanthine and Tabersonine in Catharanthus roseus, and Subincanadines D–F in Aspidosperma subincanum. It is also present as a product in plants, such as in Tabernaemontana dichotoma seeds. Pharmacology: It has hypotensive and weak muscle relaxant properties.
**United States military beret flash** United States military beret flash: In the United States (US) military, a beret flash is a shield-shaped embroidered cloth that is typically 2.25 in (5.72 cm) tall and 1.875 in (4.76 cm) wide with a semi–circular base that is attached to a stiffener backing of a military beret. These flashes—a British English word for a colorful cloth patch attached to military headgear—are worn over the left eye with the excess cloth of the beret shaped, folded, and pulled over the right ear, giving it a distinctive appearance. Army soldiers and non-commissioned officers (NCOs) affix their distinctive unit insignia (DUI), regimental distinctive insignia (when no DUI is authorized), Sergeant Major of the Army collar insignia (when assigned), or Senior Enlisted Advisor to the Chairman of the Joint Chiefs of Staff collar insignia (when assigned) to the center of their beret flash. Army warrant officers and commissioned officers affix their polished metal rank insignia to the center of their beret flash, while general officers may choose to affix regular or miniature polished metal rank insignia. Army chaplains affix their polished metal branch insignia, rather than their rank insignia, to the center of their beret flash. Air Force commissioned officers who are in the security forces or are weather parachutists wear their beret flash in the same manner as the Army, while tactical air control party (TACP) officers attach a miniature version of their polished metal rank insignia below the TACP Crest on the TACP Beret Flash. Other Air Force airmen and NCOs assigned to an Air Force specialty code (AFSC) authorized to wear a military beret with a beret flash will affix either their beret flash or beret flash with crest, depending on the AFSC. Joint beret flashes—such as those worn by the Multinational Force and Observers and the Joint Communications Support Element—are worn by all who are assigned, provided their uniform regulations allow, and are worn in the manner prescribed by the joint unit. The designs of all US Department of Defense beret flashes are created and/or approved by The Institute of Heraldry, Department of the Army. When a requesting unit is entitled to have its own organizational beret flash, the institute will conduct research into the requesting unit's heraldry, as well as design suggestions from the requesting unit, in the creation of a unit or specialty beret flash. Leveraging geometrical divisions, shapes, and colors, a heraldic artist will create a design that will represent the history and mission of the requesting unit. Once the unit agrees upon a design, the institute will authorize the creation of the new beret flash and will establish manufacturing instructions for the companies authorized to produce heraldic materials. The institute will also monitor the production of the new beret flash to ensure the quality and accuracy of the design are maintained. Department of Defense beret flash history: US Army 1940s Throughout its history, Army units have adopted different headgear and headgear devices—such as various colored cords, colored stripes, and insignias—to identify specific units, the unique mission of a unit, and/or the unique role of a soldier. According to some historians, the first US use of a military beret device was a beret flash created by the 509th Parachute Infantry Battalion. 
The 509th trained with the British 1st Airborne Division during World War II (WWII), and its members were made honorary members of the British airborne forces in 1943, entitling them to wear the maroon beret worn by British paratroopers. Some 509th paratroopers had a small hand–embroidered version of their regiment's gold and black pocket–patch created for use as their beret flash on their honorary maroon berets. The design of the 509th's pocket–patch, and their first organizational beret flash, depicts a stylized figure of a paratrooper standing at an open aircraft door wearing a reserve parachute with an artistic rendering of the number "509" surrounding the paratrooper's head and the name Geronimo displayed at the base of the door in title case. Department of Defense beret flash history: 1960s The Army's beret flashes officially began in 1961 with Department of the Army Message 578636 authorizing the establishment of organizational beret flashes for wear on the special forces' rifle–green beret. Championed and heavily influenced by Lieutenant General William P. Yarborough (Ret.)—creator of the US Army parachutist badge and the airborne background trimming, and the officer who established the term "beret flash" in the US military lexicon—the message described the beret flash as shield–shaped with a semi–circular base made of felt 2 in (51 mm) tall and 1.625 in (41 mm) wide, using solid colors to represent each of the special forces groups of the era. The message also described who was authorized to wear the organizational beret flash, stating that only special forces qualified paratroopers would be permitted to wear their special forces unit's organizational beret flash. These organizational beret flashes were to be worn centered over the left eye with either the 1st Special Forces Regiment DUI, polished metal officer rank insignia, or chaplain branch insignia positioned below their parachutist badge and centered on the beret flash. Later, the parachutist badge was removed and non–qualified soldiers assigned to a special forces unit wore a rectangular cloth beret flash, known as a recognition bar, 1.875 in (4.76 cm) long and 0.5 in (1.27 cm) wide, color- and pattern-matched to their group's organizational beret flash. The recognition bar was worn below their 1st Special Forces Regiment DUI, polished metal officer rank insignia, or chaplain branch insignia on the rifle–green beret. Department of Defense beret flash history: 1970s Various beret accoutrements began to appear in the 1960s and 1970s, particularly between 1973 and 1979 when the Department of the Army had its morale–enhancing order in effect and different colored berets began to be worn by numerous units and branches of the Army. Historical photographs from the 1960s through the 1970s show soldiers assigned to reconnaissance, ranger, and armor units informally wearing black berets with various units affixing a wide variety of custom beret flashes that were worn over the left eye or left temple. In 1975, the Army formally authorized its ranger units to wear the black beret. If earned, some of these ranger units had their rangers affix their Ranger Tab to the top edge of their organizational beret flash along with their regiment or unit DUI, polished metal officer rank insignia, or chaplain branch insignia affixed to its center and worn over the left eye. Department of Defense beret flash history: Wearing of the black beret by armor units expanded in the 1970s with some adopting organizational beret flashes. 
For example, many US Army armor units stationed in West Germany, such as the 1st Armored Division, 2nd Armored Cavalry Regiment, and 11th Armored Cavalry Regiment, began wearing black berets in the 1970s with the armored cavalry regiments affixing maroon and white ovals for use as their beret flash. The oval beret flash was worn vertically on the black beret behind their DUI to the left of their metal rank insignia or chaplain branch insignia and positioned over the left temple. Another example is the Army's "triple capability" experiment with the 1st Cavalry Division that outfitted the division for armor, airmobile, and air cavalry warfare in 1971. The division decided that its soldiers should wear different colored berets to represent the capability they brought to the division: black for armor, light–blue for infantry, red for artillery, and kelly–green for support—later settling for black berets across all formations. As they became available, 1st Cavalry soldiers would affix a battalion- or squadron-specific organizational beret flash of various shapes, colors, and materials to their beret. Historical photographs show many 1st Cavalry soldiers wearing their berets in the same manner as US armored cavalry soldiers in West Germany. The use of black berets extended to training units as well, such as the US Army Training and Doctrine Command and its armor school. Historical photographs of the era show plastic triangles being worn on the black berets of US Army Armor School cadre in the same manner as beret flashes are today. Department of Defense beret flash history: In 1973, Army leaders authorized the wear of the maroon beret by airborne forces. Within a year or so, paratroopers of the 82nd Airborne Division began incorporating organizational beret flashes onto their maroon berets patterned after their unit's airborne background trimming. These organizational beret flashes, representing various units of the 82nd, were worn in the same manner as they are today. Similarly, in 1974 Army leaders authorized the 101st Airborne Division to wear the dark–blue beret when it was reorganized into an air assault division at Fort Campbell. Army articles and historical photographs of 101st soldiers show them wearing organizational beret flashes patterned after their unit's airborne background trimming, affixed with either their polished metal rank insignia, DUI, or chaplain branch insignia centered on the beret flash and worn over the left eye. Between 1976 and 1977, 101st soldiers would affix their Airmobile Badge—renamed Air Assault Badge in 1978—to their berets positioned over their left temple, next to their beret flash. Other Fort Campbell units of the era also wore the dark–blue beret as well as red for headquarters command and light-green for military police, all with traditional organizational beret flashes that were worn in the same manner as they are today. Department of Defense beret flash history: Also during the 1970s, arctic–qualified soldiers of the 172nd Infantry Brigade wore locally authorized olive–drab berets with organizational beret flashes that were unique to each battalion, company, troop, or battery of the brigade and were worn in the same manner as they are today. By 1979, the Army put a stop to the use of berets by conventional forces, leaving only special forces and ranger units the authority to wear berets. 
Department of Defense beret flash history: 1980s In 1980, the Army reversed part of its decision allowing airborne units to wear maroon berets, ranger units black berets and special forces units rifle–green berets. The Army's 1981 uniform regulation describes the wear of these berets with the only authorized accoutrements being organizational beret flashes or recognition bars with officer rank insignia, chaplain branch insignia, or DUI affixed.The organizational beret flash did not become the norm across the Army until 1984 when the recognition bar was discontinued after the Special Forces Tab became authorized for wear by special forces qualified paratroopers. Today, all paratroopers assigned to a special forces unit wear their unit's organizational beret flash on either a rifle–green beret, for special forces qualified paratroopers, or a maroon beret, for support paratroopers. Department of Defense beret flash history: 2000–present In 2000, the Chief of Staff of the Army, General Eric Shinseki, decided to make the black beret the standard headgear of the Army. This was codified in regulations in 2001 and was amended in 2011 making the black beret optional headgear with certain uniforms. Due to this change, the 75th Ranger Regiment was authorized to switch from black to tan berets in 2001, given the black beret was no longer going to be a distinctive uniform item for the regiment. General Shinseki also decided that a new Department of the Army Beret Flash be worn on the black beret. According to The Institute of Heraldry, the Department of the Army Beret Flash is designed to resemble the flag of the Commander-in-Chief of the Continental Army flown at the siege of Yorktown during the American Revolutionary War; a light–blue flag with thirteen white stars representing the Thirteen Colonies. According to Department of the Army Pamphlet 670–1, the Department of the Army Beret Flash is to be worn by all units "unless authorization for another flash was granted before implementing the black beret as a standard Army headgear." Army units can request an organizational beret flash for their formation from The Institute of Heraldry given it is not for wear on the black beret. A good example of this is The Institute of Heraldry's 2018 authorization of organizational beret flashes for the Security Force Assistance Command and its brigades (SFABs) for wear on their brown beret. Department of Defense beret flash history: In the 21st century, unlike the Department of the Army Beret Flash, Army organizational beret flashes signify a specific formation of a specialized unit, such as an active airborne, ranger, special forces, or combat advisor unit. However, there is a unique generic Special Forces Beret Flash worn by special forces qualified paratroopers on their rifle–green beret when assigned to a unit not authorized an organizational beret flash; this is due to the rifle–green beret now representing a paratrooper's special forces qualification—in addition to the Special Forces Tab—rather than a special forces unit as it once did in the 60s, 70s, and early 80s. Department of Defense beret flash history: US Air Force Weather Parachutists In the mid 1960s, Air Force commando weathermen, formally known as weather parachutists, with Detachment 26 of the 30th Weather Squadron and Detachment 32 of the 5th Weather Squadron informally wore black berets. 
A black cloth rectangle with a yellow embroidered anemometer surmounted by a fleur–de–lis with the words "Combat Weather" split by the anemometer was used as their beret flash. Department of Defense beret flash history: From 1970 through the 1980s, weather parachutists with the 5th Weather Squadron wore maroon berets with an Army style beret flash that incorporated the squadron's design and colors from their emblem's alchemical symbol for water and affixed their Parachutist Badge to the flash. In 1979, weather parachutists were authorized to wear navy–blue berets with an Army style beret flash consisting of a blue and black field surrounded by yellow piping. Enlisted and NCOs affixed their Parachutist Badge to the flash, while officers affixed their polished metal rank insignia. In 1986, the gray beret was authorized for wear by weather parachutists, who continued to wear the aforementioned cloth beret flash until a new large color metallic Special Operations Weather Team Crest was authorized. In 1992, the Air Force approved the return of the weather parachutist's blue, black, and yellow beret flash from the 1970s and affixed their large color metal Special Operations Weather Team Crest to it. Department of Defense beret flash history: In 1996, weather parachutists assigned to Air Force Special Operations Command (AFSOC) began wearing a new Army style beret flash, known as the Special Operations Weather Team Beret Flash, while those assigned to Air Combat Command, known as Combat Weather Teams, continued to wear the blue, black and yellow beret flash. The Special Operations Weather Team Beret Flash consisted of a red border representing the blood shed by their predecessors, a black background representing special operations, and three diagonal lines of various colors representing the services they supported (green=Army, purple=joint forces, and blue=Air Force). Officers affixed their polished metal rank insignia while enlisted and NCOs affixed their Parachutist Badge to the Special Operations Weather Team Beret Flash until 2002, when the Combat Weather Team Crest was created. The Combat Weather Team Crest was affixed to both Special Operations Weather Team and Combat Weather Team Beret Flashes by enlisted and NCOs, while officers continued to affix their polished metal rank insignia to the appropriate beret flash. In 2007/2008, the Special Operations Weather Team Beret Flash stopped being worn, and in 2009—when the Special Operations Weather AFSC was established—a new large polished metal Special Operations Weather Crest was approved for wear by special operations weather teams, with a modified version of the crest being worn by the now redesignated special reconnaissance airmen in 2019. Department of Defense beret flash history: Security Forces In 1966/67, the newly formed 1041st Security Police Squadron was authorized to wear a dark–blue beret with a unique organizational beret flash. The 1041st's beret flash has a depiction of a white falcon carrying a pair of lightning bolts on a somewhat pointed oval-shaped light–blue cloth shield that was worn over the left temple. In 1976, the Air Force approved the navy-blue beret as the official uniform item for all Air Force police and security forces. In 1997, the Air Force stood up the security forces AFSC, combining Air Force police and security forces into one career field, and honored the heraldry of the 1041st Security Police Squadron by creating a new organizational beret flash for all security forces airmen and NCOs. 
The new Security Forces Beret Flash depicts the 1041st's falcon over an airfield on a blue shield–shaped patch bordered in gold with a white scroll at its base embroidered with the motto "Defensor Fortis" (defenders of the force) in dark–blue title case. Security forces officers wear the same basic beret flash minus the embroidered falcon and airfield, and in their place affix their polished metal rank insignia. Department of Defense beret flash history: TACP In 1979, TACP airmen and NCOs were given authorization to wear the black beret. In 1984, two TACPs submitted a design for a unique beret flash and crest for wear on their berets, which the Air Force approved one year later. The TACP Beret Flash consists of a scarlet border that represents the firepower TACPs bring to bear, with two dovetailed fields of blue and green representing the close working relationship between the Air Force and the Army that is enabled by the TACP. TACP officers also wear the TACP Beret Flash and Crest but with miniature polished metal rank insignia below the crest and just above the inner–border of the beret flash. Air liaison officers assigned to an air support operations squadron or group can also be given authorization to wear the black beret and TACP Beret Flash with full-size polished metal officer rank insignia (no crest). Some Air Mobility Liaison Officers also wore the black beret. Although worn informally before then, in 2015 The Institute of Heraldry authorized a slight modification of the TACP Beret Flash for wear by Air Mobility Liaison Officers, incorporating an embroidered compass rose in the upper–left corner of the flash. The Air Mobility Liaison Officer Beret Flash was worn in the same manner as Air Liaison Officers wear the TACP Beret Flash. Department of Defense beret flash history: Combat Aviation Advisors From 2018 to 2022, AFSOC authorized the wear of the brown beret for airmen, NCOs, and officers assigned to what were known as combat aviation advisor squadrons, such as the 6th and 711th Special Operations Squadrons. The brown beret—similar to the Army's brown beret—was worn with an Army style organizational beret flash consisting of a blue field with olive–green diagonal stripes and border. The Combat Aviation Advisor Beret Flash was worn centered over the left eye with polished metal officer rank insignia, chaplain branch insignia, or an AFSC metallic beret crest affixed to the beret flash, while all other advisors wore it without accoutrements. Department of Defense beret flash history: US Navy In the 1960s, select US Navy riverine patrol units operating in South Vietnam adopted the black beret to be part of their daily uniform and wore various accouterments on their berets. In 1967, the Commander of the Riverine Patrol Force sent an official message to the Commander of River Patrol Flotilla Five authorizing the wear of the black beret. In this message, the wear and appearance of the beret were defined, stating, "Beret will be worn with river patrol force insignia centered on right side" and "Only standard size river patrol force insignia will be worn on beret. ... No other emblem or rank insignia will be displayed on beret." Today, these US Navy small boat units honor their heritage by wearing the black beret during special occasions—such as induction ceremonies into the Gamewardens Association—and will affix historically relevant riverine task force insignia for use as their beret flash. 
Beret flashes of the US military: The accompanying image galleries (not reproduced here) group current and obsolete beret flashes by organization: Joint; Army (adjutant general, air defense artillery, armor and cavalry, aviation, chemical, civil affairs, engineers, field artillery, infantry, logistics, medical, military intelligence, military police, multidisciplinary units, ordnance, psychological operations, public affairs, signal, special forces, and training); Air Force; and state defense forces. State defense forces The US state defense forces—also known as state guard, state military reserve, or state militia—in many US states and territories wear modified versions of Army uniforms. To help separate state guard units from Army units, such as the Army National Guard, they will often wear unique name tapes, badges, shoulder sleeve insignia, and/or headgear. If the militia unit chooses to wear the Army black beret, a unique organizational beret flash is worn to help further distinguish them from Army units. These state military reserve organizational beret flashes are worn in the same manner as today's Army organizational beret flashes, with state- or territory-specific designs, both current and obsolete.
**Resolution independence** Resolution independence: Resolution independence means that elements on a computer screen are rendered at sizes independent of the pixel grid, resulting in a graphical user interface that is displayed at a consistent physical size, regardless of the resolution of the screen. Concept: As early as 1978, Donald Knuth's typesetting system TeX introduced resolution independence into the world of computers. The intended view can be rendered beyond the atomic resolution without any artifacts, and the automatic typesetting decisions are guaranteed to be identical on any computer up to an error less than the diameter of an atom. This pioneering system has a corresponding font system, Metafont, which provides suitable fonts of the same high standards of resolution independence. Concept: The device independent file format (DVI) is the output file format of Knuth's pioneering TeX system. The content of such a file can be interpreted at any resolution without any artifacts, even at very high resolutions not currently in use. Implementation: macOS Apple included some support for resolution independence in early versions of macOS, which could be demonstrated with the developer tool Quartz Debug, which included a feature allowing the user to scale the interface. However, the feature was incomplete, as some icons did not show (such as in System Preferences), user interface elements were displayed at odd positions, and certain bitmap GUI elements were not scaled smoothly. Because the scaling feature was never completed, macOS's user interface remained resolution-dependent. Implementation: On June 11, 2012, Apple introduced the 2012 MacBook Pro with a resolution of 2880×1800 or 5.2 megapixels – doubling the pixel density in both dimensions. The laptop shipped with a version of macOS that provided support for scaling the user interface to twice its previous size. This feature is called HighDPI mode in macOS, and it uses a fixed scaling factor of 2 to increase the size of the user interface for high-DPI screens. Apple also introduced support for scaling the UI by rendering the user interface at a higher or lower resolution than the laptop's built-in native resolution and scaling the output to the laptop screen. One obvious downside of this approach is either decreased performance when rendering the UI at a higher-than-native resolution or increased blurriness when rendering at a lower-than-native resolution. Thus, while the macOS user interface can be scaled using this approach, the UI itself is not resolution-independent. Implementation: Microsoft Windows The GDI system in Windows is pixel-based and thus not resolution-independent. To scale up the UI, Microsoft Windows has supported specifying a custom DPI from the Control Panel since Windows 95. (In Windows 3.1, the DPI setting is tied to the screen resolution, depending on the driver information file.) When a custom system DPI is specified, the built-in UI in the operating system scales up. Windows also includes APIs for application developers to design applications that will scale properly. Implementation: GDI+ in Windows XP adds resolution-independent text rendering; however, the UI in Windows versions up to Windows XP is not completely high-DPI aware, as displays with very high resolutions and high pixel densities were not available in that time frame. Windows Vista and Windows 7 scale better at higher DPIs. 
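The arithmetic behind these custom DPI settings is simple: lengths designed against the 96 DPI baseline are multiplied by the ratio of the actual DPI to 96. A minimal Python sketch (the helper name and sample sizes are illustrative only):

```python
def scale_for_dpi(design_px: float, dpi: float, baseline: float = 96.0) -> int:
    """Scale a length designed at the 96 DPI baseline to the actual display DPI."""
    return round(design_px * dpi / baseline)

# A 13 px font and a 24 px icon at 120 DPI (the classic 125 % setting)
# and at 144 DPI (150 %):
print(scale_for_dpi(13, 120), scale_for_dpi(24, 120))   # 16 30
print(scale_for_dpi(13, 144), scale_for_dpi(24, 144))   # 20 36
```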
Implementation: Windows Vista also adds support for programs to declare to the OS that they are high-DPI aware via a manifest file or using an API. For programs that do not declare themselves as DPI-aware, Windows Vista supports a compatibility feature called DPI virtualization, so that system metrics and UI elements are presented to applications as if they were running at 96 DPI, and the Desktop Window Manager then scales the resulting application window to match the DPI setting. Windows Vista retains the Windows XP style scaling option, which, when enabled, turns off DPI virtualization (and its blurry text) for all applications globally. Implementation: Windows Vista also introduces Windows Presentation Foundation. WPF applications are vector-based, not pixel-based, and are designed to be resolution-independent. Windows 7 adds the ability to change the DPI requiring only a log off rather than a full reboot, and makes it a per-user setting. Additionally, Windows 7 reads the monitor DPI from the EDID and automatically sets the DPI value to match the monitor's physical pixel density, unless the effective resolution is less than 1024 x 768. Implementation: In Windows 8, only the DPI scaling percentage is shown in the DPI changing dialog, and the display of the raw DPI value has been removed. In Windows 8.1, the global setting to disable DPI virtualization (only use XP-style scaling) is removed. At pixel densities higher than 120 PPI (125%), DPI virtualization is enabled for all applications without a DPI-aware flag (manifest) set inside the EXE. Windows 8.1 retains a per-application option to disable DPI virtualization of an app. Windows 8.1 also adds the ability for each display to use an independent DPI setting, although it calculates this automatically for each display. Windows 8.1 prevents a user from forcibly enabling DPI virtualization of an application. Therefore, if an application wrongly claims to be DPI-aware, it will look too small on high-DPI displays in 8.1, and a user cannot correct that. Windows 10 adds manual control over DPI for individual monitors. In addition, Windows 10 version 1703 brings back the XP-style GDI scaling under a "System (Enhanced)" option. This option combines GDI+'s text rendering at a higher resolution with the usual scaling of other elements, so that text appears crisper than in the normal "System" virtualization mode. Implementation: Android Since Android 1.6 "Donut" (September 2009), Android has provided support for multiple screen sizes and densities. Android expresses layout dimensions and position via the density-independent pixel, or "dp", which is defined as one physical pixel on a 160 dpi screen. At runtime, the system transparently handles any scaling of the dp units, as necessary, based on the actual density of the screen in use. To aid in the creation of underlying bitmaps, Android categorizes resources based on screen size and density. X Window System The Xft library, the font rendering library for the X11 system, has a dpi setting that defaults to 75. This is simply a wrapper around the FC_DPI system in fontconfig, but it suffices for scaling the text in Xft-based applications. The mechanism is also detected by desktop environments to set their own DPI, usually in conjunction with the EDID-based DisplayWidthMM family of Xlib functions. 
The latter has been rendered ineffective in Xorg Server 1.7; since then, EDID information is only exposed to XRandR. In 2013, the GNOME desktop environment began efforts to bring resolution independence ("hi-DPI" support) to various parts of the graphics stack. Developer Alexander Larsson initially wrote about changes required in GTK+, Cairo, Wayland and the GNOME themes. At the end of the BoF sessions at GUADEC 2013, GTK+ developer Matthias Clasen mentioned that hi-DPI support would be "pretty complete" in GTK 3.10 once the work on Cairo was completed. As of January 2014, hi-DPI support for Clutter and GNOME Shell is ongoing work. GTK supports scaling all UI elements by integer factors, and all text by any non-negative real factor. As of 2019, fractional scaling of the UI by scaling up and then down is experimental. Implementation: Other Although not related to true resolution independence, some other operating systems use GUIs that are able to adapt to changed font sizes. Microsoft Windows 95 onwards used the Marlett TrueType font in order to scale some window controls (close, maximize, minimize, resize handles) to arbitrary sizes. AmigaOS from version 2.04 (1991) was able to adapt its window controls to any font size. Video games are often resolution-independent; an early example is Another World for DOS, which used polygons to draw its 2D content and was later remade using the same polygons at a much higher resolution. 3D games are resolution-independent since the perspective is calculated every frame, so the rendering resolution can be varied freely.
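All of the scaling schemes described above boil down to mapping logical units onto physical pixels with a density-dependent factor. The sketch below shows this arithmetic under stated assumptions: the 96 DPI Windows baseline and the 160 dpi Android dp baseline are the figures given above (so the 120 PPI case corresponds to the 125% mentioned for Windows 8.1), while the 48-unit element size and the example densities are purely illustrative.

```python
# Map logical units to physical pixels using the baselines described above:
# Windows scales against a 96 DPI baseline; Android defines 1 dp = 1 px at 160 dpi.

def scale_factor(screen_dpi, baseline_dpi):
    """Scaling factor applied to logical units for a screen of the given density."""
    return screen_dpi / baseline_dpi

def to_pixels(logical, screen_dpi, baseline_dpi):
    """Convert a logical dimension to physical pixels for the given screen density."""
    return round(logical * scale_factor(screen_dpi, baseline_dpi))

# A 48-unit element at several illustrative densities:
for dpi in (96, 120, 144, 192):
    pct = scale_factor(dpi, 96)
    print(f"Windows {dpi} DPI ({pct:.0%}): 48 logical units -> {to_pixels(48, dpi, 96)} px")
for dpi in (160, 320, 480):
    print(f"Android {dpi} dpi: 48 dp -> {to_pixels(48, dpi, 160)} px")
```

The point of the baseline is that layouts specified in logical units keep roughly the same physical size on screens of different pixel densities; only the number of pixels used to draw them changes.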
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Göbel's sequence** Göbel's sequence: In mathematics, Göbel's sequence is a sequence of rational numbers defined by the recurrence relation x_n = (1 + x_0^2 + x_1^2 + ⋯ + x_{n-1}^2) / n, with starting value x_0 = 1. Göbel's sequence starts with 1, 1, 2, 3, 5, 10, 28, 154, 3520, 1551880, ... (sequence A003504 in the OEIS). The first non-integral value is x_43. Generalization: Göbel's sequence can be generalized to kth powers by x_n = (1 + x_0^k + x_1^k + ⋯ + x_{n-1}^k) / n. The least indices at which the k-Göbel sequences assume a non-integral value are 43, 89, 97, 214, 19, 239, 37, 79, 83, 239, ... (sequence A108394 in the OEIS).
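The recurrence is easy to check with exact rational arithmetic. The sketch below computes the first terms of the k-Göbel sequence; note that the terms grow so quickly (their digit counts roughly double at each step) that confirming the non-integrality of x_43 in practice relies on modular-arithmetic tricks rather than exact computation, and that the OEIS listing quoted above appears to include an extra leading 1 due to its own offset convention.

```python
from fractions import Fraction

def gobel(k=2, terms=10):
    """First `terms` values of the k-Goebel sequence:
    x_0 = 1 and x_n = (1 + x_0^k + ... + x_{n-1}^k) / n."""
    xs = [Fraction(1)]
    running = Fraction(1) + xs[0] ** k      # 1 + sum of k-th powers of the terms so far
    for n in range(1, terms):
        x = running / n
        xs.append(x)
        running += x ** k
    return xs

for n, x in enumerate(gobel(k=2, terms=10)):
    print(n, x, "" if x.denominator == 1 else "(not an integer)")
# prints 1, 2, 3, 5, 10, 28, 154, 3520, 1551880, 267593772160
```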
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Program Segment Prefix** Program Segment Prefix: The Program Segment Prefix (PSP) is a data structure used in DOS systems to store the state of a program. It resembles the Zero Page in the CP/M operating system. The PSP has the following structure: The PSP is most often used to get the command line arguments of a DOS program; for example, the command "FOO.EXE /A /F" executes FOO.EXE with the arguments '/A' and '/F'. Program Segment Prefix: If the PSP entry for the command line length is non-zero and the pointer to the environment segment is neither 0000h nor FFFFh, programs should first try to retrieve the command line from the environment variable %CMDLINE% before extracting it from the PSP. This way, it is possible to pass command lines longer than 126 characters to applications. Program Segment Prefix: The segment address of the PSP is passed in the DS register when the program is executed. It can also be determined later by using Int 21h function 51h or Int 21h function 62h. Either function will return the PSP address in register BX. Alternatively, in .COM programs loaded at offset 100h, one can address the PSP directly just by using the offsets listed above. Offset 000h points to the beginning of the PSP, 0FFh points to the end, etc. Program Segment Prefix: For example, the following code displays the command line arguments: In DOS 1.x, it was necessary for the CS (Code Segment) register to contain the same segment as the PSP at program termination. Standard programming practice therefore involved saving the DS register (since the DS register is loaded with the PSP segment) along with a zero word on the stack at program start, and terminating the program with a RETF instruction, which would pop the saved segment value off the stack and jump to address 0 of the PSP, which contained an INT 20h instruction. Program Segment Prefix: If the executable was a .COM file, this procedure was unnecessary and the program could be terminated merely with a direct INT 20h instruction or by calling INT 21h function 0. However, the programmer still had to ensure that the CS register contained the segment address of the PSP at program termination. Thus, in DOS 2.x and higher, program termination was accomplished instead with INT 21h function 4Ch, which did not require the CS register to contain the segment value of the PSP.
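The assembly listing referred to above is not reproduced here, but the command-tail layout it relies on is simple: the byte at PSP offset 80h holds the number of characters in the command tail, and the text itself starts at offset 81h, conventionally followed by a carriage return (0Dh). The following hypothetical Python sketch parses that layout out of a raw 256-byte PSP image, for instance one captured in a memory dump:

```python
# Hypothetical helper: extract the DOS command tail from a 256-byte PSP image.
# Offset 80h holds the character count; the text begins at offset 81h and is
# conventionally followed by a carriage return (0Dh).

def command_tail(psp: bytes) -> str:
    if len(psp) < 256:
        raise ValueError("a PSP is 256 bytes long")
    length = psp[0x80]
    return psp[0x81:0x81 + length].decode("ascii", errors="replace")

# Build a fake PSP whose command tail is " /A /F" (as in "FOO.EXE /A /F").
image = bytearray(256)
args = b" /A /F"
image[0x80] = len(args)
image[0x81:0x81 + len(args)] = args
image[0x81 + len(args)] = 0x0D            # trailing carriage return
print(repr(command_tail(bytes(image))))   # ' /A /F'
```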
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Surface exposure dating** Surface exposure dating: Surface exposure dating is a collection of geochronological techniques for estimating the length of time that a rock has been exposed at or near Earth's surface. Surface exposure dating is used to date glacial advances and retreats, erosion history, lava flows, meteorite impacts, rock slides, fault scarps, cave development, and other geological events. It is most useful for rocks which have been exposed for between 10³ and 10⁶ years. Cosmogenic radionuclide dating: The most common of these dating techniques is cosmogenic radionuclide dating. Earth is constantly bombarded with primary cosmic rays, high energy charged particles – mostly protons and alpha particles. These particles interact with atoms in atmospheric gases, producing a cascade of secondary particles that may in turn interact and reduce their energies in many reactions as they pass through the atmosphere. This cascade includes a small fraction of hadrons, including neutrons. When one of these particles strikes an atom it can dislodge one or more protons and/or neutrons from that atom, producing a different element or a different isotope of the original element. In rock and other materials of similar density, most of the cosmic ray flux is absorbed within the first meter of exposed material in reactions that produce new isotopes called cosmogenic nuclides. At Earth's surface most of these nuclides are produced by neutron spallation. Using certain cosmogenic radionuclides, scientists can date how long a particular surface has been exposed, how long a certain piece of material has been buried, or how quickly a location or drainage basin is eroding. The basic principle is that these radionuclides are produced at a known rate, and also decay at a known rate. Accordingly, by measuring the concentration of these cosmogenic nuclides in a rock sample, and accounting for the flux of the cosmic rays and the half-life of the nuclide, it is possible to estimate how long the sample has been exposed to the cosmic rays. The cumulative flux of cosmic rays at a particular location can be affected by several factors, including elevation, geomagnetic latitude, the varying intensity of the Earth's magnetic field, solar winds, and atmospheric shielding due to air pressure variations. Rates of nuclide production must be estimated in order to date a rock sample. These rates are usually estimated empirically by comparing the concentration of nuclides produced in samples whose ages have been dated by other means, such as radiocarbon dating, thermoluminescence, or optically stimulated luminescence. Cosmogenic radionuclide dating: The excess relative to natural abundance of cosmogenic nuclides in a rock sample is usually measured by means of accelerator mass spectrometry. Cosmogenic nuclides such as these are produced by chains of spallation reactions. The production rate for a particular nuclide is a function of geomagnetic latitude, the amount of sky that can be seen from the point that is sampled, elevation, sample depth, and density of the material in which the sample is embedded. Decay rates are given by the decay constants of the nuclides. These equations can be combined to give the total concentration of cosmogenic radionuclides in a sample as a function of age. The two most frequently measured cosmogenic nuclides are beryllium-10 and aluminum-26. These nuclides are particularly useful to geologists because they are produced when cosmic rays strike oxygen-16 and silicon-28, respectively.
The parent isotopes are the most abundant of these elements, and are common in crustal material, whereas the radioactive daughter nuclei are not commonly produced by other processes. As oxygen-16 is also common in the atmosphere, the contribution to the beryllium-10 concentration from material deposited rather than created in situ must be taken into account. 10Be and 26Al are produced when a portion of a quartz crystal (SiO2) is bombarded by a spallation product: oxygen of the quartz is transformed into 10Be and the silicon is transformed into 26Al. Each of these nuclides is produced at a different rate. Both can be used individually to date how long the material has been exposed at the surface. Because there are two radionuclides decaying, the ratio of concentrations of these two nuclides can be used without any other knowledge to determine an age at which the sample was buried past the production depth (typically 2–10 meters). Cosmogenic radionuclide dating: Chlorine-36 nuclides are also measured to date surface rocks. This isotope may be produced by cosmic ray spallation of calcium or potassium.
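As a concrete illustration of how a measured concentration is turned into an exposure age, the sketch below uses the simplest form of the production–decay balance, assuming no erosion, no inherited nuclides and a constant production rate P: N(t) = (P/λ)(1 − e^(−λt)), which inverts to t = −ln(1 − Nλ/P)/λ. The numerical values are purely illustrative; real studies use site-specific, scaled production rates.

```python
import math

def exposure_age(N, P, half_life):
    """Exposure age in years from a nuclide concentration N (atoms/g), a surface
    production rate P (atoms/g/yr) and a half-life (yr), assuming no erosion,
    no inheritance and a constant production rate."""
    lam = math.log(2) / half_life                 # decay constant (1/yr)
    return -math.log(1.0 - N * lam / P) / lam

# Purely illustrative numbers for a Be-10 measurement (half-life about 1.39 Myr):
P = 4.0      # atoms per gram of quartz per year (hypothetical local production rate)
N = 4.0e4    # measured concentration in atoms per gram (hypothetical)
print(f"apparent exposure age: {exposure_age(N, P, 1.39e6):,.0f} years")   # roughly 10,000
```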
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glutaryl-CoA dehydrogenase (non-decarboxylating)** Glutaryl-CoA dehydrogenase (non-decarboxylating): Glutaryl-CoA dehydrogenase (non-decarboxylating) (EC 1.3.99.32, GDHDes, nondecarboxylating glutaryl-coenzyme A dehydrogenase, nondecarboxylating glutaconyl-coenzyme A-forming GDH) is an enzyme with systematic name glutaryl-CoA:acceptor 2,3-oxidoreductase (non-decarboxylating). This enzyme catalyses the following chemical reaction: glutaryl-CoA + acceptor ⇌ (E)-glutaconyl-CoA + reduced acceptor. The enzyme contains FAD.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Korean numerals** Korean numerals: The Korean language has two regularly used sets of numerals: a native Korean system and a Sino-Korean system. The native Korean number system is used for general counting, like counting up to 99. It is also used to count people, hours, objects, ages, and more. Sino-Korean numbers, on the other hand, are used for purposes such as dates, money, minutes, addresses, phone numbers, and numbers above 100. Construction: For both native and Sino-Korean numerals, the teens (11 through 19) are represented by a combination of the tens and ones places. For instance, 15 would be sib-o (십오; 十五), but not usually il-sib-o in the Sino-Korean system, and yeol-daseot (열다섯) in native Korean. Twenty through ninety are likewise represented in this place-holding manner in the Sino-Korean system, while native Korean has its own unique set of words, as can be seen in the chart below. The grouping of large numbers in Korean follows the Chinese tradition of myriads (10000) rather than thousands (1000). The Sino-Korean system is nearly entirely based on the Chinese numerals (a short illustrative sketch of this place-value construction appears at the end of this article). Construction: The distinction between the two numeral systems is very important. Everything that can be counted will use one of the two systems, but seldom both. Sino-Korean words are sometimes used to mark ordinal usage: yeol beon (열 번) means "ten times" while sip beon (십번; 十番) means "number ten." When denoting the age of a person, one will usually use sal (살) for the native Korean numerals, and se (세; 歲) for Sino-Korean. For example, seumul-daseot sal (스물다섯 살) and i-sib-o se (이십오 세; 二十五 歲) both mean 'twenty-five-year-old'. See also East Asian age reckoning. Construction: The Sino-Korean numerals are used to denote the minute of time. For example, sam-sib-o bun (삼십오 분; 三十五 分) means "__:35" or "thirty-five minutes." The native Korean numerals are used for the hours in the 12-hour system and for the hours 0:00 to 12:00 in the 24-hour system. The hours 13:00 to 24:00 in the 24-hour system are denoted using both the native Korean numerals and the Sino-Korean numerals. For example, se si (세 시) means '03:00' or '3:00 a.m./p.m.' and sip-chil si (십칠 시; 十七 時) or yeol-ilgop si (열일곱 시) means '17:00'. Construction: Some of the native numbers take a different form in front of measure words: the descriptive forms for 1, 2, 3, 4, and 20 are formed by "dropping the last letter" from the original native cardinal, so to speak. Examples: 한 번 han beon ("once") 두 개 du gae ("two things") 세 시 se si ("three o'clock"), in contrast, in North Korea the Sino-Korean numeral 삼 "sam" would normally be used, making it 삼시 "sam si" 네 명 ne myeong ("four people") 스무 마리 seumu mari ("twenty animals"). Something similar also occurs in some Sino-Korean cardinals: 오뉴월 onyuwol ("May and June") 유월 yuwol ("June") 시월 siwol ("October"). The cardinals for three and four have alternative forms in front of some measure words: 석 달 seok dal ("three months") 넉 잔 neok jan ("four cups"). Korean has several words formed with two or three consecutive numbers. Some of them have irregular or alternative forms.
Construction: 한둘 handul ("one or two") / 한두 handu ("one or two" in front of measure words) 두셋 duset ("two or three") / 두세 duse ("two or three" in front of measure words) 서넛 seoneot ("three or four") / 서너 seoneo ("three or four" in front of measure words) 두서넛 duseoneot ("two or three or four") / 두서너 duseoneo ("two or three or four" in front of measure words) 너덧 neodeot, 네댓 nedaet, 네다섯 nedaseot, 너더댓 neodeodaet ("four or five") 대여섯 daeyeoseot, 대엿 daeyeot ("five or six") 예닐곱 yenilgop ("six or seven") 일고여덟 ilgoyeodeol, 일여덟 ilyeodeol ("seven or eight") 여덟아홉 yeodeolahop, 엳아홉 yeotahop ("eight or nine")As for counting days in native Korean, another set of unique words are used: 하루 haru ("one day") 이틀 iteul ("two days") 사흘 saheul ("three days") 사나흘 sanaheul, 사날 sanal ("three or four days") 나흘 naheul ("four days") 네댓새 nedaessae, 너댓새 neodaessae, 너더댓새 neodeodaessae, 나달 nadal ("four or five days") 닷새 dassae ("five days") 대엿새 daeyeossae ("five or six days") 엿새 yeossae ("six days") 예니레 yenire ("six or seven days") 이레 ire ("seven days") 일여드레 ilyeodeure ("seven or eight days") 여드레 yeodeure ("eight days") 아흐레 aheure ("nine days") 열흘 yeolheul ("ten days")The native Korean saheul (사흘; three days) is often misunderstood as the Sino-Korean sail (사일; 四日; four days) due to similar sounds. The two words are different in origin and have different meanings. Pronunciation: The initial consonants of measure words and numbers following the native cardinals 여덟 ("eight", only when the ㅂ is not pronounced) and 열 ("ten") become tensed consonants when possible. Thus for example: 열둘 (twelve) is pronounced like [열뚤] 여덟권 (eight (books)) is pronounced like [여덜꿘]Several numerals have long vowels, namely 둘 (two), 셋 (three) and 넷 (four), but these become short when combined with other numerals / nouns (such as in twelve, thirteen, fourteen and so on). Pronunciation: The usual liaison and consonant-tensing rules apply, so for example, 예순여섯 yesun-yeoseot (sixty-six) is pronounced like [예순녀섣] (yesun-nyeoseot) and 칠십 chil-sip (seventy) is pronounced like [칠씹] chil-ssip. Constant suffixes used in Sino-Korean ordinal numerals: Beon (번; 番), ho (호; 號), cha (차; 次), and hoe (회; 回) are always used with Sino-Korean or Arabic ordinal numerals. For example, Yihoseon (이호선; 二號線) is Line Number Two in a metropolitan subway system. Samsipchilbeongukdo (37번국도; 37番國道) is highway number 37. They cannot be used interchangeably. 906호 (號) is 'Apt #906' in a mailing address. 906 without ho (호) is not used in spoken Korean to imply apartment number or office suite number. The special prefix je (제; 第) is usually used in combination with suffixes to designate a specific event in sequential things such as the Olympics. Substitution for disambiguation: In commerce or the financial sector, some hanja for each Sino-Korean numbers are replaced by alternative ones to prevent ambiguity or retouching. For verbally communicating number sequences such as phone numbers, ID numbers, etc., especially over the phone, native Korean numbers for 1 and 2 are sometimes substituted for the Sino-Korean numbers. For example, o-o-o hana-dul-hana-dul (오오오 하나둘하나둘) instead of o-o-o il-i-il-i (오오오 일이일이) for '555-1212', or sa-o-i-hana (사-오-이-하나) instead of sa-o-i-il (사-오-이-일) for '4521', because of the potential confusion between the two similar-sounding Sino-Korean numbers. 
Substitution for disambiguation: For the same reason, military transmissions are known to use mixed native Korean and Sino-Korean numerals: Notes: Note 1: Korean assimilation rules apply as if the underlying form were 십륙 |sip.ryuk|, giving sim-nyuk instead of the expected sib-yuk. Note 2: These names are considered archaic, and are not used. Note 3: The numbers higher than 10²⁰ (hae) are not usually used. Note 4: The names for these numbers are from Buddhist texts; they are not usually used. Dictionaries sometimes disagree on which numbers the names represent.
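The Sino-Korean place-value construction described in the Construction section can be illustrated with a short, hypothetical sketch. It spells numbers positionally with 십 (10), 백 (100) and 천 (1000), dropping the leading 일 ("one") before those units in the way described for 15 (십오 sib-o rather than il-sib-o); the pronunciation and sound-change rules discussed above (e.g. sim-nyuk for 16) are deliberately ignored.

```python
# Hypothetical sketch of the Sino-Korean place-value construction (Hangul output only;
# pronunciation and sound-change rules are ignored).

DIGITS = {1: "일", 2: "이", 3: "삼", 4: "사", 5: "오",
          6: "육", 7: "칠", 8: "팔", 9: "구"}
UNITS = [(1000, "천"), (100, "백"), (10, "십")]

def sino_korean(n):
    """Spell an integer from 1 to 9999 in Sino-Korean numerals."""
    if not 1 <= n <= 9999:
        raise ValueError("this sketch only handles 1-9999")
    parts = []
    for value, name in UNITS:
        count, n = divmod(n, value)
        if count:
            parts.append((DIGITS[count] if count > 1 else "") + name)  # drop a leading 일
    if n:
        parts.append(DIGITS[n])
    return "".join(parts)

print(sino_korean(15))    # 십오   (sib-o)
print(sino_korean(25))    # 이십오 (i-sib-o)
print(sino_korean(2024))  # 이천이십사
```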
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thermoset polymer matrix** Thermoset polymer matrix: A thermoset polymer matrix is a synthetic polymer reinforcement where polymers act as binder or matrix to secure in place incorporated particulates, fibres or other reinforcements. They were first developed for structural applications, such as glass-reinforced plastic radar domes on aircraft and graphite-epoxy payload bay doors on the Space Shuttle. Thermoset polymer matrix: They were first used after World War II, and continuing research has led to an increased range of thermoset resins, polymers or plastics, as well as engineering grade thermoplastics. They were all developed for use in the manufacture of polymer composites with enhanced and longer-term service capabilities. Thermoset polymer matrix technologies also find use in a wide diversity of non-structural industrial applications. The foremost types of thermosetting polymers used in structural composites are benzoxazine resins, bis-maleimide resins (BMI), cyanate ester resins, epoxy (epoxide) resins, phenolic (PF) resins, unsaturated polyester (UP) resins, polyimides, polyurethane (PUR) resins, silicones, and vinyl esters. Benzoxazine resins: These are made by the reaction of phenols, formaldehyde and primary amines which at elevated temperatures (400 °F (200 °C)) undergo ring-opening polymerisation forming polybenzoxazine thermoset networks; when hybridised with epoxy and phenolic resins the resulting ternary systems have glass transition temperatures in excess of 490 °F (250 °C). Cure is characterised by expansion rather than shrinkage and uses include structural prepregs, liquid molding and film adhesives for composite construction, bonding and repair. The high aromatic content of the high molecular weight polymers provides enhanced mechanical and flammability performance compared to epoxy and phenolic resins. Bis-maleimides (BMI): Formed by the condensation reaction of a diamine with maleic anhydride, and processed basically like epoxy resins (350 °F (177 °C) cure). After an elevated post-cure (450 °F (232 °C)), they will exhibit superior properties. These properties are influenced by a 400-450 °F (204-232 °C) continuous use temperature and a glass transition of 500 °F (260 °C). This thermoset polymer type is merged into composites as a prepreg matrix used in electrical printed circuit boards, and for large scale structural aircraft – aerospace composite structures, etc. It is also used as a coating material and as the matrix of glass reinforced pipes, particularly in high temperature and chemical environments. Cyanate ester resins: The reaction of bisphenols or multifunctional phenol novolac resins with cyanogen bromide or chloride leads to cyanate functional monomers which can be converted in a controlled manner into cyanate ester functional prepolymer resins by chain extension or copolymerization. When postcured, all residual cyanate ester functionality polymerises by cyclotrimerisation leading to tightly crosslinked polycyanurate networks with high thermal stability and glass transition temperatures up to 752 °F (400 °C) and wet heat stability up to around 400 °F (200 °C). Cyanate ester resin prepregs combine the high temperature stability of polyimides with the flame and fire resistance of phenolics and are used in the manufacture of aerospace structural composite components which meet fire protection regulations concerning flammability, smoke density and toxicity. Other uses include film adhesives, surfacing films and 3D printing.
Epoxy (epoxide) resins: Epoxy resins are thermosetting prepolymers made either by the reaction of epichlorohydrin with hydroxyl functional aromatics, cycloaliphatics and aliphatics or amine functional aromatics, or by the oxidation of unsaturated cycloaliphatics. The diglycidyl ethers of bisphenol-A (DGEBA) and bisphenol-F (DGEBF) are the most widely used due to their characteristic high adhesion, mechanical strength, heat and corrosion resistance. Epoxide functional resins and prepolymers cure by polyaddition/copolymerisation or homopolymerisation depending on the selection of crosslinker, hardener, curing agent or catalyst, as well as on the temperature. Epoxy resin is used widely in numerous formulations and forms in the aircraft-aerospace industry. It is regarded as "the work-horse of modern day composites". In recent years, the epoxy formulations used in composite prepregs have been fine-tuned to improve their toughness, impact strength and moisture absorption resistance. Maximum properties have been realized for this polymer. Epoxy (epoxide) resins: Its use is not limited to aircraft and aerospace applications; it is also used in military and commercial applications and in construction. Epoxy-reinforced concrete and glass-reinforced and carbon-reinforced epoxy structures are used in building and bridge structures. Epoxy (epoxide) resins: Epoxy composites (high-strength glass fiber reinforced) have the following typical properties:
Relative density: 1.6-2.0
Melting temperature (°C): thermoset (does not melt)
Processing range (°F): C: 300-330, I: 280-380
Molding pressure: 1-5
Shrinkage: 0.001-0.008
Tensile strength (p.s.i.): 5,000-20,000
Compressive strength (p.s.i.): 18,000-40,000
Flexural strength (p.s.i.): 8,000-30,000
Izod impact strength (ft·lb/in): 0.3-10.0
Linear expansion (10⁻⁶ in./in./°C): 11-50
Hardness: Rockwell M100-112
Flammability: V-0
Water absorption, 24 h (%): 0.04-0.20
Epoxy Phenol Novolac (EPN) and Epoxy Cresol Novolac (ECN) resins, made by reacting epichlorohydrin with multifunctional phenol novolac or cresol novolac resins, have more reactive sites than DGEBF epoxy resins and on cure result in higher crosslink density thermosets. They are used in printed wire/circuit board laminating and also for electrical encapsulation, adhesives and coatings for metal where there is a need to provide protection from corrosion, erosion or chemical attack at high continuous operating temperatures. Phenolic (PF) resins: There are two types of phenolic resins - novolacs and resoles. Novolacs are made with acid catalysts and a molar ratio of formaldehyde to phenol of less than one to give methylene linked phenolic oligomers; resoles are made with alkali catalysts and a molar ratio of formaldehyde to phenol of greater than one to give phenolic oligomers with methylene and benzylic ether-linked phenol units. Phenolic resins, originally developed in the late 19th century and regarded as the first truly synthetic polymers, are often referred to as the "work-horse of thermosetting resins". They are characterised by high bonding strength, dimensional stability and creep resistance at elevated temperatures, and are frequently combined with co-curing resins such as epoxies. Phenolic (PF) resins: General purpose molding compounds, engineering molding compounds and sheet molding compounds are the primary forms of phenolic composites. Phenolics are also used as the matrix binder with honeycomb core.
Phenolics find use in many electrical applications such as breaker boxes and brake lining materials, and most recently, in combination with various reinforcements, in the molding of an engine block-head assembly called the polimotor. Phenolics may be processed by the various common techniques, including compression, transfer and injection molding. Phenolic (PF) resins: Phenolic composites (high-strength glass fiber reinforced) have the following typical properties:
Relative density: 1.69-2.0
Water absorption, 24 h (%): 0.03-1.2
Melting temperature (°C): thermoset (does not melt)
Processing range (°F): C: 300-380, I: 330-390
Molding pressure: 1-20
Shrinkage: 0.001-0.004
Tensile strength (p.s.i.): 7,000-18,000
Compressive strength (p.s.i.): 16,000-70,000
Flexural strength (p.s.i.): 12,000-60,000
Izod impact strength (ft·lb/in): 0.5-18.0
Linear expansion (10⁻⁶ in./in./°C): 8-21
Hardness: Rockwell E54-101
Flammability: V-0
Polyester resins: Unsaturated polyester resins are an extremely versatile and fairly inexpensive class of thermosetting polymer formed by the polycondensation of glycol mixtures (often containing propylene glycol) with dibasic acids and anhydrides, usually maleic anhydride to provide the backbone unsaturation needed for crosslinking, and phthalic anhydride, isophthalic acid or terephthalic acid where superior structural and corrosion resistance properties are required. Polyester resins are routinely diluted/dissolved in a vinyl functional monomer such as styrene and include an inhibitor to stabilize the resin for storage purposes. Polymerisation in service is initiated by free radicals generated from ionizing radiation or by the photolytic or thermal decomposition of a radical initiator. Organic peroxides, such as methyl ethyl ketone peroxide, together with auxiliary accelerators which promote their decomposition into radicals, are combined with the resin to initiate a room temperature cure. In the liquid state, unsaturated polyester resins may be processed by numerous methods, including hand layup, vacuum bag molding, and spray-up and compression molded Sheet Molding Compound (SMC). They can also be B-staged after application to chopped reinforcement and continuous reinforcement, to form pre-pregs. Solid molding compounds in the form of pellets or granules are also used in processes such as compression and transfer molding. Polyimides: There are two types of commercial polyimides: thermosetting cross-linkable polyimides, made by the condensation of aromatic diamines with aromatic dianhydride derivatives and anhydrides with unsaturated sites that facilitate addition polymerisation between preformed imide monomers and oligomers, and thermoplastic polyimides, formed by the condensation reaction between aromatic diamines and aromatic dianhydrides. Thermoset polyimides are the most advanced of all thermoset polymer matrices, with high-temperature physical and mechanical properties, and are available commercially as resin, prepreg, stock shapes, thin sheets/films, laminates, and machined parts. Along with the high temperature properties, this thermoset polymer type must be processed at very high temperatures and relatively high pressure to produce optimum characteristics. With prepreg materials, temperatures of 600 °F (316 °C) to 650 °F (343 °C) and pressures of 200 psi (1,379 kPa) are required. The cure profiles are inherently long, as there are a number of intermediate temperature dwells whose durations depend on part size and thickness.
The cut of polyimides is 450 °F (232 °C), the highest of all thermosets, with short-term exposure capability of 900 °F (482 °C). Normal operating temperatures range from cryogenic to 500 °F (260 °C). Polyimides: Polyimide composites have the following properties:
Good mechanical properties and retention at high temperatures
Good electrical properties
High wear resistance
Low creep at high temperatures
Good compression properties with glass or graphite fiber reinforcement
Good chemical resistance
Inherently flame resistant
Unaffected by most solvents and oils
Polyimide film possesses a unique combination of properties that make it ideal for a variety of applications in many different industries, especially as its excellent physical, electrical, and mechanical properties are maintained over a wide temperature range. High-performance polyimide resin is used in electrical, wear-resistant and structural materials when combined with reinforcement for aircraft and aerospace applications, where it replaces heavier, more expensive metals. High temperature processing causes some technical problems as well as higher costs compared to other polymers. Hysol's PMR series is an example of this polymer. Polyurethane (PUR) resins: Thermoset polyurethane prepolymers with carbamate (-NH-CO-O-) links are linear and elastomeric if formed by combining diisocyanates (OCN-R1-NCO) with long chain diols (HO-R2-OH), or crosslinked and rigid if formed from combinations of polyisocyanates and polyols. They can be solid or have an open cellular structure if foamed, and are widely used for their characteristic high adhesion and resistance to fatigue. Polyurethane foam structural cores combined with glass-reinforced or graphite-reinforced composite laminates are used to make lightweight, strong sandwich structures. All forms of the material, including flexible and rigid foams, foam moldings, solid elastomeric moldings and extrudates, have found commercial applications in thermoset polymer matrix composites when combined with various reinforcements and fillers. They differ from polyureas, which are thermoset elastomeric polymers with carbamide (-NH-CO-NH-) links made by combining diisocyanate monomers or prepolymers (OCN-R-NCO) with blends of long-chain amine-terminated polyether or polyester resins (H2N-RL-NH2) and short-chain diamine extenders (H2N-RS-NH2). Polyureas are characterised by near-instantaneous cure, high mechanical strength and resistance to corrosion, so they are widely used as 1:1 volume-mix-ratio, spray-applied, abrasion-resistant waterproofing protective coatings and linings. Silicone resins: Silicone resins are partly organic in nature, with a backbone polymer structure made of alternating silicon and oxygen atoms rather than the carbon-to-carbon backbone characteristic of organic polymers. In addition to having at least one oxygen atom bonded to each silicon atom, silicone resins have direct bonds to carbon and are therefore also known as polyorganosiloxanes. They have the general formula (R2SiO)n, and their physical form (liquid, gel, elastomer or solid) and use vary with molecular weight, structure (linear, branched, caged) and the nature of the substituent groups (R = alkyl, aryl, H, OH, alkoxy). Aryl-substituted silicone resins have greater thermal stability than alkyl-substituted silicone resins when polymerised (condensation cure mechanism) at temperatures between ~300 °F (~150 °C) and ~400 °F (~200 °C).
Heating above ~600 °F (~300 °C) converts all silicone polymers into ceramics, since all organic constituents pyrolytically decompose, leaving crystalline silicate polymers with the general formula (-SiO2-)n. In addition to applications as ceramic matrix composite precursors, silicone resins in the form of polysiloxane polymers made from silicone resins with pendant acrylate, vinyl ether or epoxy functionality find application in UV, electron beam and thermoset polymer matrix composites, where they are characterised by their resistance to oxidation, heat and ultraviolet degradation. Silicone resins: Assorted other uses in the general area of composites for silicones include sealants, coating materials, and use as a reusable bag material for vacuum-bag curing of composite parts. Vinyl ester resins: Vinyl ester resins are made by addition reactions between an epoxy resin and acrylic acid derivatives; when diluted/dissolved in a vinyl functional monomer such as styrene, they polymerise. The resulting thermosets are notable for their high adhesion, heat resistance and corrosion resistance. They are stronger than polyesters and more resistant to impact than epoxies. Vinyl ester resins are used for wet lay-up laminating, SMC and BMC in the manufacture and repair of corrosion- and heat-resistant components ranging from pipelines, vessels and buildings to transportation, marine, military and aerospace applications. Miscellaneous: Amino resins are another class of thermoset prepolymers, formed by copolymerisation of amines or amides with an aldehyde. Urea-formaldehyde and melamine-formaldehyde resins, although not widely used in high performance structural composite applications, are characteristically used as the polymer matrix in molding and extrusion compounds where some use of fillers and reinforcements occurs. Urea-formaldehyde resins are widely used as the matrix binder in construction utility products such as particle board, wafer board, and plywood, which are true particulate and laminar composite structures. Melamine-formaldehyde resins are used for plastic laminating. Furan resin prepolymers, made from furfuryl alcohol or by modification of furfural with phenol, formaldehyde (methanal), urea or other extenders, are similar to amino and phenolic thermosetting resins in that cure involves polycondensation and the release of water as well as heat. While they are generally cured under the influence of heat, catalysts and pressure, furan resins can also be formulated as dual-component no-bake acid-hardened systems which are characterised by high resistance to heat, acids and alkalies. Furan resins are of increasing interest for the manufacture of sustainable composites - biocomposites made from a bio-derived matrix (in this case furan resin), or biofibre reinforcement, or both. Advantages and disadvantages: Advantages:
Well established processing and application history
Overall, better economics than thermoplastic polymers
Better high temperature properties
Good wetting and adhesion to reinforcement
Disadvantages:
Resins and composite materials must be refrigerated
Moisture absorption and subsequent property degradation
Long process cycles
Reduced impact toughness
Poor recycling capabilities
More difficult repairability
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**7-Chloro-L-tryptophan oxidase** 7-Chloro-L-tryptophan oxidase: 7-Chloro-L-tryptophan oxidase (EC 1.4.3.23, RebO) is an enzyme with systematic name 7-chloro-L-tryptophan:oxygen oxidoreductase. This enzyme catalyses the following chemical reaction: 7-chloro-L-tryptophan + O2 ⇌ 2-imino-3-(7-chloroindol-3-yl)propanoate + H2O2. This enzyme contains a noncovalently bound FAD.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Random man not excluded** Random man not excluded: Random man not excluded (RMNE) is a type of measure in population genetics used to estimate the probability that an individual randomly picked out of the general population would not be excluded from matching a given piece of genetic data. RMNE is frequently employed in cases where other types of tests, such as the random match probability, are not possible because the sample in question is degraded or contaminated with DNA from multiple sources.
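A common way to compute an RMNE-style statistic (often reported as the combined probability of inclusion) is to sum, at each locus, the population frequencies of every allele observed in the mixture, square that sum, and multiply the per-locus results across loci. The sketch below assumes this standard formulation, Hardy-Weinberg proportions and independent loci; the allele frequencies are made up for illustration.

```python
from math import prod

def rmne(loci_allele_freqs):
    """Random-man-not-excluded probability (combined probability of inclusion).
    Each inner list holds the population frequencies of the alleles observed in
    the mixture at one locus; assumes Hardy-Weinberg proportions and independence."""
    return prod(sum(freqs) ** 2 for freqs in loci_allele_freqs)

# Made-up three-locus example:
observed = [
    [0.12, 0.08, 0.20],   # locus 1: alleles seen in the mixture sum to 0.40
    [0.25, 0.10],         # locus 2: sum 0.35
    [0.05, 0.15, 0.10],   # locus 3: sum 0.30
]
print(f"P(random person not excluded) = {rmne(observed):.6f}")  # 0.40^2 * 0.35^2 * 0.30^2
```

The smaller this product, the less likely it is that an unrelated person would fit the mixture by chance, which is why the statistic is usually reported alongside the matching result.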
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dark Energy Spectroscopic Instrument** Dark Energy Spectroscopic Instrument: The Dark Energy Spectroscopic Instrument (DESI) is a scientific research instrument for conducting spectrographic astronomical surveys of distant galaxies. Its main components are a focal plane containing 5,000 fiber-positioning robots and a bank of spectrographs fed by the fibers. The new instrument will enable an experiment to probe the expansion history of the universe and the mysterious physics of dark energy. The instrument is operated by the Lawrence Berkeley National Laboratory under funding from the US Department of Energy's Office of Science. Construction of the new instrument, now completed, was principally funded by the US Department of Energy's Office of Science, and by numerous other sources including the US National Science Foundation, the UK Science and Technology Facilities Council, France's Alternative Energies and Atomic Energy Commission, Mexico's National Council of Science and Technology, Spain's Ministry of Science and Innovation, the Gordon and Betty Moore Foundation, the Heising-Simons Foundation, and collaborating institutions worldwide. DESI sits at an elevation of 6,880 feet (2,100 m), where it has been retrofitted onto the Mayall Telescope on top of Kitt Peak in the Sonoran Desert, 55 miles (89 km) from Tucson, Arizona, US. Science goals: The expansion history and large-scale structure of the universe are key predictions of cosmological models, and DESI observations will permit scientists to probe diverse aspects of cosmology, from dark energy to alternatives to General Relativity to neutrino masses to the early universe. The data from DESI will be used to create three-dimensional maps of the distribution of matter covering an unprecedented volume of the universe with unparalleled detail. This will provide insight into the nature of dark energy and establish whether cosmic acceleration is due to a cosmic-scale modification of General Relativity. DESI will be transformative in the understanding of dark energy and the expansion rate of the universe at early times, one of the greatest mysteries in our understanding of physical law. Science goals: DESI will measure the expansion history of the universe using the baryon acoustic oscillations (BAO) imprinted in the clustering of galaxies, quasars, and the intergalactic medium. The BAO technique is a robust way to extract cosmological distance information from the clustering of matter and galaxies. It relies only on very large-scale structure and it does so in a manner that enables scientists to separate the acoustic peak of the BAO signature from uncertainties in most systematic errors in the data. BAO was identified in the 2006 Dark Energy Task Force report as one of the key methods for studying dark energy. In May 2014, the High-Energy Physics Advisory Panel, a federal advisory committee commissioned by the US Department of Energy (DOE) and the National Science Foundation (NSF), endorsed DESI. 3D map of the universe: The baryon acoustic oscillations method requires a three-dimensional map of distant galaxies and quasars created from the angular and redshift information of a large statistical sample of cosmologically distant objects. By obtaining spectra of distant galaxies it is possible to determine their distance, via the measurement of their spectroscopic redshift, and thus create a 3-D map of the universe.
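The redshift measurement underlying this map is conceptually simple: a spectral feature with rest-frame wavelength λ_rest observed at wavelength λ_obs gives z = λ_obs/λ_rest − 1, and converting z into a distance then requires a cosmological model. A minimal sketch with illustrative numbers (the H-alpha rest wavelength is a standard value; the observed wavelength is made up, chosen to fall inside the 360-980 nm range quoted below for DESI's spectrographs):

```python
# Spectroscopic redshift from an identified emission line (illustrative values).

def redshift(observed_nm, rest_nm):
    """z = lambda_observed / lambda_rest - 1."""
    return observed_nm / rest_nm - 1.0

H_ALPHA_REST_NM = 656.3   # rest wavelength of the H-alpha line

# A galaxy whose H-alpha line lands at 820 nm on the detector:
z = redshift(820.0, H_ALPHA_REST_NM)
print(f"z = {z:.3f}")     # about 0.249
```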
The 3-D map of the large-scale structure of the universe also contains more information about dark energy than just the BAO and is sensitive to the mass of the neutrino and parameters that governed the primordial universe. During its five-year survey, which began on May 15, 2021, the DESI experiment is expected to observe 40 million galaxies and quasars. Development: The DESI instrument implements a new highly multiplexed optical spectrograph on the Mayall Telescope. The new optical corrector design creates a very large, 8.0 square degree field of view on the sky, which, combined with the new focal plane instrumentation, weighs approximately 10 tonnes. The focal plane accommodates 5,000 small, computer-controlled fiber positioners on a 10.4 millimeter pitch. The entire focal plane can be reconfigured for the next exposure in less than two minutes while the telescope slews to the next field. The DESI instrument is capable of taking 5,000 simultaneous spectra over a wavelength range from 360 nm to 980 nm. The DESI project scope included construction, installation, and commissioning of the new wide-field corrector and corrector support structure for the telescope, the focal plane assembly with 5,000 robotic fiber positioners and ten guide/focus/alignment sensors, a 40-meter optical fiber cabling system that brings light from the focal plane to the spectrographs, ten 3-arm spectrographs, an instrument control system, and a data analysis pipeline. Development: Instrument fabrication was managed by the Lawrence Berkeley National Laboratory, which also oversees operation of the experiment, including a 600-person international scientific collaboration. Cost of construction was $56M from the US Department of Energy's Office of Science plus an additional $19M from other non-federal sources including contributions in-kind. The leadership of DESI currently consists of the director, Dr. Michael E. Levi, collaboration co-spokespersons Prof. Kyle Dawson and Dr. Nathalie Palanque-Delabrouille, project scientists Dr. David J. Schlegel and Dr. Julien Guy, project manager Dr. Patrick Jelinsky, and instrument scientists Prof. Klaus Honscheid and Prof. Constance Rockosi. Past collaboration spokespersons have been Prof. Daniel Eisenstein and Prof. Risa Wechsler. Development: DOE approved CD-0 (Mission Need) on September 18, 2012, approved CD-1 (Alternative Selection and Cost Range) on March 19, 2015, and CD-2 (Performance Baseline) on September 17, 2015. Congressional approval for the start of DESI as a new Major Item of Equipment was provided in the FY15 Energy & Water appropriations legislation. Construction on the new instrument started June 22, 2016 with CD-3 (Start Construction) approval; the instrument was largely assembled by 2019, with commissioning finishing on March 21, 2020, in advance of the pandemic and marking the formal end of the project (CD-4). DESI was completed under budget by $1.9M and 17 months ahead of schedule. As a consequence, the project received the DOE Project Management Excellence Award for 2020. After a pause for the pandemic and a transition to remote operations, DESI returned to survey operations in December 2020 with a final checkout and validation phase prior to starting its planned five-year survey. The five-year survey began on May 14, 2021. DESI was shut down for three months in the summer of 2022 due to the Contreras fire, which engulfed Kitt Peak. DESI was undamaged and is acquiring scientific data.
DESI Legacy Imaging Surveys: To provide targets for the DESI survey, three telescopes surveyed the northern and part of the southern sky in the g, r and z bands. Those surveys were the Beijing-Arizona Sky Survey (BASS), using the Bok 2.3-m telescope, the Dark Energy Camera Legacy Survey (DECaLS), using the Blanco 4m telescope, and the Mayall z-band Legacy Survey (MzLS), using the 4-meter Mayall telescope. The area of the surveys is 14,000 square degrees (about one third of the sky) and avoids the Milky Way. These surveys were combined into the DESI Legacy Imaging Surveys, or Legacy Surveys. Colored images of the survey can be viewed in the Legacy Survey Sky Browser. The legacy survey covers 16,000 square degrees of the night sky containing 1.6 billion objects, including galaxies and quasars out to 11 billion years ago. History: DESI received a go-ahead to start R&D for the project in December 2012 with the assignment of the Lawrence Berkeley National Laboratory as the managing laboratory. Dr. Michael Levi, a senior scientist at the Lawrence Berkeley National Laboratory, was appointed by the laboratory as DESI's project director; he served in that role from 2012 and throughout construction. Henry Heetderks was project manager from 2013 until 2016, and Robert Besuner was project manager from 2016 until 2020. Congressional authorization was provided in 2015, and the US Department of Energy's Office of Science approved the start of physical construction in June 2016. First light of the new corrector system was obtained on the night of April 1, 2019, and first light of the entire instrument was achieved on the night of October 22, 2019. Commissioning ensued after first light and was completed in March 2020, then paused during the pandemic in 2020. DESI started its 5-year main scientific survey on May 14, 2021. DESI is currently operating normally after surviving the Contreras fire in 2022. Data Releases: The main landing page to access the DESI data can be found here: https://data.desi.lbl.gov/doc/. All of the publicly available data, including redshift catalogs, added-value catalogs, and documentation, can be accessed through this portal. Individuals with accounts at the National Energy Research Scientific Computing Center (NERSC) can access the entire public portion of the DESI data. DESI catalogs also exist in a database format. For convenience, a copy of the public databases is also hosted by the NOIRLab Astro Data Lab science platform, and spectra can be retrieved using the SPectral Analysis and Retrievable Catalog Lab (SPARCL). One easy way to access DESI spectra online is to use the legacy viewer at https://www.legacysurvey.org/. Users have to check the box for DESI spectra and click on an encircled galaxy or star for a link to the DESI Spectral Viewer to appear, where the spectrum can be explored. Data Releases: Early Data Release On 13 June 2023, the DESI Early Data Release (EDR) was announced. The EDR contains spectra of nearly two million galaxies, quasars and stars. One early result of the EDR was announced in February 2023 and described a mass migration of stars into the Andromeda Galaxy. The EDR also revealed very distant quasars and very metal-poor stars.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dientamoebiasis** Dientamoebiasis: Dientamoebiasis is a medical condition caused by infection with Dientamoeba fragilis, a single-cell parasite that infects the lower gastrointestinal tract of humans. It is an important cause of traveler's diarrhea, chronic abdominal pain, chronic fatigue, and failure to thrive in children. Signs and symptoms: The most commonly reported symptoms in conjunction with infection with D. fragilis include abdominal pain (69%) and diarrhea (61%). Diarrhea may be intermittent and may not be present in all cases. It is often chronic, lasting over two weeks. The degree of symptoms may vary from asymptomatic to severe, and can include weight loss, vomiting, fever, and involvement of other digestive organs. Signs and symptoms: Symptoms may be more severe in children. Additional symptoms reported have included weight loss, fatigue, nausea and vomiting, fever, urticaria (skin rash), pruritus (itchiness), and biliary infection. Cause: Genetic diversity As many individuals are asymptomatic carriers of D. fragilis, pathogenic and nonpathogenic variants are proposed to exist. A study of D. fragilis isolates from 60 individuals with symptomatic infection in Sydney, Australia, found all were infected with the same genotype, which is the most common worldwide, but differed from the genotype first described from a North American isolate and later also detected in Europe. Cause: Transmission Organisms similar to D. fragilis are known to produce a cyst stage that is able to survive outside the host and facilitate infection of new hosts. However, the exact manner in which it is transmitted is not yet known, as the organism is unable to survive outside its human host for more than a few hours after excretion, and no cyst stage has been found. Early theories of transmission suggested D. fragilis was unable to produce a cyst stage in infected humans, but that some animal existed in which it did produce a cyst stage, and that this animal was responsible for spreading it. However, no such animal has ever been discovered. A later theory suggested the organism was transmitted by pinworms, which provided protection for the parasite outside the host. Its DNA has been detected in surface-sterilized eggs of Enterobius vermicularis, thus suggesting the latter may harbor the former. Experimental ingestion of pinworm eggs established infection in two investigators. Numerous studies reported high rates of coinfection with helminths. However, a recent study has failed to show any association between D. fragilis infection and pinworm infection. Parasites similar to D. fragilis are transmitted by consuming water or food contaminated with feces. The high rate (40%) of concomitant infection with other protozoa reported at St. Vincent's Hospital, Sydney, Australia, supports the oral-fecal route of transmission. Diagnosis: Diagnosis is usually performed by submitting multiple stool samples for examination by a parasitologist in a procedure known as an ova and parasite examination. About 30% of children with D. fragilis infection exhibit peripheral blood eosinophilia. A minimum of three stool specimens, immediately fixed in polyvinyl alcohol fixative, sodium acetate-acetic acid-formalin fixative, or Schaudinn's fixative, should be submitted, as the protozoan does not remain morphologically identifiable for long. All specimens, regardless of consistency, are permanently stained prior to microscopic examination with an oil immersion lens.
The disease may remain cryptic due to the lack of a cyst stage if these recommendations are not followed. The trophozoite forms have been recovered from formed stool, hence the need to perform the ova and parasite examination on specimens other than liquid or soft stools. DNA fragment analysis provides excellent sensitivity and specificity when compared to microscopy for the detection of D. fragilis, and both methods should be employed in laboratories with PCR capability. The most sensitive detection method is parasite culture, and the culture medium requires the addition of rice starch. An indirect fluorescent antibody (IFA) for fixed stool specimens has been developed. Diagnosis: One researcher investigated the phenomenon of symptomatic relapse following treatment of infection with D. fragilis in association with its apparent disappearance from stool samples. The organism could still be detected in patients through colonoscopy or by examining stool samples taken in conjunction with a saline laxative. A study found that trichrome staining, a traditional method for identification, had a sensitivity of 36% (9/25) when compared to stool culture. An additional study found that the sensitivity of staining was 50% (2/4), and that the organism could be successfully cultured in stool specimens up to 12 hours old that were kept at room temperature. Treatment: Concomitant pinworm infection should also be excluded, although the association has not been proven. Successful treatment of the infection with iodoquinol, doxycycline, metronidazole, paromomycin, and secnidazole has been reported. Resistance requires the use of combination therapy to eradicate the organism. All persons living in the same residence should be screened for D. fragilis, as asymptomatic carriers may provide a source of repeated infection. Paromomycin is an effective prophylactic for travellers who will encounter poor sanitation and unsafe drinking water. Epidemiology: Rates of infection increase in conditions of crowding and poor sanitation, and are higher in military personnel and in mental institutions. Epidemiology: The true extent of disease has yet to emerge, as most laboratories do not use techniques to adequately identify this organism. An Australian study identified a large number of patients, considered to have irritable bowel syndrome, who were actually infected with Dientamoeba fragilis. Although D. fragilis has been described as an infection "emerging from obscurity", it has become one of the most prevalent gastrointestinal infections in industrialized countries, especially among children and young adults. A Canadian study reported a prevalence of around 10% in boys and girls aged 11–15 years, a prevalence of 11.5% in individuals aged 16–20, and a lower incidence of 0.3–1.9% in individuals over age 20. History: Early microbiologists reported that the organism was not pathogenic, though six of the seven individuals from whom they isolated it were experiencing symptoms of dysentery. Their report, published in 1918, concluded the organism was not pathogenic because it consumed bacteria in culture, but did not appear to engulf red blood cells, as was seen in the best-known disease-causing amoeba of the time, Entamoeba histolytica. This initial report may still be contributing to the reluctance of physicians to diagnose the infection.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bush regeneration** Bush regeneration: Bush regeneration, a form of natural area restoration, is the term used in Australia for the ecological restoration of remnant vegetation areas, such as through the minimisation of negative disturbances, both exogenous such as exotic weeds and endogenous such as erosion. It may also attempt to recreate conditions of pre-European arrival, for example by simulating endogenous disturbances such as fire. Bush regeneration attempts to protect and enhance the floral biodiversity in an area by providing conditions conducive to the recruitment and survival of native plants. History: Bradley method In the early 1960s Joan and Eileen Bradley developed a series of weed control techniques through a process of trial and error. Their work was the beginning of minimal disturbance bush regeneration in New South Wales. The Bradley method urges a naturalistic approach by encouraging the native vegetation to self-reestablish. The Bradleys used their method to successfully clear weeds from a 16-hectare (40-acre) reserve in Ashton Park, part of Sydney Harbour National Park, NSW. The process demonstrated that, following a period of consecutive 'follow up' treatments of diminishing time requirement, subsequent maintenance, mainly in vulnerable spots such as creek banks, roadsides, and clearings, was needed only once or twice a year to keep the area weed-free. History: The aim of their work was to clear small niches adjacent to healthy native vegetation such that each area will regenerate from in-situ soil seed banks or be re-colonised and stabilised by the regeneration of native plants, replacing an area previously occupied by weeds. The Bradley method follows three main principles: secure the best areas first; minimise disturbance to the natural conditions (e.g. minimise soil disturbance and off-target damage); and don't overclear, letting the regenerative ability of the bush set the pace of clearance (Bradley 1988). History: The priority securing of the best quality vegetation aids in preserving areas of top biodiversity, which provide regeneration potential to expand these areas and reclaim areas as bushland. History: Modern practice The adoption of minimal disturbance bush regeneration increased in the decades that followed the work of the Bradleys. Their principles have guided bushcare programs in Australia, although the inclusion of herbicide in modern bush regeneration is a notable deviation from the ideals of the Bradley sisters. In addition, rather than 'minimal disturbance', a more favoured and ecologically sound trend since the 1990s has been towards more 'appropriate disturbance', as many Australian plant communities require some level of perturbation to trigger germination from long-buried seed banks. This has led to a range of additional disturbance-based techniques (such as burns and soil disturbance) being included in the regenerator's 'tool kit' in dry forest and grassland areas. Field experience has found that, even in rainforest areas, a resilience to disturbance is evident, enabling regenerators to clear weeds in a fairly extensive manner to trigger rainforest recovery. This is borne out by a thriving rainforest regeneration industry in northern NSW, Australia, modelled on the pioneering work of John Stockard at Wingham Brush (Stockard 1991, Stockard 1999). History: The rule of thumb in all cases is to constrain clearing to an area that matches the project's follow-up resources.
History: The increased awareness and consideration of Australia's biodiversity by citizens has incrementally increased pressure on local governments to adopt conservation programs for remnant vegetation on council land. Most peri-urban councils now have some involvement in bush regeneration, either through planning, land management or volunteer support, or through the employment of bush regeneration practitioners. In NSW the level of coordination of bush regeneration programs through local governments is high, although in some other areas at present a lack of coordination is a serious concern in bush regeneration on public land, with only 40% of councils liaising with other councils. In such areas there may be a need for strategic management at a regional scale through Natural Resource Management Boards or non-government organisations such as Trees For Life, which are involved in bushcare programs across wider areas. There is increasing interest in using species traits, and the grouping of species by their traits into functional types, both to predict plant community responses to environmental change and to address hypotheses about the mechanisms underlying these responses. Purposes: The aim of bush regeneration, also known as 'natural area restoration', is to restore and maintain ecosystem health by facilitating the natural regeneration of indigenous flora; this is usually achieved by selectively reducing competitive interactions with invasive species, or by mitigating negative influences such as weeds or erosion. See also Albert Morris. Purposes: Invasive plant species are often the greatest threat to remnant vegetation, and therefore bush regeneration is closely associated with weed abatement activities. Weed management, as one aim of bush regeneration, is used to increase native plant recruitment. The management of factors such as fire and herbivory can be just as important, depending on the ecosystem being restored. In recent years, research and on-ground management have begun to recognize the importance of ecosystem processes rather than just ecosystem composition and structure, and research into other ways of facilitating native plant recruitment is increasing. Technique: The original Bradley method of bush regeneration focuses on facilitating native plant recruitment from the seedbank, rather than planting seedlings or sowing seeds, as follows: Weeding a little at a time from the bush towards the weeds takes the pressure off the natives under favourable conditions. Native seeds and spores are ready in the ground and the natural environment favours plants that have evolved in it. The balance is tipped back towards regeneration. Keep it that way, by always working where the strongest area of bush meets the weakest weeds. Technique: Currently the term 'bush regeneration' includes activities other than weed removal, such as replanting and introducing species into an area where soil, water, or fire regimes have shifted the type of plant appropriate to the area (e.g. a stormwater drain). Weed species can be important habitat for native fauna (e.g. blackberry is important habitat for wrens and the southern brown bandicoot), and this should be taken into consideration with bush regeneration, for example by not clearing invasive species until adequate habitat alternatives have been established nearby with native vegetation.
Technique: Problems can occur when insufficient follow-up is conducted as the success of bush regeneration is dependent on allowing the native vegetation to regenerate in the area where weeds have been removed. List of bushcare groups: Organisations offering community training in bush regeneration:
Trees For Life
Campbelltown Council, NSW, Streamcare Group
San Diego Chapter, California Native Plant Society
Wild Mountains Trust, Queensland, Australia
Reserves where volunteer groups undertake bush regeneration:
George Kendall Riverside Park
Whites Creek (Annandale)
Puckeys Estate Reserve
Mermaid Pool, Manly Vale, Sydney
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fluotracen** Fluotracen: Fluotracen (SKF-28,175) is a tricyclic drug which has both antidepressant and antipsychotic activity. This profile of effects is similar to that of related agents like amoxapine, loxapine, and trimipramine which may also be used in the treatment of both depression and psychosis. It was believed that such duality would be advantageous in the treatment of schizophrenia, as depression is often comorbid with the disorder and usual antipsychotics often worsen such symptoms. In any case, however, fluotracen was never marketed.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chrome Engine** Chrome Engine: Chrome Engine is a proprietary 3D game engine developed by Techland. The current version, Chrome Engine 6, supports Mac OS X, Linux, Xbox One, PlayStation 4, Xbox 360, PlayStation 3 and Microsoft Windows. Chrome Engine evolved through over nine years of development. According to its creators the engine allows substantial control over the process of creating game levels. Versions:
Chrome Engine 1: First release of the engine, used in Chrome.
Chrome Engine 2: Improved version of the engine, enhanced with support for DirectX 9.0.
Chrome Engine 3: This version of the engine underwent significant modifications; DirectX 9.0c and DirectX 10 support, HDR, shaders and bump mapping were implemented.
Chrome Engine 4: The fourth iteration of the Chrome Engine, introduced with Call of Juarez: Bound in Blood. Supports DirectX 9 only.
Chrome Engine 5: This version debuted with Call of Juarez: The Cartel, Call of Juarez: Gunslinger and Dead Island. It was primarily used between 2011 and 2013.
Chrome Engine 6: Version used since 2013 to develop Dying Light and the Dying Light DLC Hellraid.
Games using Chrome Engine:
Chrome Engine 1: Pet Racer (2002), FIM Speedway Grand Prix (2003), Chrome (2003), Chrome: SpecForce (2005), Crazy Soccer Mundial (2006)
Chrome Engine 2: Xpand Rally (2004), Xpand Rally Xtreme (2006), Terrorist Takedown: War in Colombia (2006), Terrorist Takedown: Covert Ops (2006), GTI Racing (2006), FIM Speedway Grand Prix 2 (2006), Expedition Trophy: Murmansk Vladivostok (2006), UAZ 4X4 Racing (2007), Full Drive: UAZ 4x4 – Ural Appeal (2007), Classic Car Racing (2007), Code of Honor: The French Foreign Legion (2007), Full Drive 2: UAZ 4x4 (2008), 4x4: Hummer (2008), Full Drive 2: Daurian Marathon (2008), Full Drive 2: Siberian Appeal (2008), Battlestrike: Force of Resistance (2008), Sniper: Art of Victory (2008), Nikita Tajemnica Skarbu Piratów (2008), GM Rally (2009), KrAZ (2010), Full Drive 2: Trophy Murmansk - Vladivostok 2 (2010), Warhound (project suspended)
Chrome Engine 3: Call of Juarez (2006), FIM Speedway Grand Prix 3 (2008), Speedway Liga (2009), FIM Speedway Grand Prix 4 (2011)
Chrome Engine 4: Call of Juarez: Bound in Blood (2009), Sniper: Ghost Warrior (2010), Nail'd (2010), Mad Riders (2012)
Chrome Engine 5: Call of Juarez: The Cartel (2011), Dead Island (2011), Dead Island: Riptide (2013), Call of Juarez: Gunslinger (2013)
Chrome Engine 6: Dying Light (2015), Dead Island Definitive Edition (2016), Dead Island: Riptide Definitive Edition (2016), Hellraid (2020)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Volkswagen ID. Vizzion** Volkswagen ID. Vizzion: The Volkswagen ID. Vizzion is a concept electric vehicle, part of Volkswagen's ID. project. The firm is planning to sell a production version of the vehicle by 2022. Some of the claims made for the Vizzion Concept, such as level 5 autonomous driving, will very likely not make it into the final 2022 production version of the vehicle. The Vizzion Concept is more likely to represent what driving in the year 2030 could be like. Technical statistics: The Vizzion Concept has an all-wheel drive system, powered by electric motors on the front and back of the vehicle for a combined total of 302 hp. The concept has a 111-kWh battery, giving the Vizzion a range of 300–400 miles (480–640 km). The car would be equipped with sensors that aid in the user experience (such as a 360 degree camera and a facial recognition system) as well as allow the car to drive autonomously. The headlights include 8,000 HD Matrix LEDs, allowing the car to "communicate with the outside world by projecting the image of crosswalk lines in front of the I.D. VIZZION to let pedestrians know they can safely pass in front." Design: The Vizzion Concept's design was aimed at eliminating any seams or obstructions for passengers. It features rear-opening doors and windows that lie flush with the bodywork of the car, thus giving the car a futuristic look. The vehicle also features several LED strips that surround the car, highlighting the front grille, the door handles, the rear fenders and the sides. The lights on the car start pulsating to let the user know that the car has detected one of its passengers' faces using integrated sensors. The interior features only two analog dials: they are multi-functional, but are primarily for adjusting the stereo. The majority of car functions are controlled through hand gestures via the heads-up display, and through voice controls. The car features no steering wheel or dashboard, as the car is level 5 autonomous. Volkswagen claims that the technology for the car is "much closer than you think", either already existing or under development by Volkswagen. Vizzion production vehicle (2023-2024): Little is known about the details and design of the production version of the vehicle. The majority of information comes from official Volkswagen statements. Volkswagen says that they are aiming to start production of a toned-down, more production-ready version of the Vizzion in 2022, and that it will not include level 5 autonomous driving. Volkswagen also confirms that they are following through with the 302 hp, 111-kWh battery, and 400 mile range figures of their original concept. In popular culture: In the 2023 French animated film Ladybug & Cat Noir: The Movie, Gabriel Agreste drives an ID. Vizzion, following a deal between Volkswagen and the French company ZAG Inc.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**World English-Language Scrabble Players' Association** World English-Language Scrabble Players' Association: The World English-Language Scrabble Players' Association (WESPA) is the overarching global body for English-language national Scrabble associations and similar entities. Formation: WESPA was formed in the course of a players' meeting at the 2003 World Scrabble Championship in Kuala Lumpur, Malaysia, and formally constituted on 17 November 2005 at its first Biennial General Meeting held in London. BGMs are now held at each World Scrabble Championship (taking place every odd year), and there are currently 24 member organisations. Formation: WESPA was created to represent the interests of international Scrabble competitors and national bodies worldwide. Its main functions are to promote global recognition of Scrabble as a serious competitive activity; to provide for the benefit of members in pursuing the game; to further the best interests of Scrabble and international tournament players; to represent such players in dealings with other bodies, including the trademark owners of the game; to promulgate and encourage international convergence towards common standards and norms (including international rules, word lists and ratings); to organise global competitions and events; to publish relevant material; and to maintain a website for the benefit of the game and its players. Formation: The trademark for Scrabble is owned by Hasbro in North America and Mattel in the rest of the world. Achievements to date have included the preparation of a set of international rules, the publication of a WESPA-endorsed word list which is valid for international tournament play, and the promulgation of international Scrabble ratings. Rules: Following a six-month worldwide consultation process, the first version of WESPA's rules for global Scrabble was released in August 2009. The rules were based on key elements of existing rules that were in force in various national associations, which were synthesised to create global best practice. Following feedback from their use in various international tournaments, extensive reviews took place prior to the issue of version two in November 2010, which was used at the 2011 World Scrabble Championship in Warsaw, Poland, and other international events such as the Causeway Challenge. Version three was released in 2015. Rules: A number of national associations have adopted WESPA rules for domestic use, including the Association of British Scrabble Players. Word list: The word source currently in use for international play, known as Collins Scrabble Words or CSW (formerly Official Scrabble Words or OSW) is not derived from a single dictionary, but combines three components: Collins (7th edition, 2005), Chambers (1998 edition) and TWL, the current Northern American wordlist. TWL (Tournament Word List) is a subset of CSW, but is itself drawn from a range of sources, mostly different editions of Webster's. North American tournaments generally use TWL alone for domestic play, but all tournaments under the auspices of WESPA must use CSW. Word list: The current word list (CSW19) came into force internationally on 1 July 2019, including updates from the most recent editions of original OSW sources as well as eliminating some inconsistencies in previous editions. CSW19 contains over 275,000 words in total (up to the theoretical maximum fifteen letters in length), of which around 119,000 are up to eight letters in length. 
By contrast, TWL holds approximately two thirds as many words in total. CSW is therefore substantially larger than TWL and has a more international flavour, including a number of local or dialect words from around the world. Ratings: International Scrabble ratings have been maintained since their inception by Bob Jackman in Sydney, Australia, using a variant of the Australian ratings calculation system. All tournaments played to WESPA rules are able to be rated within the system, and regular updates are posted on the WESPA website. The first rated tournament in the system was the 1993 World Scrabble Championship, which was held in New York. Tournaments: WESPA has established the criteria for running international Scrabble tournaments, and it is now mandatory for WESPA rules and the WESPA word list to be used. Tournament organisers are also required to pay a ratings levy on a fee per player basis, in order for the tournament to qualify for international rating. Tournaments: The tournament committee oversees the calendar of international Scrabble events, and has also liaised with Mattel on a number of issues relating to the World Scrabble Championship (WSC), including the number of games to be played and the format of the finals. Furthermore, a country's allocated number of representatives may fluctuate for each WSC depending on the team performance in the previous event, and WESPA has ratified the formula for changes to team allocations. In short, after each event the participating countries are ranked according to the average individual finishing rank of their players; teams falling in the top half may be entitled to gain a player for next time, while teams in the bottom half may lose a player. Tournaments: Following the establishment of Mind Sports International, the title 'World Scrabble Championship' was claimed by that entity for official world titles (sanctioned by Mattel under licence). WESPA has undertaken to host biennial world events from 2015 onwards using the old WSC format, with teams qualifying from national organisations. The WESPA Championship 2015 was hosted in Perth, Australia, the 2017 edition took place in Nairobi, Kenya, and the 2019 edition in Goa, India. Youth Scrabble: The World Youth Scrabble Championships, renamed WESPA Youth Cup for 2017 further to a naming issue with Mindsports Academy, have been organised annually since 2006 under the aegis of WESPA for players under 18 on qualification date. The Youth committee has also been active in promoting the game among young players worldwide, including training workshops held in various locations. Committee: The WESPA committee is made up of nominees representing member nations, and the current Chair is Elie Dangoor (UK). Various subcommittees are charged with matters such as promotions and communications, ratings and tournaments, rules, dictionary and youth. The first Chair of WESPA was Allan Simmons (UK) who led the organisation from inception through to 2008. He was succeeded by Bahrain-based Roy Kietzmann, who died in 2009 prior to the appointment of Elie Dangoor. In 2019, Dangoor resigned and was replaced by Chris Lipe. Other federations: WESPA has committee links with the two other major world Scrabble federations, FISF (Fédération Internationale de Scrabble Francophone) and FISE (Federación Internacional de Scrabble en Español). Mailing list: The world-scrabble mailing list discusses international Scrabble matters and WESPA issues.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**McKay–Miller–Širáň graph** McKay–Miller–Širáň graph: In graph theory, the McKay–Miller–Širáň graphs are an infinite class of vertex-transitive graphs with diameter two, and with a large number of vertices relative to their diameter and degree. They are named after Brendan McKay, Mirka Miller, and Jozef Širáň, who first constructed them using voltage graphs in 1998. Background: The context for the construction of these graphs is the degree diameter problem in graph theory, which seeks the largest possible graph for each combination of degree and diameter. For graphs of diameter two, every vertex can be reached in two steps from an arbitrary starting vertex, and if the degree is d then at most d vertices can be reached in one step and another d(d−1) in two steps, giving the Moore bound that the total number of vertices can be at most d² + 1. However, only four graphs are known to reach this bound: a single edge (degree one), a 5-vertex cycle graph (degree two), the Petersen graph (degree three), and the Hoffman–Singleton graph (degree seven). Only one more of these Moore graphs can exist, with degree 57. For all other degrees, the maximum number of vertices in a diameter-two graph must be smaller. Background: Until the construction of the McKay–Miller–Širáň graphs, the only known construction achieved a number of vertices equal to ⌊(d+2)/2⌋ · ⌈(d+2)/2⌉, using a Cayley graph construction. The McKay–Miller–Širáň graphs, instead, have a number of vertices equal to (8/9)(d + 1/2)², for infinitely many values of d. The degrees d for which their construction works are the ones for which (2d+1)/3 is a prime power and is congruent to 1 modulo 4. These possible degrees are the numbers 7, 13, 19, 25, 37, 43, 55, 61, 73, 79, 91, ... The first number in this sequence, 7, is the degree of the Hoffman–Singleton graph, and the McKay–Miller–Širáň graph of degree seven is the Hoffman–Singleton graph. The same construction can also be applied to degrees d for which (2d+1)/3 is a prime power but is 0 or −1 mod 4. In these cases, it still produces a graph with the same formulas for its size, diameter, and degree, but these graphs are not in general vertex-transitive. Subsequent to the construction of the McKay–Miller–Širáň graphs, other graphs with an even larger number of vertices, O(d^(3/2)) fewer than the Moore bound, were constructed. However, these cover a significantly more restricted set of degrees than the McKay–Miller–Širáň graphs. Constructions: The original construction of McKay, Miller, and Širáň used the voltage graph method to construct them as a covering graph of the graph K*_{q,q}, where q = (2d+1)/3 is a prime power and where K*_{q,q} is formed from a complete bipartite graph K_{q,q} by attaching (q−1)/4 self-loops to each vertex. Constructions: A simpler construction, due to Šiagiová (2001), again uses the voltage graph method, but applied to a dipole graph with q parallel edges, modified in the same way by attaching the same number of self-loops to each vertex. It is also possible to construct the McKay–Miller–Širáň graphs by modifying the Levi graph of an affine plane over the field of order q. Additional properties: The spectrum of a McKay–Miller–Širáň graph has at most five distinct eigenvalues. In some of these graphs, all of these values are integers, so that the graph is an integral graph.
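The degree and vertex-count formulas above can be checked numerically. The following Python sketch is not part of the original construction; the function names and the cutoff value are illustrative. It enumerates the degrees d up to 100 for which q = (2d+1)/3 is a prime power congruent to 1 modulo 4 and prints the order (8/9)(d + 1/2)² of the corresponding McKay–Miller–Širáň graph, which equals 2q² and is therefore always an integer.

```python
def is_prime_power(n: int) -> bool:
    """Return True if n = p^k for some prime p and integer k >= 1."""
    if n < 2:
        return False
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            # p is the smallest divisor of n, hence prime; check n is a pure power of p
            while n % p == 0:
                n //= p
            return n == 1
    return True  # n itself is prime


def mms_parameters(max_degree: int):
    """Yield (degree, number of vertices) for McKay-Miller-Siran graphs, i.e.
    degrees d with q = (2d+1)/3 a prime power congruent to 1 mod 4.
    The vertex count (8/9)*(d + 1/2)^2 equals 2*q^2, so it is always an integer."""
    for d in range(1, max_degree + 1):
        if (2 * d + 1) % 3 != 0:
            continue
        q = (2 * d + 1) // 3
        if q % 4 == 1 and is_prime_power(q):
            yield d, 2 * q * q


if __name__ == "__main__":
    for d, n in mms_parameters(100):
        print(f"degree {d:3d}: {n:5d} vertices (Moore bound {d * d + 1})")
```

For d = 7 the output matches the Moore bound itself (50 vertices, the Hoffman–Singleton graph); for larger degrees the count stays below, but close to, d² + 1.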
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sunset (color)** Sunset (color): The color sunset is a pale tint of pink. It is a representation of the average color of clouds when the sunlight from a sunset is reflected from them. The first recorded use of sunset as a color name in English was in 1916. Variations of sunset: Sunglow The first recorded use of sunglow as a color name in English was in 1924. The Crayola crayon color was formulated in 1990. Sunray The first recorded use of sunray as a color name in English was in 1926. Sunset orange Sunset orange was formulated as a Crayola color in 1997. Sun colors in human culture: Interior Design Sunset is a popular color in interior design, used when a pale warm tint is desired. Sports Sunset orange is used on the NBA's Oklahoma City Thunder alternative jerseys introduced in the 2015-16 season. They are primarily worn on Sunday matchups.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Propionic anhydride** Propionic anhydride: Propionic anhydride is an organic compound with the formula (CH3CH2CO)2O. This simple acid anhydride is a colourless liquid. It is a widely used reagent in organic synthesis as well as for producing specialty derivatives of cellulose. Synthesis: The industrial route to propionic anhydride involves thermal dehydration of propionic acid, driving off the water by distillation: 2 CH3CH2CO2H → (CH3CH2CO)2O + H2O. Another route is the Reppe carbonylation of ethylene with propionic acid and nickel carbonyl as the catalyst: CH2=CH2 + CH3CH2CO2H + CO → (CH3CH2CO)2O. Propionic anhydride has also been prepared by dehydration of propionic acid using ketene: 2 CH3CH2CO2H + CH2=C=O → (CH3CH2CO)2O + CH3CO2H. Safety: Propanoic anhydride is strong smelling and corrosive, and will cause burns on contact with skin. Vapour can burn eyes and lungs. Legal Status: Due to its potential use as a precursor in the synthesis of fentanyl and fentanyl analogs, propanoic anhydride is regulated by the United States Drug Enforcement Administration as a List I chemical under the Controlled Substances Act.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ming-Ming Zhou** Ming-Ming Zhou: Ming-Ming Zhou is an American scientist who focuses on structural and chemical biology, NMR spectroscopy, and drug design. He is the Dr. Harold, Golden Lamport Professor, and Chairman of the Department of Pharmacological Sciences. He is also the Co-Director of the Drug Discovery Institute at the Icahn School of Medicine at Mount Sinai and Mount Sinai Health System in New York City, as well as Professor of Sciences.Zhou has published more than 180 research articles and is an inventor of 28 patents. His research has been funded by grants from federal, state and private research foundations including: the National Institutes of Health, the National Science Foundation, the New York State Stem Cell Science, the Institute for the Study of Aging, the American Foundation for AIDS Research, the American Cancer Society, GlaxoSmithKline, the Michael J. Fox Foundation, the Samuel Waxman Cancer Research Foundation, and the Wellcome Trust. He serves on the board of directors at the New York Structural Biology Center, as well as on the editorial boards of ACS Medicinal Chemistry Letters, the Journal of Molecular Cell Biology and Cancer Research. Zhou received the GlaxoSmithKline Drug Discovery Research Award in 2003 for his work in anti-HIV/AIDS therapy development, and the Jacobi Medallion in 2019. He is also an elected fellow of the American Association for the Advancement of Science (2012). Biography: Zhou earned his B.E. in chemical engineering from the East China University of Science and Technology (Shanghai, PRC) in 1984. He earned his M.S. in chemistry from the Michigan Technological University in 1988 and a Ph.D. in chemistry from Purdue University in Indiana in 1993. He completed his postdoctoral fellowship at Abbott Laboratories in Chicago, Illinois, then joined the faculty of the Mount Sinai Medical School in 1997.Zhou’s research is directed at better understanding of the biology of epigenetic control of gene transcription of human genome to attain both the underlying basic principles and rational design of novel chemical compounds that modulate gene expression in chromatin. His research studies have broad implications in human biology and disease, ranging from cell development, to stem cell self-renewal and differentiation, and re-programming to human cancer and inflammation, as well as neurodegenerative disorders. Among his major contributions to science is the Zhou Lab's seminal discovery of the bromodomain as the acetyl-lysine binding domain ('chromatin reader') in gene transcription (Nature 1999), and their first demonstration of druggability and therapeutic potential of bromodomain proteins in gene transcription to treat a wide array of human diseases including cancer and inflammation. This concept has had transformative impacts in epigenetic drug discoveries in the pharmaceutical industry.The Zhou Lab further discovered the tandem PHD finger of DPF3b as a first alternative to the bromodomain for acetyl-lysine binding (Nature 2010), and the PAZ domain as the RNA binding domain in RNAi (Nature 2003). His work also addresses the role of histone lysine methylation (Nature Cell Biol. 2008) as well as long non-coding RNA in epigenetic control of gene transcription in human stem cell maintenance and differentiation (Mol. Cell 2010).Zhou's work in rational design of chemical probes for mechanism-driven research led to the discovery of the HIV Tat/human co-activator PCAF interaction as a potential novel anti-HIV therapy target. 
His group has developed chemical probes that modulate the transcriptional activity of the human tumor suppressor p53 under stress conditions. His recent work includes the development of a novel gene transcriptional silencing technology. Additional research discoveries include structural mechanisms as well as drug target discovery and validation for human cancers, particularly triple-negative breast cancer (TNBC), and inflammatory disorders such as inflammatory bowel disease (IBD) and multiple sclerosis. Current and past society memberships include The Harvey Society, the Biophysical Society, the American Chemical Society, the American Society for Biochemistry and Molecular Biology, the American Association for the Advancement of Science and the New York Academy of Sciences. He serves on multiple editorial boards and reviews grants for the American Cancer Society, the American Heart Association, the National Institutes of Health and the National Science Foundation. Awards and honors:
2003 GlaxoSmithKline Drug Discovery and Development Award
2009 Elected Member, The Academy of Sciences & Arts at Michigan Technological University
2019 The Jacobi Medallion, The Mount Sinai Health System
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ciliospinal center** Ciliospinal center: The ciliospinal center (also known as Budge's center) is a cluster of pre-ganglionic sympathetic neuron cell bodies located in the intermediolateral cell column of the spinal cord at the (C8) T1-T2 spinal levels. It receives afferents from (the posterior part of) the hypothalamus via the (ipsilateral) hypothalamospinal tract, which synapse with the center's pre-ganglionic sympathetic neurons. The efferent, pre-ganglionic axons then leave the spinal cord to enter and ascend in the sympathetic trunk to reach the superior cervical ganglion (SCG), where they synapse with post-ganglionic sympathetic neurons. The post-ganglionic neurons of the SCG then join the internal carotid nerve plexus of the internal carotid artery, accompanying first this artery and subsequently its branches to reach the orbit. In the orbit, they join the long ciliary nerves and short ciliary nerves to reach and innervate the dilator pupillae muscle, mediating pupillary dilatation as part of the pupillary reflex. History: It is associated with a reflex identified by Augustus Volney Waller and Ludwig Julius Budge in 1852.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pleurotolysin** Pleurotolysin: Pleurotolysin (TC# 1.C.97.1.1) is a sphingomyelin-specific cytolysin. Its A (17 kDa; Q8X1M9) and B (59 kDa; Q5W9E8) components are assembled into a transmembrane pore complex. The Pleurotolysin Pore-Forming (Pleurotolysin) Family (TC# 1.C.97) is a family of pore forming proteins belonging to the MACPF superfamily. Function: Proteins with membrane-attack complex/perforin (MACPF) domains have a variety of biological roles, including defense and attack, organismal development, and cell adhesion and signaling. The distribution of these proteins in fungi appears to be restricted to some Pezizomycotina and Basidiomycota species only, in correlation with the aegerolysins (PF06355). These two protein groups coincide in only a few species, and they operate as cytolytic bi-component pore-forming agents. Representative proteins include pleurotolysin B, which has a MACPF domain, the aegerolysin-like protein pleurotolysin A, and the very similar ostreolysin A (TC# 1.C.97.3.2) that has been purified from the oyster mushroom (Pleurotus ostreatus). These act in concert to perforate natural and artificial lipid membranes with high cholesterol and sphingomyelin contents. The complex has a 13-meric rosette-like structure with a central lumen that is ~4-5 nm in diameter. The opened transmembrane pore is non-selectively permeable to ions and smaller neutral solutes, and is a cause of cytolysis of a colloid-osmotic type. Research: Sakurai et al. 2004 cloned complementary and genomic DNAs encoding pleurotolysin, and studied the pore-forming properties of recombinant proteins. Recombinant pleurotolysin A lacking the first methionine was purified as a 17-kDa protein with sphingomyelin-binding activity. The cDNA for pleurotolysin B encoded a precursor consisting of 523 amino acyl residues, of which the 48 N-terminal amino acyl residues were absent in natural pleurotolysin B. Mature and precursor forms of pleurotolysin B were expressed as insoluble 59- and 63-kDa proteins, respectively. Although neither recombinant pleurotolysin A nor B alone was hemolytically active at concentrations of up to 100 mg/ml, together they cooperatively assembled into a membrane pore complex on human erythrocytes and lysed the cells. Homologues: In this TC family, both constituents of pleurotolysin and ostreolysin (A and B) are included under TC#s 1.C.97.1.1 and 1.C.97.1.2, respectively. However, homologues of Pleurotolysin B are found under TC#s 1.C.97.1.3 - 1.C.97.1.9 while homologues of Pleurotolysin A are found under TC#s 1.C.97.2.1 - 1.C.97.2.4 and TC#s 1.C.97.3.1 - 1.C.97.3.8. Pleurotolysins A are not homologous to Pleurotolysins B. While some homologues depend on the presence of both constituents for pore formation, as noted for both pleurotolysin and ostreolysin, some homologues of both A and B can form pores without the other. Pleurotolysin B is in the MACPF superfamily (TC# 1.C.39), while Pleurotolysin A is in the Aegerolysin superfamily. Homologues: Erylysin Another two-component hemolysin, erylysin A and B (EryA and EryB; TC# 1.C.97.1.2), was isolated from an edible mushroom, Pleurotus eryngii. Hemolytic activity was exhibited only by the EryA and EryB mixture. Aegerolysin: While Pleurotolysin B is in the MACPF superfamily (TC# 1.C.39), Pleurotolysin A is in the Aegerolysin superfamily. Several members of the Aegerolysin family have been used as tools to detect and visualize ceramide phosphoethanolamine, a major sphingolipid in invertebrates but not in vertebrates.
It may be distantly related to members of the Equinatoxin Family (TC# 1.C.38). Aegerolysin: The aegerolysin family consists of several bacterial and eukaryotic aegerolysin-like proteins. It has been found that aegerolysin and ostreolysin are expressed during the formation of primordia and fruiting bodies and possibly play a role in the initial phase of fungal fruiting. The bacterial members of this family are expressed during sporulation. Ostreolysin is cytolytic to various erythrocytes and tumor cells because of pore formation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dasgupta's objective** Dasgupta's objective: In the study of hierarchical clustering, Dasgupta's objective is a measure of the quality of a clustering, defined from a similarity measure on the elements to be clustered. It is named after Sanjoy Dasgupta, who formulated it in 2016. Its key property is that, when the similarity comes from an ultrametric space, the optimal clustering for this quality measure follows the underlying structure of the ultrametric space. In this sense, clustering methods that produce good clusterings for this objective can be expected to approximate the ground truth underlying the given similarity measure. In Dasgupta's formulation, the input to a clustering problem consists of similarity scores between certain pairs of elements, represented as an undirected graph G = (V, E), with the elements as its vertices and with non-negative real weights on its edges. Large weights indicate elements that should be considered more similar to each other, while small weights or missing edges indicate pairs of elements that are not similar. A hierarchical clustering can be described as a tree (not necessarily a binary tree) whose leaves are the elements to be clustered; the clusters are then the subsets of elements descending from each tree node, and the size |C| of any cluster C is its number of elements. For each edge uv of the input graph, let w(uv) denote the weight of edge uv and let C(uv) denote the smallest cluster of a given clustering that contains both u and v. Then Dasgupta defines the cost of a clustering to be the sum, over all edges uv in E, of w(uv)·|C(uv)|. Dasgupta's objective: The optimal clustering for this objective is NP-hard to find. However, it is possible to find a clustering that approximates the minimum value of the objective in polynomial time by a divisive (top-down) clustering algorithm that repeatedly subdivides the elements using an approximation algorithm for the sparsest cut problem, the problem of finding a partition that minimizes the ratio of the total weight of cut edges to the total number of cut pairs. Dasgupta's objective: Equivalently, for purposes of approximation, one may minimize the ratio of the total weight of cut edges to the number of elements on the smaller side of the cut. Using the best known approximation for the sparsest cut problem, the approximation ratio of this approach is O(√(log n)).
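As a concrete illustration of the definition above, the sketch below (hypothetical helper names, not from Dasgupta's paper) computes the objective for a hierarchy given as nested tuples and a weighted graph given as a dictionary of edge weights. For every pair of elements that are separated for the first time at some internal node, that node is their lowest common ancestor, so the edge between them is charged the number of leaves under that node.

```python
from itertools import combinations

def dasgupta_cost(tree, weights):
    """Compute Dasgupta's objective for a hierarchical clustering.

    tree    -- a nested tuple whose leaves are the elements (hashable labels);
               each internal node is a tuple of child subtrees.
    weights -- dict mapping frozenset({u, v}) to a non-negative similarity.
    """
    def leaves(node):
        if not isinstance(node, tuple):                  # a leaf element
            return frozenset([node])
        return frozenset().union(*(leaves(child) for child in node))

    def cost(node):
        if not isinstance(node, tuple):
            return 0
        child_leaf_sets = [leaves(child) for child in node]
        cluster = frozenset().union(*child_leaf_sets)
        total = sum(cost(child) for child in node)
        # Edges whose endpoints lie in different children have this node as
        # their lowest common ancestor, so |C(uv)| = |cluster| for them.
        for a, b in combinations(child_leaf_sets, 2):
            for u in a:
                for v in b:
                    total += weights.get(frozenset((u, v)), 0) * len(cluster)
        return total

    return cost(tree)


# Tiny example: a 4-cycle a-b-c-d with unit weights, clustered as ((a, b), (c, d)).
w = {frozenset(e): 1 for e in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]}
print(dasgupta_cost((("a", "b"), ("c", "d")), w))        # 2*2 + 2*4 = 12
```

In the four-cycle example, the clustering ((a, b), (c, d)) charges the edges ab and cd to clusters of size 2 and the edges bc and da to the root cluster of size 4, for a total cost of 12.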
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PIP4K2B** PIP4K2B: Phosphatidylinositol-5-phosphate 4-kinase type-2 beta is an enzyme that in humans is encoded by the PIP4K2B gene.The protein encoded by this gene catalyzes the phosphorylation of phosphatidylinositol 4-phosphate on the fifth hydroxyl of the myo-inositol ring to form phosphatidylinositol 4,5-bisphosphate. This gene is a member of the phosphatidylinositol-4-phosphate 5-kinase family. The encoded protein sequence does not show similarity to other kinases, but the protein does exhibit kinase activity. Additionally, the encoded protein interacts with p55 TNF receptor. Interactions: PIP4K2B has been shown to interact with TNFRSF1A. In addition, PIP4K2B has been shown to interact with PIP4K2A and may modulate the cellular localisation of PIP4K2A. Structure: The structure of PIP4K2B has been determined through X-ray crystallography.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nummular keratitis** Nummular keratitis: Nummular keratitis is a feature of viral keratoconjunctivitis. It is a common feature of adenoviral keratoconjunctivitis (an ocular adenovirus infection), as well as of approximately one third of cases of herpes zoster ophthalmicus infection. It represents the presence of anterior stromal infiltrates. Unilateral or bilateral subepithelial lesions of the cornea may be present. Slit lamp examination reveals multiple tiny granular deposits surrounded by a halo of stromal haze. After healing, residual 'nummular scars' often remain. Disciform keratitis occurs in 50% of individuals with nummular keratitis, but nummular keratitis always precedes disciform keratitis. Treatment:
Topical NSAIDs
Lubricating eye drops
Topical dilute steroid drops in tapering doses (debatable, see: Adenoviral keratoconjunctivitis)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Karyorrhexis** Karyorrhexis: Karyorrhexis (from Greek κάρυον karyon 'kernel, seed, nucleus' and ῥῆξις rhexis 'bursting') is the destructive fragmentation of the nucleus of a dying cell whereby its chromatin is distributed irregularly throughout the cytoplasm. It is usually preceded by pyknosis and can occur as a result of either programmed cell death (apoptosis), cellular senescence, or necrosis. In apoptosis, the cleavage of DNA is done by Ca2+ and Mg2+ -dependent endonucleases.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Two-photon photoelectron spectroscopy** Two-photon photoelectron spectroscopy: Time-resolved two-photon photoelectron (2PPE) spectroscopy is a time-resolved spectroscopy technique which is used to study electronic structure and electronic excitations at surfaces. The technique utilizes femtosecond to picosecond laser pulses in order to first photoexcite an electron. After a time delay, the excited electron is photoemitted into a free electron state by a second pulse. The kinetic energy and the emission angle of the photoelectron are measured in an electron energy analyzer. To facilitate investigations on the population and relaxation pathways of the excitation, this measurement is performed at different time delays. Two-photon photoelectron spectroscopy: This technique has been used for many different types of materials to study a variety of exotic electron behaviors, including image potential states at metal surfaces, and electron dynamics at molecular interfaces. Basic physics: The final kinetic energy of the electron can be modeled by E_kin = E_pump + E_probe − E_B − Φ, where E_B is the binding energy of the initial state, E_kin is the kinetic energy of the photoemitted electron, Φ is the work function of the material in question, and E_pump and E_probe are the photon energies of the pump and probe laser pulses, respectively. Without a time delay, this equation is exact. However, as the delay between the pump and probe pulses increases, the excited electron may relax to lower energy. Hence the energy of the photoemitted electron is lowered. With a large enough time delay between the two pulses, the electron will relax all the way back to its original state. The timescales at which the electronic relaxation occurs, as well as the relaxation mechanism (either via vibronic coupling or electronic coupling), are of interest for applications in functional devices such as solar cells and light-emitting diodes. Experimental configuration: Time-resolved two-photon photoelectron spectroscopy usually employs a combination of ultrafast optical technology as well as ultrahigh vacuum components. The main optical component is an ultrafast (femtosecond) laser system which generates pulses in the near infrared. Nonlinear optics are used to generate photon energies in the visible and ultraviolet spectral range. Typically, ultraviolet radiation is required to photoemit electrons. In order to allow for time-resolved experiments, a fine adjustment delay stage must be employed in order to manipulate the time delay between the pump and the probe pulse.
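The energy balance described under "Basic physics" is simple enough to evaluate directly. The short Python sketch below is only illustrative; the function name and the numerical values are invented for the example and are not taken from any particular experiment. It returns the expected photoelectron kinetic energy from the pump and probe photon energies, the initial-state binding energy, the work function, and any energy already lost to relaxation during the pump-probe delay.

```python
def photoelectron_kinetic_energy(e_pump, e_probe, binding_energy, work_function,
                                 relaxation=0.0):
    """Kinetic energy (eV) of the photoemitted electron in a 2PPE measurement.

    E_kin = E_pump + E_probe - E_B - phi, reduced by any energy the excited
    electron has relaxed during the pump-probe delay (illustrative model only).
    """
    return e_pump + e_probe - binding_energy - work_function - relaxation


# Example with assumed values: 3.1 eV pump, 4.6 eV probe, a state bound by
# 1.5 eV, work function 4.5 eV, and 0.2 eV of relaxation during the delay.
print(photoelectron_kinetic_energy(3.1, 4.6, 1.5, 4.5, relaxation=0.2))
# approximately 1.5 eV
```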
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**IPv4 shared address space** IPv4 shared address space: In order to ensure proper working of carrier-grade NAT (CGN), and, by doing so, alleviating the demand for the last remaining IPv4 addresses, a /10 size IPv4 address block was assigned by the Internet Assigned Numbers Authority (IANA) to be used as shared address space. This block of addresses is specifically meant to be used by Internet service providers (or ISPs) that implement carrier-grade NAT, to connect their customer-premises equipment (CPE) to their core routers. Instead of using unique addresses from the rapidly depleting pool of available globally unique IPv4 addresses, ISPs use addresses in 100.64.0.0/10 for this purpose. Because the network between CPEs and the ISP's routers is private to each ISP, all ISPs may share this block of addresses. Background: If an ISP deploys a CGN and uses private Internet address space (networks 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to connect their customers, there is a risk that customer equipment using an internal network in the same range will stop working. The reason is that routing will not work if the same address ranges are used on both the private and public sides of a customer's network address translation (NAT) equipment. Normal packet flow can therefore be disrupted and the customer effectively cut off from the Internet, unless the customer chooses another private address range that does not conflict with the range selected by their ISP. Background: This prompted some ISPs to develop policy within the American Registry for Internet Numbers (ARIN) to allocate new private address space for CGNs. ARIN, however, deferred to the Internet Engineering Task Force (IETF) before implementing the policy, indicating that the matter was not a typical allocation but a reservation for technical purposes. In 2012, the IETF defined a Shared Address Space for use in ISP CGN deployments and NAT devices that can handle the same addresses occurring both on inbound and outbound interfaces. ARIN returned space to the IANA as needed for this allocation and "The allocated address block is 100.64.0.0/10". Transition to IPv6: The use of shared address space is one of the various methods to allow transition from IPv4 to IPv6. Transition to IPv6: Its main purpose was to postpone the depletion of IPv4 addresses, by allowing ISPs to introduce a second layer of NATting. A common practice is to give CPEs a unique IPv4 address on their Internet-facing interface and use NAT to hide all addresses on the home LAN. Since the pool of available public IPv4 addresses is depleted, it is no longer possible for most ISPs to assign unique IPv4 addresses to CPEs, because there are none left for them to acquire. Instead, an address in the 100.64.0.0/10 range is assigned on the CPE's Internet-facing interface, and this address is translated again to one of the public IPv4 addresses of the ISP's core routers. Using shared address space allows ISPs to continue to use IPv4 as they were used to. Transition to IPv6: This scheme hides a large number of IP addresses behind a small set of public addresses, the same way the CPE does this locally, slowing down the rate at which IPv4 addresses are depleted. The shared address space contains 2²² (4,194,304) addresses, so each ISP is able to connect over 4 million subscribers this way. Other occurrences: In BIND, empty reverse mapping zones for 100.64.0.0/16 through 100.127.0.0/16 (64 zones in total) are automatically created in the 'internal' view, if not configured otherwise.
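Because 100.64.0.0/10 is neither RFC 1918 private space nor ordinary public space, software that makes decisions based on address ranges may need to recognize it explicitly. The following Python sketch is illustrative only (the classify helper is my own, not part of any standard API); it uses the standard ipaddress module to test whether an address falls inside the shared address space or the RFC 1918 private ranges, and confirms that the /10 block holds 2²² addresses.

```python
import ipaddress

SHARED = ipaddress.ip_network("100.64.0.0/10")      # shared address space (RFC 6598)
PRIVATE = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]   # RFC 1918 ranges


def classify(addr: str) -> str:
    """Classify an IPv4 address as shared (CGN), RFC 1918 private, or other."""
    ip = ipaddress.ip_address(addr)
    if ip in SHARED:
        return "shared address space (carrier-grade NAT)"
    if any(ip in net for net in PRIVATE):
        return "RFC 1918 private"
    return "other"


for a in ("100.64.1.23", "100.127.255.254", "100.128.0.1", "192.168.1.10"):
    print(a, "->", classify(a))

# The /10 block holds 2**22 = 4,194,304 addresses:
print(SHARED.num_addresses)   # 4194304
```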
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Parietal bone** Parietal bone: The parietal bones () are two bones in the skull which, when joined at a fibrous joint, form the sides and roof of the cranium. In humans, each bone is roughly quadrilateral in form, and has two surfaces, four borders, and four angles. It is named from the Latin paries (-ietis), wall. Surfaces: External The external surface [Fig. 1] is convex, smooth, and marked near the center by an eminence, the parietal eminence (tuber parietale), which indicates the point where ossification commenced. Crossing the middle of the bone in an arched direction are two curved lines, the superior and inferior temporal lines; the former gives attachment to the temporal fascia, and the latter indicates the upper limit of the muscular origin of the temporal muscle. Above these lines the bone is covered by a tough layer of fibrous tissue – the epicranial aponeurosis; below them it forms part of the temporal fossa, and affords attachment to the temporal muscle. At the back part and close to the upper or sagittal border is the parietal foramen which transmits a vein to the superior sagittal sinus, and sometimes a small branch of the occipital artery; it is not constantly present, and its size varies considerably. Internal The internal surface [Fig. 2] is concave; it presents depressions corresponding to the cerebral convolutions, and numerous furrows (grooves) for the ramifications of the middle meningeal artery; the latter run upward and backward from the sphenoidal angle, and from the central and posterior part of the squamous border. Along the upper margin is a shallow groove, which, together with that on the opposite parietal, forms a channel, the sagittal sulcus, for the superior sagittal sinus; the edges of the sulcus afford attachment to the falx cerebri. Near the groove are several depressions, best marked in the skulls of old persons, for the arachnoid granulations (Pacchionian bodies). In the groove is the internal opening of the parietal foramen when that aperture exists. Borders: The sagittal border, the longest and thickest, is dentated (has toothlike projections) and articulates with its fellow of the opposite side, forming the sagittal suture. The frontal border is deeply serrated, and bevelled at the expense of the outer surface above and of the inner below; it articulates with the frontal bone, forming half of the coronal suture. The point where the coronal suture intersects with the sagittal suture forms a T-shape and is called the bregma. Borders: The squamous border is divided into three parts: of these: the anterior is thin and pointed, bevelled at the expense of the outer surface, and overlapped by the tip of the great wing of the sphenoid; the middle portion is arched, bevelled at the expense of the outer surface, and overlapped by the squama of the temporal; the posterior part is thick and serrated for articulation with the mastoid portion of the temporal. Borders: The occipital border, deeply denticulated (finely toothed), articulates with the occipital bone, forming half of the lambdoid suture. That point where the sagittal suture intersects the lambdoid suture is called the lambda, because of its resemblance to the Greek letter. Angles: The frontal angle is practically a right angle, and corresponds with the point of meeting of the sagittal and coronal sutures; this point is named the bregma; in the fetal skull and for about a year and a half after birth this region is membranous, and is called the anterior fontanelle. 
The sphenoidal angle, thin and acute, is received into the interval between the frontal bone and the great wing of the sphenoid. Its inner surface is marked by a deep groove, sometimes a canal, for the anterior divisions of the middle meningeal artery. The occipital angle is rounded and corresponds with the point of meeting of the sagittal and lambdoidal sutures—a point which is termed the lambda; in the fetus this part of the skull is membranous, and is called the posterior fontanelle. The mastoid angle is truncated; it articulates with the occipital bone and with the mastoid portion of the temporal, and presents on its inner surface a broad, shallow groove which lodges part of the transverse sinus. The point of meeting of this angle with the occipital and the mastoid part of the temporal is named the asterion. Ossification: The parietal bone is ossified in membrane from a single center, which appears at the parietal eminence about the eighth week of fetal life. Ossification gradually extends in a radial manner from the center toward the margins of the bone; the angles are consequently the parts last formed, and it is here that the fontanelles exist. Occasionally the parietal bone is divided into two parts, upper and lower, by an antero-posterior suture. In other animals: In non-human vertebrates, the parietal bones typically form the rear or central part of the skull roof, lying behind the frontal bones. In many non-mammalian tetrapods, they are bordered to the rear by a pair of postparietal bones that may be solely in the roof of the skull, or slope downwards to contribute to the back of the skull, depending on the species. In the living tuatara, and many fossil species, a small opening, the parietal foramen, lies between the two parietal bones. This opening is the location of a third eye in the midline of the skull, which is much smaller than the two main eyes. In other animals: In dinosaurs The parietal bone is usually present at the posterior end of the skull, near the midline. This bone is part of the skull roof, which is a set of bones that cover the brain, eyes and nostrils. The parietal bones make contact with several other bones in the skull. The anterior part of the bone articulates with the frontal bone and the postorbital bone. The posterior part of the bone articulates with the squamosal bone, and less commonly the supraoccipital bone. The bone-supported neck frills of ceratopsians were formed by extensions of the parietal bone. These frills, which overhang the neck and extend past the rest of the skull, are a diagnostic trait of ceratopsians. The recognizable skull domes present in pachycephalosaurs were formed by the fusion of the frontal and parietal bones and the addition of thick deposits of bone to that unit.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Digital Quran** Digital Quran: The digital Quran is the text of the Qur'an processed or distributed as an electronic text, or, more specifically, an electronic device dedicated to displaying the text of the Qur'an and playing digital recordings of Qur'an readings. History: Qur'anic software on CD-ROM has been developed since the early 1990s. Online texts began to be hosted by Islamic websites from the 2000s. Dedicated devices of this kind were first marketed in Indonesia. These devices were capable of audio playback of recorded recitations of the Qur'an with synchronized on-screen Arabic text. They allowed basic navigation of the Quran, with the ability for the user to select a specific surah (chapter) and ayah (verse). Translations of the Quran into other languages were also included, sometimes synchronized with the original Arabic recitations. The products were mass-produced in China at an affordable price; however, this was achieved by sacrificing expenditure on research and development. Subsequent models introduced color screens. Since the availability of more powerful mobile devices such as smartphones, the focus has shifted to the production of Quranic software for such devices rather than dedicated "digital Quran" devices. Usage: There is debate surrounding the degree to which a digital form of the Qur'an should be treated like a hard copy in terms of etiquette when reciting from it. For example, should the practices of wudu, qibla, or brushing one's teeth with a miswak be observed while reading from a digital Qur'an. Commenters speculated about how the special barakah or contagion heuristic associated with the Qur'an translates to electronic texts. Other observers noted that this way of thinking is foreign to the devices' users, who adopt western digital technology unthinkingly. Myrvold (2010) summarizes the debate on how Qur'anic etexts and the devices holding them should be handled, citing a fatwa issued by the "Ask Imam" website to the effect that ritual purity should only be regarded in connection with such a device during the time Qur'anic text is actually being displayed. Mohammed Zakariah has come to the conclusion that it is because of the digital Qur'an that Islam has been able to spread and diversify among cultures. This has led to the expansion of Islam among people of faith, scholars who are now able to study the book, and scientists who see new opportunities arise. Thomas Hoffmann also discusses, in his book on new information and technologies, that it is because of these new and creative ways of simplifying the Qur'an that the world sees a new wave of "lay" users rather than experts and self-proclaimed experts in the field. Another journal article, by Engku Alwi, surveyed over 200 Muslim students on college campuses to see how this new form of technology, phone and tablet applications, was received. It concluded that the technology was well received and even had good effects on recitation and memorization, but that a large percentage of students were worried or confused about the rules of recitation when using the devices. As a digital Muṣḥaf: A digital Qur'ān serves as a digital Muṣḥaf, and faces unique challenges because of it. The critical challenges in producing a flawless digital Muṣḥaf are correct encoding, correct computer typography, and facsimile rendering on all browsers, operating systems and devices. As a digital Muṣḥaf: 1. Correct encoding is hampered by constraints imposed by the Unicode Standard. For instance, only recently were the extra characters encoded to represent the so-called open tanwīn. 
Correct encoding is also hampered by the fact that input methods, i.e., keyboard layouts for Arabic, are based on modern everyday orthography, which differs from Qur'ān orthography in many respects: there are more characters used in the Qur'ān, and some characters are different in terms of Unicode, such as yā' with or without dots in final position. As a digital Muṣḥaf: 2. Correct computer typography is hampered by mechanisms that are lacking because the industry is not aware that they are needed. In particular, the category of "amphibious characters", characters that can occur as both main letters and as diacritics depending on context, cannot be handled by conventional font layout engines. Last but not least, correct computer typography should reproduce Islamic script as accurately as possible. Unfortunately, Arabic typography has a bias to adapt or reduce itself to constraints of Western technology that was not designed to handle Arabic at all. This circumstance adds an obvious complication to the task of producing a flawless digital Qur'ān. As a digital Muṣḥaf: 3. Facsimile rendering on all devices is de facto impossible with conventional computer typography, because it depends on proprietary operating systems, proprietary font layout engines and often inaccurate and incomplete Arabic typefaces. The first digital Muṣḥaf that takes all these considerations into account is the Omani digital Muṣḥaf: www.mushafmuscat.om, which is described in this webcast by the Bibliotheca Alexandrina in Egypt. Recitation: Since the first digital versions of the Qur'an, more digital resources relating to the Qur'an and Islam have emerged as well. One can find videos containing Tajweed (recitation) on sites that equip users in learning recitation. This can be helpful because both beginner and professional resources can be found and used as tools in learning the practice of Tajweed. If the digital content and context of what these followers are using is trustworthy, then listening to Tajweed online can help to provide "spiritual merit" to them. The art of Tajweed is very important in Muslim culture and if followers choose to use these online resources to explore this art, then it can enhance their prayer lives. Through the use of digital resources, a follower of Islam is given the ability to learn even if their life circumstances (work, location, health, etc.) limit them from being able to physically go somewhere to learn about the Qur'an and Islam. Issues: An issue emerging alongside the growing usage of digital copies of the Qur'an is confirming the authenticity of digital copies. Given that the Qur'an has been maintained in its original, unedited state for fourteen centuries, maintaining this originality against tampering is of the utmost importance for digital Qur'anic content. While hard copies of the Qur'an are meticulously examined to assure accuracy before they are made available for sale, many digital copies that are available for free on the internet are not subjected to the same degree of scrutiny. Among online copies of the Qur'an, inaccuracies and tampering that have been found have gone largely unnoticed by readers of the websites. Because of this, there are many proposed methods to rectify the issue of authenticity and establish a method to verify the integrity of digital Qur'anic content. 
One controversial method of verifying and displaying that a piece of digital Qur'anic content is authentic is the usage of digital watermarks on verified digital images of the Qur'an, which some argue is a form of modifying the Qur'an as well. Other proposed methods of ensuring authenticity include cryptography, steganography, and the usage of digital signatures. Digital copies of the Qur'an can be found in many different styles of Arabic, and in each style the diacritics (symbols or punctuation in Arabic writing) differ. Diacritics being misplaced or altered does not affect everyone's ability to get the correct meaning out of this text, but it does affect non-Arabic speakers' ability. One method used to try to prevent the meaning of the Qur'an from being misconstrued is the use of the Qur'an Quote Algorithm. This algorithm allows people to take a verse and search for its true meaning by leaving out the diacritics, which could be interpreted incorrectly by non-Arabic speakers, and evaluating just the words.
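The diacritic problem described above can be illustrated with a small amount of code. The sketch below is not the published Qur'an Quote Algorithm; it is only a minimal, assumed illustration of the kind of normalization step such an approach implies, in which Arabic combining marks (harakat, tanwīn, shadda, sukūn) are stripped so that differently vocalised renderings of the same words compare equal.

```python
import unicodedata

def strip_arabic_diacritics(text: str) -> str:
    """Remove Arabic diacritical marks so that verses written with different
    vocalisation compare equal. Combining marks are identified by their
    Unicode general category 'Mn' (nonspacing mark)."""
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed
                   if unicodedata.category(ch) != "Mn")

# "bismi" written with and without short vowels normalises to the same string.
vocalised = "بِسْمِ"
unvocalised = "بسم"
print(strip_arabic_diacritics(vocalised) == unvocalised)   # True
```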
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Calcium imaging** Calcium imaging: Calcium imaging is a microscopy technique to optically measure the calcium (Ca2+) status of an isolated cell, tissue or medium. Calcium imaging takes advantage of calcium indicators, fluorescent molecules that respond to the binding of Ca2+ ions by fluorescence properties. Two main classes of calcium indicators exist: chemical indicators and genetically encoded calcium indicators (GECI). This technique has allowed studies of calcium signalling in a wide variety of cell types. In neurons, electrical activity is always accompanied by an influx of Ca2+ ions. Thus, calcium imaging can be used to monitor the electrical activity in hundreds of neurons in cell culture or in living animals, which has made it possible to dissect the function of neuronal circuits. Chemical indicators: Chemical indicators are small molecules that can chelate calcium ions. All these molecules are based on an EGTA homologue called BAPTA, with high selectivity for calcium (Ca2+) ions versus magnesium (Mg2+) ions. This group of indicators includes fura-2, indo-1, fluo-3, fluo-4, Calcium Green-1. Chemical indicators: These dyes are often used with the chelator carboxyl groups masked as acetoxymethyl esters, in order to render the molecule lipophilic and to allow easy entrance into the cell. Once this form of the indicator is in the cell, cellular esterases will free the carboxyl groups and the indicator will be able to bind calcium. The free acid form of the dyes (i.e. without the acetoxymethyl ester modification) can also be directly injected into cells via a microelectrode or micropipette which removes uncertainties as to the cellular compartment holding the dye (the acetoxymethyl ester can also enter the endoplasmic reticulum and mitochondria). Binding of a Ca2+ ion to a fluorescent indicator molecule leads to either an increase in quantum yield of fluorescence or emission/excitation wavelength shift. Individual chemical Ca2+ fluorescent indicators are utilized for cytosolic calcium measurements in a wide variety of cellular preparations. The first real time (video rate) Ca2+ imaging was carried out in 1986 in cardiac cells using intensified video cameras. Later development of the technique using laser scanning confocal microscopes revealed sub-cellular Ca2+ signals in the form of Ca2+ sparks and Ca2+ blips. Relative responses from a combination of chemical Ca2+ fluorescent indicators were also used to quantify calcium transients in intracellular organelles such as mitochondria.Calcium imaging, also referred to as calcium mapping, is also used to perform research on myocardial tissue. Calcium mapping is a ubiquitous technique used on whole, isolated hearts such as mouse, rat, and rabbit species. Genetically encoded calcium indicators: Genetically encoded calcium indicators (GECIs) are powerful tools useful for in vivo imaging of cellular, developmental, and physiological processes. GECIs do not need to be acutely loaded into cells; instead the genes encoding for these proteins can be introduced into individual cells or cell lines by various transfection methods. It is also possible to create transgenic animals expressing the indicator in all cells or selectively in certain cellular subtypes. GECIs are used to study neurons, T-cells, cardiomyocytes, and other cell types. Some GECIs report calcium by direct emission of photons (luminescence), but most rely on fluorescent proteins as reporters, including the green fluorescent protein GFP and its variants (eGFP, YFP, CFP). 
Genetically encoded calcium indicators: Of the fluorescent reporters, calcium indicator systems can be classified into single fluorescent protein (FP) systems and paired fluorescent protein systems. Camgaroos were one of the first developed variants involving a single protein system. Camgaroos take advantage of calmodulin (CaM), a calcium binding protein. In these structures, CaM is inserted in the middle of yellow fluorescent protein (YFP) at Y145. Previous mutagenesis studies revealed that mutations at this position conferred pH stability while maintaining fluorescent properties, making Y145 an insertion point of interest. Additionally, the N and C termini of YFP are linked by a peptide linker (GGTGGS). When CaM binds to Ca2+, the effective pKa is lowered, allowing for chromophore deprotonation. This results in increased fluorescence upon calcium binding in an intensiometric fashion. Such detection is in contrast with ratiometric systems, in which there is a change in the absorbance/emission spectra as a result of Ca2+ binding. A later developed single-FP system, dubbed G-CaMP, also invokes circularly permuted GFP. One terminus is fused with CaM, and the other terminus is fused with M13 (the calmodulin-binding domain of myosin light chain kinase). The protein is designed such that the termini are close in space, allowing Ca2+ binding to cause conformational changes and chromophore modulation, allowing for increased fluorescence. G-CaMP and its refined variants have nanomolar binding affinities. A final single protein variant is CatchER, which is generally considered to be a lower affinity indicator. Its calcium binding pocket is quite negative; binding of the cation helps to shield the large concentration of negative charge and allows for recovered fluorescence. In contrast to these systems are paired fluorescent protein systems, which include the prototypical Cameleons. Cameleons consist of two different fluorescent proteins, CaM, M13, and a glycylglycine linker. In the absence of Ca2+, only the donor blue-shifted fluorescent protein will be fluorescent. However, a conformational change caused by calcium binding repositions the red-shifted fluorescent protein, allowing for FRET (Förster resonance energy transfer) to take place. Cameleon indicators produce a ratiometric signal (i.e. the measured FRET efficiency depends on the calcium concentration). Original variants of cameleons were more sensitive to Ca2+ and were acid quenched. Such shortcomings were abrogated by Q69K and V68L mutations. Both of these residues were close to the buried anionic chromophore and these mutations probably hinder protonation, conferring greater pH resistance. Genetically encoded calcium indicators: Of growing importance in calcium detection are near-IR (NIR) GECIs, which may open up avenues for multiplexing different indicator systems and allowing deeper tissue penetration. NIRs rely on biliverdin-binding fluorescent proteins, which are largely derived from bacterial phytochromes. NIR systems are similar to inverse pericams in that both experience a decrease in fluorescence upon Ca2+ binding. RCaMPs and RGECOs are functional at 700+ nm, but are quite dim. A Cameleon analog involving NIR FRET has been successfully constructed as well. A special class of GECIs is designed to form a permanent fluorescent tag in active neurons. They are based on the photoswitchable protein Eos, which turns from green to red through photocatalyzed (with violet light) backbone cleavage. 
Combined with the calcium sensor calmodulin (CaM), violet light photoconverts only neurons that have elevated calcium levels. SynTagMA is a synapse-targeted version of CaMPARI2. While fluorescent systems are widely used, bioluminescent Ca2+ reporters may also hold potential because of their ability to abrogate autofluorescence, photobleaching (no excitation wavelength is needed), biological degradation and toxicity, in addition to higher signal-to-noise ratios. Such systems may rely on aequorin and the luciferin coelenterazine. Ca2+ binding causes a conformational change that facilitates coelenterazine oxidation. The resultant photoproduct emits blue light as it returns to the ground state. Colocalization of aequorin with GFP facilitates BRET/CRET (Bioluminescence or Chemiluminescence Resonance Energy Transfer), resulting in a 19- to 65-fold brightness increase. Such structures can be used to probe millimolar to nanomolar calcium concentrations. A similar system invokes obelin and its luciferin coelenteramide, which may possess a faster calcium response time and Mg2+ insensitivity compared with its aequorin counterpart. Such systems can also leverage the self-assembly of luciferase components. In a system dubbed “nano-lantern,” the luciferase RLuc8 is split and placed on different ends of CaM. Calcium binding brings the RLuc8 components into close proximity, reforming the luciferase and allowing it to transfer energy to an acceptor fluorescent protein. To minimize damage to the visualized cells, two-photon microscopy is often invoked to detect the fluorescence from the reporters. The use of near-IR wavelengths and minimization of axial spread of the point spread function allows for nanometer resolution and deep penetration into the tissue. The dynamic range is often determined from such measurements. For non-ratiometric indicators (typically single protein indicators), it is the ratio of the fluorescence intensities obtained under Ca2+ saturated and depleted conditions, respectively. However, for ratiometric indicators, the dynamic range is the ratio of the maximum FRET efficiency ratio (calcium saturated) to the minimum FRET efficiency ratio (calcium depleted). Yet another common quantity used to measure signals produced by calcium concentration fluxes is the signal-to-baseline ratio (SBR), which is simply the ratio of the change in fluorescence (F − F0) over the baseline fluorescence. This can be related to the SNR (signal-to-noise ratio) by multiplying the SBR by the square root of the number of counted photons. Usage: Regardless of the type of indicator used, the imaging procedure is generally very similar. Cells loaded with an indicator, or expressing it in the case of a GECI, can be viewed using a fluorescence microscope and captured by a Scientific CMOS (sCMOS) camera or CCD camera. Confocal and two-photon microscopes provide optical sectioning ability so that calcium signals can be resolved in microdomains such as dendritic spines or synaptic boutons, even in thick samples such as mammalian brains. 
Images are analyzed by measuring fluorescence intensity changes for a single wavelength or for two wavelengths expressed as a ratio (ratiometric indicators). If necessary, the derived fluorescence intensities and ratios may be plotted against calibrated values for known Ca2+ levels to measure absolute Ca2+ concentrations. Light field microscopy methods extend the functional readout of neural activity to 3D volumes. Usage: Methods such as fiber photometry, miniscopes, and two-photon microscopy offer calcium imaging in freely behaving and head-fixed animal models.
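The intensity-based analysis described above can be sketched in a few lines. The following Python fragment is an illustrative sketch only (the function names, the baseline window, and the photon count are assumptions made here, not part of any standard package): it computes the signal-to-baseline ratio (F − F0)/F0 from a fluorescence trace and converts its peak into a rough shot-noise-limited SNR by scaling with the square root of the counted photons, as stated in the text.

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=50):
    """Signal-to-baseline ratio (SBR): (F - F0) / F0, with F0 estimated
    from the mean of an initial baseline window (an assumption here)."""
    f0 = np.mean(trace[:baseline_frames])
    return (trace - f0) / f0

def approximate_snr(sbr, photons_per_frame):
    """Shot-noise-limited SNR estimate: SBR scaled by the square root of
    the number of counted photons, as described in the text."""
    return sbr * np.sqrt(photons_per_frame)

# Toy example: a synthetic fluorescence trace containing one calcium transient.
rng = np.random.default_rng(0)
frames = 300
trace = 100 + rng.normal(0, 2, frames)                 # baseline fluorescence (a.u.)
trace[120:180] += 40 * np.exp(-np.arange(60) / 20.0)   # decaying transient

dff = delta_f_over_f(trace)
snr = approximate_snr(dff.max(), photons_per_frame=1e4)
print(f"peak dF/F = {dff.max():.2f}, approximate SNR = {snr:.1f}")
```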
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Weston cell** Weston cell: The Weston cell or Weston standard cell is a wet-chemical cell that produces a highly stable voltage suitable as a laboratory standard for calibration of voltmeters. Invented by Edward Weston in 1893, it was adopted as the International Standard for EMF from 1911 until superseded by the Josephson voltage standard in 1990. Chemistry: The anode is an amalgam of cadmium with mercury, with a cathode of pure mercury over which a paste of mercurous sulfate and mercury is placed. The electrolyte is a saturated solution of cadmium sulfate, and the depolarizer is a paste of mercurous sulfate. The cell is set up in an H-shaped glass vessel with the cadmium amalgam in one leg and the pure mercury in the other. Electrical connections to the cadmium amalgam and the mercury are made by platinum wires fused through the lower ends of the legs. Anode reaction: Cd(s) → Cd2+(aq) + 2e−. Cathode reaction: Hg2SO4(s) + 2e− → 2Hg(l) + SO42−(aq). Reference cells must be applied in such a way that no current is drawn from them. Characteristics: The original design was a saturated cadmium cell producing a 1.018638 V reference and had the advantage of having a lower temperature coefficient than the previously used Clark cell. One of the great advantages of the Weston normal cell is its small change of electromotive force with change of temperature. At any temperature t between 0 °C and 40 °C, Et/V = E20/V − 0.0000406(t/°C − 20) − 0.00000095(t/°C − 20)² + 0.00000001(t/°C − 20)³. This temperature formula was adopted by the London conference of 1908. The temperature coefficient can be reduced by shifting to an unsaturated design, the predominant type today. However, an unsaturated cell's output decreases by some 80 microvolts per year, which is compensated by periodic calibration against a saturated cell. Literature: Practical Electricity by W. E. Ayrton and T. Mather, published by Cassell and Company, London, 1911, pp. 198–203; U.S. Patent 494,827, "Voltaic cell"; Standard Cells, Their Construction, Maintenance, and Characteristics by Walter J. Hamer, National Bureau of Standards Monograph 84, January 15, 1965.
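As a worked example of the 1908 temperature correction quoted above, the short Python sketch below evaluates the EMF of a saturated cell at a few temperatures. It is an illustrative fragment, not part of any metrology package; taking 1.018638 V as the 20 °C reference value is simply the saturated-cell figure quoted in this article.

```python
def weston_emf(t_celsius, e20=1.018638):
    """EMF of a saturated Weston cell at temperature t (valid roughly 0-40 C),
    using the correction formula adopted by the London conference of 1908.
    e20 is the reference EMF at 20 C; 1.018638 V is the value quoted above."""
    dt = t_celsius - 20.0
    return e20 - 0.0000406 * dt - 0.00000095 * dt**2 + 0.00000001 * dt**3

# The EMF changes by only tens of microvolts a few degrees away from 20 C.
for t in (15, 20, 25, 30):
    print(f"{t} C: {weston_emf(t):.6f} V")
```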
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hereditary lobular breast cancer** Hereditary lobular breast cancer: Hereditary lobular breast cancer is a rare inherited cancer predisposition associated with pathogenic CDH1 germline mutations, and without apparent correlation with the hereditary diffuse gastric cancer syndrome. Research studies identified novel CDH1 germline variants in women with diagnosed lobular breast cancer (in invasive and/or in situ histotype) and without any family history of gastric carcinoma. In 2018, Giovanni Corso et al. first defined this syndrome as a new cancer predisposition, and the authors suggested additional clinical criteria for testing CDH1 in lobular breast cancer patients. Hereditary lobular breast cancer: In 2020, the International Gastric Cancer Linkage Consortium officially recognized hereditary lobular breast cancer as a novel and independent syndrome. To date, about 40 families clustering for lobular breast cancer have been reported in association with CDH1 germline mutations but without association with diffuse gastric cancer (unpublished data). CDH1 inactivation in lobular breast cancer: In a CDH1 wild-type situation, lobules are well-organized structures characterized by cell-cell adhesion mediated through the homophilic binding of E-cadherin molecules on adjacent cells. In case of a CDH1 mutation, the E-cadherin function can be deregulated, with decreased cell-cell adhesion and increased cell proliferation, so-called lobular hyperplasia. Subsequently, in case of a second-hit CDH1 inactivation, E-cadherin protein expression is undetectable and, consequently, the organization of the lobule is disrupted. During this pathway, abnormal cells emerge and accumulate in the lobules, giving rise to lobular intraepithelial neoplasia. Finally, cancer cells disrupt the basement membrane and invade surrounding breast tissues, a stage that is classified as invasive lobular carcinoma. Criteria for genetic screening: Clinical criteria for genetic testing were suggested as follows: (a) bilateral lobular breast cancer, with or without family history of breast cancer, with age at onset <50 years; and (b) unilateral lobular breast cancer with family history of breast cancer, with age at onset <45 years. In this context, it has been estimated that the frequency of E-cadherin germline mutation is a rare event, affecting about 3% of the screened population. However, there are ongoing studies to assess the penetrance and the cancer risk in the hereditary lobular breast cancer syndrome. Measures for risk management: Actions to minimize the risk are prophylactic bilateral mastectomy, flat closure without reconstruction, or six-month breast surveillance. In case of an important family history of breast cancer with CDH1 germline mutations, prophylactic bilateral mastectomy with or without breast reconstruction is recommended after careful genetic counseling. In general, in hereditary lobular breast cancer associated with CDH1 mutations and in the absence of a family history of gastric cancer, prophylactic gastrectomy is not indicated; therefore, yearly endoscopic surveillance should be proposed. In case of breast surveillance only, annual breast magnetic resonance imaging, followed by mammography and ultrasound at six-month intervals, is recommended. Chemoprevention with low-dose Tamoxifen is also considered.
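The suggested testing criteria amount to a simple decision rule. The sketch below is purely illustrative (the function and parameter names are invented here, and it is not clinical guidance); it only encodes the two published criteria: bilateral lobular breast cancer with onset before 50 years, or unilateral lobular breast cancer with a family history of breast cancer and onset before 45 years.

```python
def meets_cdh1_testing_criteria(bilateral, age_at_onset, family_history_breast_cancer):
    """Illustrative encoding of the suggested criteria for CDH1 germline testing
    in lobular breast cancer patients (not clinical guidance):
    (a) bilateral lobular breast cancer, age at onset < 50 years
        (with or without family history);
    (b) unilateral lobular breast cancer with a family history of breast cancer,
        age at onset < 45 years."""
    if bilateral and age_at_onset < 50:
        return True
    if (not bilateral) and family_history_breast_cancer and age_at_onset < 45:
        return True
    return False

# Example: unilateral disease at 43 with a positive family history meets criterion (b).
print(meets_cdh1_testing_criteria(bilateral=False, age_at_onset=43,
                                  family_history_breast_cancer=True))
```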
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stream cipher** Stream cipher: A stream cipher is a symmetric key cipher where plaintext digits are combined with a pseudorandom cipher digit stream (keystream). In a stream cipher, each plaintext digit is encrypted one at a time with the corresponding digit of the keystream, to give a digit of the ciphertext stream. Since encryption of each digit is dependent on the current state of the cipher, it is also known as state cipher. In practice, a digit is typically a bit and the combining operation is an exclusive-or (XOR). Stream cipher: The pseudorandom keystream is typically generated serially from a random seed value using digital shift registers. The seed value serves as the cryptographic key for decrypting the ciphertext stream. Stream ciphers represent a different approach to symmetric encryption from block ciphers. Block ciphers operate on large blocks of digits with a fixed, unvarying transformation. This distinction is not always clear-cut: in some modes of operation, a block cipher primitive is used in such a way that it acts effectively as a stream cipher. Stream ciphers typically execute at a higher speed than block ciphers and have lower hardware complexity. However, stream ciphers can be susceptible to security breaches (see stream cipher attacks); for example, when the same starting state (seed) is used twice. Loose inspiration from the one-time pad: Stream ciphers can be viewed as approximating the action of a proven unbreakable cipher, the one-time pad (OTP). A one-time pad uses a keystream of completely random digits. The keystream is combined with the plaintext digits one at a time to form the ciphertext. This system was proved to be secure by Claude E. Shannon in 1949. However, the keystream must be generated completely at random with at least the same length as the plaintext and cannot be used more than once. This makes the system cumbersome to implement in many practical applications, and as a result the one-time pad has not been widely used, except for the most critical applications. Key generation, distribution and management are critical for those applications. Loose inspiration from the one-time pad: A stream cipher makes use of a much smaller and more convenient key such as 128 bits. Based on this key, it generates a pseudorandom keystream which can be combined with the plaintext digits in a similar fashion to the one-time pad. However, this comes at a cost. The keystream is now pseudorandom and so is not truly random. The proof of security associated with the one-time pad no longer holds. It is quite possible for a stream cipher to be completely insecure. Types: A stream cipher generates successive elements of the keystream based on an internal state. This state is updated in essentially two ways: if the state changes independently of the plaintext or ciphertext messages, the cipher is classified as a synchronous stream cipher. By contrast, self-synchronising stream ciphers update their state based on previous plaintext or ciphertext digits. A system that incorporates the plaintext into the key is also known as an autokey or autoclave cipher. Types: Synchronous stream ciphers In a synchronous stream cipher a stream of pseudorandom digits is generated independently of the plaintext and ciphertext messages, and then combined with the plaintext (to encrypt) or the ciphertext (to decrypt). In the most common form, binary digits are used (bits), and the keystream is combined with the plaintext using the exclusive or operation (XOR). 
This is termed a binary additive stream cipher. Types: In a synchronous stream cipher, the sender and receiver must be exactly in step for decryption to be successful. If digits are added or removed from the message during transmission, synchronisation is lost. To restore synchronisation, various offsets can be tried systematically to obtain the correct decryption. Another approach is to tag the ciphertext with markers at regular points in the output. Types: If, however, a digit is corrupted in transmission, rather than added or lost, only a single digit in the plaintext is affected and the error does not propagate to other parts of the message. This property is useful when the transmission error rate is high; however, it makes it less likely the error would be detected without further mechanisms. Moreover, because of this property, synchronous stream ciphers are very susceptible to active attacks: if an attacker can change a digit in the ciphertext, they might be able to make predictable changes to the corresponding plaintext bit; for example, flipping a bit in the ciphertext causes the same bit to be flipped in the plaintext. Types: Self-synchronizing stream ciphers Another approach uses several of the previous N ciphertext digits to compute the keystream. Such schemes are known as self-synchronizing stream ciphers, asynchronous stream ciphers or ciphertext autokey (CTAK). The idea of self-synchronization was patented in 1946 and has the advantage that the receiver will automatically synchronise with the keystream generator after receiving N ciphertext digits, making it easier to recover if digits are dropped or added to the message stream. Single-digit errors are limited in their effect, affecting only up to N plaintext digits. Types: An example of a self-synchronising stream cipher is a block cipher in cipher feedback (CFB) mode. Based on linear-feedback shift registers: Binary stream ciphers are often constructed using linear-feedback shift registers (LFSRs) because they can be easily implemented in hardware and can be readily analysed mathematically. The use of LFSRs on their own, however, is insufficient to provide good security. Various schemes have been proposed to increase the security of LFSRs. Non-linear combining functions Because LFSRs are inherently linear, one technique for removing the linearity is to feed the outputs of several parallel LFSRs into a non-linear Boolean function to form a combination generator. Various properties of such a combining function are critical for ensuring the security of the resultant scheme, for example, in order to avoid correlation attacks. Clock-controlled generators Normally LFSRs are stepped regularly. One approach to introducing non-linearity is to have the LFSR clocked irregularly, controlled by the output of a second LFSR. Such generators include the stop-and-go generator, the alternating step generator and the shrinking generator. Based on linear-feedback shift registers: An alternating step generator comprises three LFSRs, which we will call LFSR0, LFSR1 and LFSR2 for convenience. The output of one of the registers decides which of the other two is to be used; for instance, if LFSR2 outputs a 0, LFSR0 is clocked, and if it outputs a 1, LFSR1 is clocked instead. The output is the exclusive OR of the last bit produced by LFSR0 and LFSR1. The initial state of the three LFSRs is the key. Based on linear-feedback shift registers: The stop-and-go generator (Beth and Piper, 1984) consists of two LFSRs. 
One LFSR is clocked if the output of a second is a 1, otherwise it repeats its previous output. This output is then (in some versions) combined with the output of a third LFSR clocked at a regular rate. Based on linear-feedback shift registers: The shrinking generator takes a different approach. Two LFSRs are used, both clocked regularly. If the output of the first LFSR is 1, the output of the second LFSR becomes the output of the generator. If the first LFSR outputs 0, however, the output of the second is discarded, and no bit is output by the generator. This mechanism suffers from timing attacks on the second generator, since the speed of the output is variable in a manner that depends on the second generator's state. This can be alleviated by buffering the output. Based on linear-feedback shift registers: Filter generator Another approach to improving the security of an LFSR is to pass the entire state of a single LFSR into a non-linear filtering function. Other designs: Instead of a linear driving device, one may use a nonlinear update function. For example, Klimov and Shamir proposed triangular functions (T-functions) with a single cycle on n-bit words. Security: For a stream cipher to be secure, its keystream must have a large period, and it must be impossible to recover the cipher's key or internal state from the keystream. Cryptographers also demand that the keystream be free of even subtle biases that would let attackers distinguish a stream from random noise, and free of detectable relationships between keystreams that correspond to related keys or related cryptographic nonces. That should be true for all keys (there should be no weak keys), even if the attacker can know or choose some plaintext or ciphertext. Security: As with other attacks in cryptography, stream cipher attacks can be certificational so they are not necessarily practical ways to break the cipher but indicate that the cipher might have other weaknesses. Securely using a secure synchronous stream cipher requires that one never reuse the same keystream twice. That generally means a different nonce or key must be supplied to each invocation of the cipher. Application designers must also recognize that most stream ciphers provide not authenticity but privacy: encrypted messages may still have been modified in transit. Security: Short periods for stream ciphers have been a practical concern. For example, 64-bit block ciphers like DES can be used to generate a keystream in output feedback (OFB) mode. However, when not using full feedback, the resulting stream has a period of around 2^32 blocks on average; for many applications, the period is far too low. For example, if encryption is being performed at a rate of 8 megabytes per second, a stream of period 2^32 blocks will repeat after about a half an hour. Some applications using the stream cipher RC4 are attackable because of weaknesses in RC4's key setup routine; new applications should either avoid RC4 or make sure all keys are unique and ideally unrelated (such as generated by a well-seeded CSPRNG or a cryptographic hash function) and that the first bytes of the keystream are discarded. Security: The elements of stream ciphers are often much simpler to understand than block ciphers and are thus less likely to hide any accidental or malicious weaknesses. Usage: Stream ciphers are often used for their speed and simplicity of implementation in hardware, and in applications where plaintext comes in quantities of unknowable length like a secure wireless connection. 
If a block cipher (not operating in a stream cipher mode) were to be used in this type of application, the designer would need to choose either transmission efficiency or implementation complexity, since block ciphers cannot directly work on blocks shorter than their block size. For example, if a 128-bit block cipher received separate 32-bit bursts of plaintext, three quarters of the data transmitted would be padding. Block ciphers must be used in ciphertext stealing or residual block termination mode to avoid padding, while stream ciphers eliminate this issue by naturally operating on the smallest unit that can be transmitted (usually bytes). Usage: Another advantage of stream ciphers in military cryptography is that the cipher stream can be generated in a separate box that is subject to strict security measures and fed to other devices such as a radio set, which will perform the XOR operation as part of their function. The latter device can then be designed and used in less stringent environments. Usage: ChaCha is becoming the most widely used stream cipher in software; others include: RC4, A5/1, A5/2, Chameleon, FISH, Helix, ISAAC, MUGI, Panama, Phelix, Pike, Salsa20, SEAL, SOBER, SOBER-128, and WAKE. Trivia: United States National Security Agency documents sometimes use the term combiner-type algorithms, referring to algorithms that use some function to combine a pseudorandom number generator (PRNG) with a plaintext stream.
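To make the LFSR constructions described above concrete, here is a minimal Python sketch of an alternating step generator driving a binary additive stream cipher. Everything in it (tap positions, register lengths, and the example initial states standing in for the key) is an arbitrary illustration rather than any standardized cipher, and a construction this small is trivially breakable; it only shows the clocking rule (the third LFSR selects which of the other two steps, and the output is the XOR of their latest bits) and the XOR combination with the plaintext.

```python
class LFSR:
    """Toy Fibonacci LFSR: output is the low bit, feedback is the XOR of the taps."""
    def __init__(self, taps, state):
        self.taps = taps              # feedback tap positions (bit indices)
        self.state = state            # current register contents as an int
        self.nbits = max(taps) + 1    # register length for this toy example

    def step(self):
        out = self.state & 1
        fb = 0
        for t in self.taps:           # XOR of the tapped bits
            fb ^= (self.state >> t) & 1
        self.state = (self.state >> 1) | (fb << (self.nbits - 1))
        return out

def alternating_step(lfsr0, lfsr1, lfsr2, nbits):
    """Alternating step generator: lfsr2 decides which of lfsr0/lfsr1 is clocked;
    each output bit is the XOR of their most recent output bits."""
    out0 = out1 = 0
    bits = []
    for _ in range(nbits):
        if lfsr2.step() == 0:
            out0 = lfsr0.step()
        else:
            out1 = lfsr1.step()
        bits.append(out0 ^ out1)
    return bits

def xor_encrypt(data, keystream_bits):
    """Binary additive stream cipher: XOR each plaintext bit with a keystream bit."""
    out = bytearray()
    for i, byte in enumerate(data):
        ks = 0
        for j in range(8):
            ks |= keystream_bits[8 * i + j] << j
        out.append(byte ^ ks)
    return bytes(out)

# The initial states below play the role of the key (arbitrary example values).
def make_generators():
    return (LFSR([0, 2, 3, 5], 0b101101),
            LFSR([0, 1, 4, 6], 0b1100101),
            LFSR([0, 3], 0b1011))

msg = b"attack at dawn"
ks = alternating_step(*make_generators(), nbits=8 * len(msg))
ct = xor_encrypt(msg, ks)
# Decryption regenerates the same keystream from the same initial states.
pt = xor_encrypt(ct, alternating_step(*make_generators(), nbits=8 * len(msg)))
assert pt == msg
```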
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Strominger's equations** Strominger's equations: In heterotic string theory, Strominger's equations are the set of equations that are necessary and sufficient conditions for spacetime supersymmetry. They are derived by requiring the 4-dimensional spacetime to be maximally symmetric, and adding a warp factor on the internal 6-dimensional manifold. Consider a metric ω on the real 6-dimensional internal manifold Y and a Hermitian metric h on a vector bundle V. The equations are: The 4-dimensional spacetime is Minkowski, i.e., g = η. The internal manifold Y must be complex, i.e., the Nijenhuis tensor must vanish, N = 0. The Hermitian form ω on the complex threefold Y and the Hermitian metric h on the vector bundle V must satisfy an anomaly cancellation condition relating ∂∂̄ω to Tr F(h)∧F(h) − Tr R−(ω)∧R−(ω), together with d†ω = i(∂̄ − ∂) ln ||Ω||, where R− is the Hull curvature two-form of ω, F is the curvature of h, and Ω is the holomorphic n-form; F is also known in the physics literature as the Yang–Mills field strength. Li and Yau showed that the second condition is equivalent to ω being conformally balanced, i.e., d(||Ω||_ω ω²) = 0. The Yang–Mills field strength must satisfy ω^{ab̄} F_{ab̄} = 0 and F_{ab} = F_{āb̄} = 0. Strominger's equations: These equations imply the usual field equations, and thus are the only equations to be solved. Strominger's equations: However, there are topological obstructions to obtaining solutions to the equations: the second Chern class of the manifold and the second Chern class of the gauge field must be equal, i.e., c2(M) = c2(F), and a holomorphic n-form Ω must exist, i.e., h^{n,0} = 1 and c1 = 0. In case V is the tangent bundle TY and ω is Kähler, we can obtain a solution of these equations by taking the Calabi–Yau metric on Y and TY. Once the solutions of Strominger's equations are obtained, the warp factor Δ, the dilaton ϕ, and the background flux H are determined by Δ(y) = ϕ(y) + constant, ϕ(y) = (1/8) ln ||Ω|| + constant, and H = (i/2)(∂̄ − ∂)ω.
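For readability, the relations described above can be collected in LaTeX form. Conventions differ between references, in particular the normalization of the α′ factor in the anomaly term, so the block below should be read as one common presentation of the system rather than a canonical statement.

```latex
\begin{align}
  % Hermitian Yang-Mills condition on the gauge field strength
  \omega^{a\bar b} F_{a\bar b} &= 0, \qquad F_{ab} = F_{\bar a \bar b} = 0, \\
  % anomaly cancellation relating the two curvature traces (alpha' normalization varies)
  \sqrt{-1}\,\partial\bar\partial\omega
    &= \frac{\alpha'}{4}\left(\operatorname{Tr} R^{-}(\omega)\wedge R^{-}(\omega)
       - \operatorname{Tr} F(h)\wedge F(h)\right), \\
  % conformally balanced condition, with the equivalent form due to Li and Yau
  d^{\dagger}\omega &= \sqrt{-1}\,(\bar\partial - \partial)\ln\lVert\Omega\rVert
    \quad\Longleftrightarrow\quad
    d\!\left(\lVert\Omega\rVert_{\omega}\,\omega^{2}\right) = 0, \\
  % warp factor, dilaton and background flux
  \Delta(y) &= \phi(y) + \text{const}, \qquad
  \phi(y) = \tfrac{1}{8}\ln\lVert\Omega\rVert + \text{const}, \qquad
  H = \tfrac{\sqrt{-1}}{2}\,(\bar\partial - \partial)\,\omega.
\end{align}
```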
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ALX3** ALX3: The ALX3 gene, also known as aristaless-like homeobox 3, is a protein coding gene that provides instructions to build a protein which is a member of the homeobox protein family. This grouping regulates patterns of anatomical development. The gene encodes a nuclear protein that functions as a transcription regulator involved in cell-type differentiation and development. ALX3: The ALX3 protein, encoded by the gene, is a transcription factor, meaning that it binds to DNA and obtains control over the action of other genes. The ALX3 protein specifically controls genes that regulate cell growth, proliferation, and migration. This protein is essential for the development of the head and face, specifically the nose. This event begins around the fourth week of development. ALX3: At least 7 mutations in the ALX3 gene are known to cause frontonasal dysplasia. The mutations eliminate the function of the ALX3 protein, resulting in a decreased ability to bind to DNA. The loss of regulatory function results in uncontrolled cell proliferation and migration during fetal development. One particular form of the disorder, called frontonasal dysplasia type 1, presents with abnormal development of structures in the middle of the face. The most common malformation of this defect is a cleft in the nose, lip, and palate. ALX3's role in coat patterning was first discovered by a group of scientists, led by Hopi Hoekstra, a biologist from Harvard University, that investigated how stripe patterns form in animals. They investigated Rhabdomys pumilio, commonly known as the African striped mouse because of the alternating colored stripes observed on its back. One of the members of the team, Ricardo Mallarino, discovered that the stripes were formed during embryogenesis in the mice. Melanocytes, the specialized cells that produce the pigments in the skin, were not active in areas where the lighter stripes were observed. They then researched the genes active in those areas using RNA sequencing. They discovered that ALX3 was expressed in the light hair areas but not in the dark hair areas. They found that all mice expressed the gene on their abdomen but only the African striped mouse expressed it on its back, which is why the stripes appear. Protein–DNA binding assays were then performed to determine where the ALX3 protein binds on the DNA. ALX3 binds to the promoter of MITF and represses it; MITF normally allows transcription to take place when making melanocytes. More tests were performed to confirm the function of ALX3 within the African striped mice. The gene was observed in other rodents such as the North American chipmunks and deemed responsible for the similar outcomes. The differences in evolution amongst the species did not hinder the similarities in the expression of the gene. This led the team to believe that ALX3 may have the same effect in mammals. However, further studies must be completed to confirm that ALX3 is responsible for the same in other mammals.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Geordie lamp** Geordie lamp: The Geordie lamp was a safety lamp for use in flammable atmospheres, invented by George Stephenson in 1815 as a miner's lamp to prevent explosions due to firedamp in coal mines. Origin: In 1815, Stephenson was the engine-wright at the Killingworth Colliery in Northumberland and had been experimenting for several years with candles close to firedamp emissions in the mine. In August he ordered an oil lamp which was delivered on 21 October and tested by him in the mine in the presence of explosive gases. He improved this over several weeks with the addition of capillary tubes at the base so that it gave more light and tried new versions on 4 and 30 November. This was presented to the Literary and Philosophical Society of Newcastle upon Tyne (Lit & Phil) on 5 December 1815.Although controversy arose between Stephenson's design and the Davy lamp (invented by Humphry Davy in the same year), Stephenson's original design worked on significantly different principles from Davy's final design. If the lamp were sealed except for a restricted air ingress (and a suitably sized chimney) then the presence of dangerous amounts of firedamp in the incoming air would (by its combustion) reduce the oxygen concentration inside the lamp so much that the flame would be extinguished. Stephenson had convinced himself of the validity of this approach by his experiments with candles near lit blowers: as lit candles were placed upwind of the blower, the blower flame grew duller; with enough upwind candles the blower flame went out.To guard against the possibility of a flame travelling back through the incoming gases (an explosive backblast), air ingress was by a number of small-bore tubes through which the ingress air flowed at a higher velocity than the velocity of a flame fueled by a mixture of firedamp (mostly methane) and air. These ingress tubes were physically separate from the exhaust chimney. The body of the lamp was lengthened to give the flame a greater convective draw, and thus allow a greater inlet flow restriction and make the lamp less sensitive to air currents. The lamp itself was surrounded by glass which had an additional perforated metal tube surrounding it for protection. Davy had originally attempted a safety lamp on similar principles, before preferring to enclose the flame inside a brass gauze cylinder; he had publicly identified the importance of allowing the restricted airflow in through small orifices (in which the flame velocity is lower) before Stephenson had, and he and his adherents remained convinced that Stephenson had not made this discovery independently. Later on, Stephenson adopted Davy's gauze to surround the lamp (instead of the perforated metal tube) and the intake tubes were changed to holes or a gallery at the base of the lamp. It was this revised design that was used for most of the 19th century as the Geordie lamp. Origin: One advantage of Stephenson's initial design over Davy's was that if the proportion of firedamp became too high, his lamp would be extinguished, whereas Davy's lamp could become dangerously hot. This was illustrated in the Oaks colliery at Barnsley on 20 August 1857 where both types of lamp were in use.Stephenson's design also allowed better light output as it used glass to surround the flame, which cut out less of the light than Davy's, where the gauze surrounded it. But this also posed the danger of breakage in the harsh conditions of mineworking, a problem which was not resolved until the invention of safety glass. 
Origin: The Geordie lamp continued to be used in the north-east of England through most of the 19th century, until the introduction of electric lighting.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RNASEH2C** RNASEH2C: Ribonuclease H2 subunit C is a protein that in humans is encoded by the RNASEH2C gene. RNase H2 is composed of a single catalytic subunit (A) and two non-catalytic subunits (B and C), and degrades the RNA of RNA:DNA hybrids. Mutations in this gene are a cause of Aicardi-Goutieres syndrome type 3 (AGS3). Function: This gene encodes a ribonuclease H subunit that can cleave ribonucleotides from RNA:DNA duplexes. Mutations in this gene cause Aicardi-Goutieres syndrome-3, a disease that causes severe neurologic dysfunction. A pseudogene for this gene has been identified on chromosome Y, near the sex determining region Y (SRY) gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phospho soda** Phospho soda: Phospho soda was an over the counter saline laxative produced by the C.B. Fleet Company in Lynchburg, Va. Phospho soda consisted mostly of monobasic sodium phosphate monohydrate and dibasic sodium phosphate heptahydrate. Phospho soda is often taken in a double dose (the usual 45ml dose, followed by a second 45ml dose 6 hours later), to prepare for colonoscopy. It is still used outside the US. Phospho soda: An amount of Phospho soda (normally 1.5 fluid ounce or 45 ml) is usually mixed with water or other clear liquids such as ginger ale. This preparation usually results in a bowel movement anywhere from 30 minutes to 6 hours after it is taken. Phospho soda is also available in various flavors to make it more palatable. Safety issues: The use of Phospho soda has been known to lead to acute phosphate nephropathy. According to the U.S. Food and Drug Administration (FDA), "Acute phosphate nephropathy is a form of acute kidney injury that is associated with deposits of calcium-phosphate crystals in the renal tubules that may result in permanent renal function impairment. Acute phosphate nephropathy is a rare, serious adverse event that has been associated with the use of OSPs [oral sodium phosphates]. The occurrence of these events was previously described in an Information for Healthcare Professionals sheet and an FDA Science Paper issued in May 2006. Additional cases of acute phosphate nephropathy have been reported to FDA and described in the literature since these were issued."Fleet’s Phospho-soda products have been linked to kidney damage since the 1990s. Safety issues: On December 11, 2008, the FDA issued a Safety Alert stating that "FDA has become aware of reports of acute phosphate nephropathy, a type of acute kidney injury, associated with the use of oral sodium phosphate products (OSP) for bowel cleansing prior to colonoscopy or other procedures. These products include the prescription products, Visicol and OsmoPrep, and OSPs available over-the-counter without a prescription as laxatives (e.g., Fleet Phospho-soda). In some cases when used for bowel cleansing, these serious adverse events have occurred in patients without identifiable factors that would put them at risk for developing acute kidney injury ... The agency is equally concerned about the risks associated with the use of OSP products that are available over-the-counter (OTC), for example, Fleet Phospho-soda, when used at higher doses for bowel cleansing." Recall: Following FDA's Alert, C.B. Fleet recalled its Fleet Phospho-Soda Products. Use as a laxative: Phospho soda can be used for a general laxative, but is not recommended. The dosage then is best cut in half and used only once instead of twice. Phospho soda works by drawing liquid from the body into the colon, therefore it can cause severe dehydration, especially if not used properly. Usage in this context is highly recommended to be performed only with a doctor's knowledge and consent. Use as preparation for colonoscopy: When Phospho soda is used as preparation for colonoscopy, 1.5 fluid ounces (45ml), mixed with an equal amount of water or any clear liquid and followed by 8 oz of water, is taken, followed by a second dose 6 hours later (3 oz total). It will cause very loose, eventually watery stools, usually starting within an hour or so and lasting several hours. 
Use as preparation for colonoscopy: A 2007 study showed that in patients with decreased renal function, Phospho soda may worsen renal impairment compared to polyethylene glycol-based laxatives. In patients without kidney problems, no difference was observed. Litigation: Since late 2004, there has been a great deal of litigation in both state and federal courts alleging renal injury following the use of Fleet Phospho-Soda. On June 23, 2009, the United States Judicial Panel on Multi District Litigation consolidated all federal Oral Sodium Phosphate Solution lawsuits to the Northern District of Ohio, before the Honorable Ann Aldrich. Additional documents from Phospho-Soda lawsuits have been published at the website DangerousDrugs.us.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Long-range penetration** Long-range penetration: A long-range penetration patrol, group, or force is a special operations unit capable of operating long distances behind enemy lines far away from direct contact with friendly forces as opposed to a Long Range Reconnaissance Patrol, a small group primarily engaged in scouting missions. History: Though the concept of long range penetration is as old as war itself, in the modern era it is recognized as starting with Major Ralph Alger Bagnold with his 1940 Long Range Desert Group (LRDG) in the Western Desert. The LRDG carried out operations of reconnaissance and sabotage far behind the enemy's lines in the Libyan Desert. Bagnold was an experienced desert explorer who had his LRDG trained in desert driving, navigation through using the sun and stars as well as a compass, and knowing their territory. They were supplied by all the equipment that their trucks could carry. History: In 1942, several British Special Operations Executive (SOE) personnel who had escaped from Singapore to Australia, formed the Allied Services Reconnaissance Department (SRD) for special operations in the South West Pacific theatre. Z Special Unit ("Z Force") was organised under its auspices to conduct commando-style operations behind Japanese lines. The long distances to all potential targets made unconventional, long-range penetration tactics a requirement for Z Force. It recruited Australian, British, New Zealand and Dutch East Indies personnel and, later, amongst indigenous resistance fighters. In Operation Jaywick (September 1943), a detachment led by Captain Ivan Lyon travelled on a small Indonesian fishing boat, from Australia to the vicinity of Singapore, where folding kayaks were used to approach ships and attach limpet mines. These sank or seriously damaged 39,000 tons of shipping, as the raiders returned to Australia. In September 1944, Lyon led a second raid on Singapore, Operation Rimau, which resulted in the deaths of the entire raiding force. During 1943–45, other Z Force operatives conducted intelligence gathering and guerilla operations throughout the Southwest Pacific, including preparations for Allied landings in the Philippines and Borneo campaign.Brigadier Orde Wingate, a professional soldier famous for his unconventional behavior and ideas, had created and led guerrilla units in Palestine and Ethiopia, before being transferred, in 1942, to the South East Asian theatre. Wingate had ideas of deep penetration operations that could be made possible through improvements in the range of communication devices and airborne supply by long range aircraft. At the Quebec Conference in 1943, Wingate explained his ideas to Winston Churchill, Franklin D. Roosevelt, and many other leaders. Wingate proposed creating strongholds in enemy territory that would be supplied by air and be as effective against the enemy as conventional troops. Wingate was given command of the 77th Indian Infantry Brigade that acquired the name of Chindit from a suggestion by Captain Aung Thin of the Burma Rifles. The name was a corruption of the mythical beast that guards Buddhist temples called 'Chinthé' or 'Chinthay'. The unit was supported by the United States Army Air Forces 1st Air Commando Group and carried out two major operations. The first was entering Burma on a 200-mile mission in February 1943 with 3,000 troops, with mules and some elephants for the carrying of supplies. 
Wingate thought the operation a success, but Field Marshal William Slim thought the operation a failure. In 1943, General Joseph Stilwell requested the deployment of US Army special forces to support the Chinese Army regular forces under his command. General George Marshall authorised a "Long Range Penetration Force", recruited from US Army troops trained in jungle warfare in Panama and the continental United States, as well as personnel with recent combat experience in the Solomon Islands and New Guinea campaigns. The unit was formally named the 5307th Composite Unit (Provisional), but became famous as "Merrill's Marauders"; it carried out operations in Burma in 1944. Post World War II: After World War II, long range penetration operations were primarily conducted by small units often varying in size from five to thirty men. Sabotage, surveillance, and seizure of strategic locations were the primary objectives, carried out deep behind enemy lines. Most notable are the British Special Air Service (SAS), the Israeli Sayeret Matkal, the Australian Special Air Service Regiment, the New Zealand Special Air Service (NZSAS), the Rhodesian Special Air Service, the South African 32nd Battalion's operations after the Angolan Civil War, and the Sri Lanka Army's long range penetration operations during the Sri Lankan Civil War. Vietnam War: In April 1968 members of the 2nd Platoon, Company E, 52nd Infantry, 1st Air Cavalry Division, Long Range Reconnaissance Patrol (LRP), commanded by Captain Michael Gooding and Lieutenant Joseph Dilger, conducted one of the most daring long-range penetration operations of the Vietnam War when they seized the strategic 4,879-foot mountain peak of Dong Re Lao Mountain, dubbed "Signal Hill" by headquarters during Operation Delaware. Signal Hill was deep in enemy territory in the heavily fortified A Shau Valley bordering Laos. After intense fighting against troops of the North Vietnamese Army, the mountaintop was secured, providing a vital communications relay site and fire support base for massive air assault operations to proceed in the valley by the 1st and 3rd Brigades, 1st Air Cavalry Division. Since satellite communications were a thing of the future, those brigades, hidden deep behind the towering wall of mountains, would have been unable to communicate with headquarters near the coast at Camp Evans or with approaching aircraft.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Christocentric** Christocentric: Christocentric is a doctrinal term within Christianity, describing theological positions that focus on Jesus Christ, the second person of the Christian Trinity, in relation to the Godhead/God the Father (theocentric) or the Holy Spirit (pneumocentric). Christocentric theologies make Christ the central theme about which all other theological positions/doctrines are oriented. Augustinism: Certain theological traditions within the Christian Church can be described as more heavily Christocentric. Notably, the teachings of Augustine of Hippo and Paul of Tarsus, which have been very influential in the West, place a great emphasis on the person of Jesus in the process of salvation. Augustinism: For instance, in Reformation theology, the Lutheran tradition is seen as more theologically Christocentric, as it places its doctrine of justification by grace, which is primarily a Christological doctrine, at the center of its thought. Meanwhile, the Calvinist tradition is seen as more theologically theocentric, as it places its doctrine of the sovereignty of God ("the Father") at the center. John Duns Scotus: Scotus is famous for his belief in the Absolute Primacy of Christ, whereby Christ would have become incarnate even had the Fall never taken place. Scotus writes "that God predestined this soul [of Christ] to so great a glory does not seem to be only on account of [redemption], since the redemption or the glory of the soul to be redeemed is not comparable to the glory of Christ’s soul. Neither is it likely that the highest good in creation is something that was merely occasioned only because of some lesser good; nor is it likely that He predestined Adam to such good before He predestined Christ; and yet this would follow [were the Incarnation occasioned by Adam’s sin]. In fact, if the predestination of Christ’s soul was for the sole purpose of redeeming others, something even more absurd would follow, namely, that in predestining Adam to glory, He would have foreseen him as having fallen into sin before He predestined Christ to glory". As such, Scotus' theology is grounded in the claim that Creation exists for the sake of Christ, regardless of whether any individual chooses to sin. John Paul II: John Paul II's magisterium has been called Christocentric by Catholic theologians. He further taught that the Marian devotions of the Rosary were in fact Christocentric because they brought the faithful to Jesus through Mary. Biblical hermeneutics: The christocentric principle is also commonly used for biblical hermeneutics. Interfaith and ecumenism: Christocentrism is also a name given to a particular approach in interfaith and ecumenical dialogue. It teaches that Christianity is absolutely true, but the elements of truth in other religions are always in relation to the fullness of truth found in Christianity. The Holy Spirit is thought to allow inter-religious dialogue and to influence non-believers in their journey to Christ. This view is notably advocated by the Catholic Church in the declarations Nostra aetate, Unitatis Redintegratio and Dominus Iesus.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Xynth** Xynth: Xynth is an embedded windowing system, released under the LGPL and developed for systems with low resources, that serves as an alternative to the X Window System. The goal of the project is to release a lightweight but portable and powerful window environment. The source language is C. A fork of the project exists as XFast. Architecture: Xynth acts as an interface between the hardware and the desktop environment, and runs on a wide range of hardware, including embedded devices. Features: UDS (Unix Domain Sockets) for IPC; DMA (Direct Memory Access) for each client window surface; overlapped client window and server management; 8-way move and resize; runtime theme plugin support; built-in image renderers for XPM and PNG; antialiased fonts with the FreeType library; no dependencies except FBDev or SVGALib; a device-independent basic low-level graphics library; overlay drawing ability; anti-flicker double-buffered rendering; keyboard, mouse, and touchscreen drivers; remote desktop support; a built-in window manager; and low memory and CPU usage and a small footprint. In 1024x768x32-bit mode with 253 clients open, memory usage is ~2.5 MB; a statically linked binary is ~125 KB in size.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Postural drainage** Postural drainage: Postural drainage (PD) is the drainage of lung secretions using gravity. It is used to treat a variety of conditions that cause the build-up of secretions in the lungs. Uses: Postural drainage is used to treat any condition that causes the build-up of secretions in bronchopulmonary segments. These include bronchiectasis, lung abscesses, cystic fibrosis, atelectasis, chronic obstructive pulmonary disease (COPD), pneumonia, postoperative lung damage (after some thoracic surgery), and COVID-19. Patients must receive physiotherapy to learn to tip themselves into a position in which the lobe can be drained. Contraindications: Postural drainage is often not suitable for infants in the neonatal intensive care unit, who may have a lot of equipment attached to them. Postural drainage is more difficult if patients experience poor mobility, poor posture, pain, anxiety, and skin damage, usually requiring adaptations to the technique. The Trendelenburg position, which is a head-down position, is relatively contraindicated in patients who have uncontrolled hypertension, orthopnea, recent gross hemoptysis, or an intracranial pressure of more than 20 mm Hg. Precautions should be taken with patients who have rib fractures, osteoporosis, bronchospasm, or recent transplants. Risks: Postural drainage is considered safe and effective, but may cause some side effects. The procedure is discontinued if the patient complains of headache, discomfort, dizziness, palpitations, fatigue, or dyspnea. Patients may be dyspneic after the various maneuvers, since the head-down position increases the work of breathing, reduces tidal volume, and decreases functional residual capacity (FRC). Technique: In postural drainage, the patient's body is positioned so that the trachea is inclined downward and below the affected chest area. The body is positioned so that secretions drain into sequentially larger bronchi. Frames, tilt tables, and pillows may be used to support patients in these positions. Up to 12 postures may be used. Patients may need time to adapt to certain postures. Postural drainage is done at least three times daily for up to 60 minutes, with 30 minutes being common. It can be done at night to reduce coughing during the night (although PD should be avoided after meals), or in the morning to clear secretions accumulated during the night. Bronchodilators can be used 15 minutes before PD is done to maximise its benefits. The most affected area is drained first to prevent infected secretions spilling into healthy lung. Drainage time varies, but each position requires 10 minutes. If an entire hemithorax is involved, each lobe has to be drained individually, but a maximum of three positions per session is considered sufficient. Use with other physiotherapies: Postural drainage is often used in conjunction with a technique for loosening secretions in the chest cavity such as chest percussion. Chest percussion is performed by clapping the back or chest with a cupped hand. Bronchodilator medications may also be used before postural drainage to improve its effectiveness. Alternatively, a mechanical vibrator may be used in some cases to facilitate loosening of secretions. There are drainage positions for all segments of the lung. These positions can be modified depending on the patient's condition. Use with other physiotherapies: Postural drainage may be followed by breathing exercises to help expel loosened secretions from the airway, and coughing exercises to expel secretions.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nature therapy** Nature therapy: Nature therapy, sometimes referred to as ecotherapy, forest therapy, forest bathing, grounding, earthing, Shinrin-Yoku or Sami Lok, is a practice that describes a broad group of techniques or treatments using nature to improve mental or physical health. Spending time in nature has various physiological benefits such as relaxation and stress reduction. Additionally, it can enhance cardiovascular health and lower blood pressure. History: In the 6th century BCE, Cyrus the Great planted a garden in the middle of a city to increase human health. In the 16th century CE, Paracelsus wrote: "The art of healing comes from nature, not from the physician." Scientists in the 1950s looked into why people chose to spend time in nature. The term Shinrin-yoku (森林浴) or forest bathing was coined by the head of the Japanese Ministry of Agriculture, Forestry, and Fisheries, Tomohide Akiyama, in 1982 to encourage more visitors to forests. Health effects: Mood: 120 minutes in nature weekly could improve health and well-being. As little as five minutes in a natural setting improves mood, self-esteem, and motivation. Nature therapy has a benefit in reducing stress and improving a person's mood. People exposed to nature are also more cooperative and pleasant compared to those who are not. Forest therapy has been linked to some physiological benefits as indicated by neuroimaging and the Profile of Mood States psychological test. Horticulture therapy has been linked to general well-being by boosting positive mood and providing an escape from daily life stressors. Health effects: Stress and depression: Interaction with nature can decrease stress and depression. Forest therapy might help stress management for all age groups. Social horticulture could help with depression and other mental health problems in people affected by PTSD or abuse, lonely elderly people, drug or alcohol addicts, blind people, and other people with special needs. Nature therapy could also improve self-management, self-esteem, social relations and skills, socio-political awareness and employability. Nature therapy could reduce aggression and improve relationship skills. This is especially true due to the mental health damage COVID-19 brought. Nature therapy had significant results when it came to reducing stress, anxiety, and depression influenced by COVID-19. Health effects: Other possible benefits: Nature therapy could help with general medical recovery, pain reduction, Attention Deficit/Hyperactivity Disorder, dementia, obesity, and vitamin D deficiency. Interactions with natural environments enhance social connections, stewardship, and sense of place, and increase environmental participation. Connecting with nature also addresses needs such as intellectual capacity, emotional bonding, creativity, and imagination. Overall, there seem to be benefits to time spent in nature, including improvements in memory, cognitive flexibility, and attention control. Research also suggests that childhood experiences in nature are crucial for children in their daily lives as they contribute to several developmental outcomes and various domains of their well-being. Essentially, these experiences also foster an intrinsic care for nature. Criticism: A 2012 systematic review study showed inconclusive results related to the methodology used in studies. Spending time in forests demonstrated positive health effects, but not enough to generate clinical practice guidelines or demonstrate causality. 
Additionally, there are concerns from researchers expressing that time spent in nature as a form of regenerative therapy is highly personal and entirely unpredictable. Nature can be harmed in the process of human interaction. Criticism: Grounding Grounding, or earthing, is a pseudoscientific practice that involves people grounding themselves using devices by touching the earth or removing shoes. People who ground themselves believe that they have been exposed to high levels of electromagnetic radiation. Possible changes in mood could be due to a placebo effect. Governmental support: In Finland, researchers recommend five hours a month in nature to reduce depression, alcoholism, and suicide. Forest therapy has state-backing in Japan. South Korea has a nature therapy program for firefighters with post-traumatic stress disorder. Canadian physicians can also "prescribe nature" to patients with mental and physical health problems encouraging them to get into nature more.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kleptotrichy** Kleptotrichy: Kleptotrichy is the stealing of mammal hair by birds for use in their nests. The phenomenon was first formally described in a journal article published in July 2021; scientists studied it largely through videos posted to YouTube.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interactive children's book** Interactive children's book: Interactive children's books are a subset of children's books that require participation and interaction by the reader. Participation can range from books with texture to those with special devices used to help teach children certain tools. Interactive children's books may also incorporate modern technology or be computerized books. Movable books, a subsection of interactive books, are defined as "covering pop-ups, transformations, tunnel books, volvelles, flaps, pull-tabs, pop-outs, pull-downs, and more, each of which performs in a different manner. Also included, because they employ the same techniques, are three-dimensional greeting cards." Volvelles: The earliest forms of interactive books are thought to be volvelles, a type of movable book with a wheel, which at the time were used to help display astrological and geographical maps. Volvelles were a type of early paper calculator, built from layered circles of paper tied together with a string so that they could spin. Coloring books: The coloring book promotes motor skills, development, and eye-hand coordination in early childhood. Gamebooks: Gamebooks are much like traditional books but require the reader to make decisions throughout the book that affect the outcome of the story. At each decision point, the reader is instructed to go to a particular page and/or paragraph to continue the story. The first gamebook debuted in 1941. The format was especially popular in the 1980s. Hidden object and picture books: Hidden object picture books engage readers of all ages by camouflaging items with the intention of children eventually finding them. Whether the hidden object is a hard-to-spot character or an item specified by the author in a rhyming list depends on the book or the series of books it belongs to. Although it is not standard, these types of interactive children's books are sometimes published with a common theme such as Christmas or life on the farm. Children can interactively experience a select number of these books as early as age four and beginning at a pre-kindergarten grade level, depending on how easily the hidden objects can be located. Several notable authors and illustrators are at the forefront of developing their audiences' interactive reading skills through hidden object picture books: Martin Handford Where's Wally? British illustrator Martin Handford is credited with the conception of the Where's Wally? series. Despite the series' original title, his hidden picture books are more recognizable under the North American franchise's version of the character, Waldo. The purpose of Handford's hidden object picture books is for children of all ages to identify Wally in a specified location throughout his "world-wide hike." Although various activities and outfit similarities easily camouflage the character's whereabouts, Wally always wears glasses and carries a walking stick and is famous for his outfit of a red and white horizontally striped shirt, blue trousers and a bobble hat. Hidden object and picture books: The first book in Handford's series, originally titled Where's Wally?, was published in 1987. The book was soon followed by the release of Where's Wally Now? (1988) and Where's Wally?: The Fantastic Journey (1989). The books became extremely popular and were translated into many languages. 
The trademark of Wally was adopted in 28 countries and the character is often given a different name and personality in the translations. Hidden object and picture books: As more books were released the cast of characters grew as well - including Wizard Whitebeard, Wilma, Wenda, Woof, Odlaw and the Waldo Watchers. More Waldo books followed - such as Where's Waldo in Hollywood?, Where's Waldo?: The Wonder Book (1997), Where's Waldo?: The Great Picture Hunt (2006). Waldo became a huge pop culture sensation in the early 1990s. The United States, in particular, was swept with "Waldo-mania". Aside from the adaptations of Handford's books, the franchises grew to include licensing of Waldo for video games, spin-off books, magazines, dolls, toys, comics and a Where's Waldo? (TV series). Wally has his own website where he dispatches messages to fans and invites them to join in on the chase through different social networks. Jean Marzollo and Walter Wick I Spy I Spy is another interactive children's book series that can be categorized as a hidden object picture book. Debuting in 1992, the books consist of texts written by Jean Marzollo regarding items hidden within the photographs captured by Walter Wick. Wick's photographs are set up in a cluttered assortment of items or to imitate a particular scene, like the toy shop window in I Spy: Christmas (1992). Below the picture, Marzollo involves readers with a riddle asking them to locate specific items within Wick's photograph. Wick's photographs are highly regarded for their expressive quality. The series originated with I Spy: A Book of Picture Riddles (1992) and grew to include I Spy: Christmas (1992), I Spy: Fun House (1993), I Spy: Mystery (1993), I Spy: Fantasy (1994), I Spy: School Days (1995), I Spy: Spooky Night (1996), and I Spy: Treasure Hunt (1999). A subsequent and more challenging series was begun in 1997 with I Spy: Super Challenger! (1997) and was continued with other installments such as I Spy: Gold Challenger! (1998), I Spy: Extreme Challenger! (2000), I Spy: Year-Round Challenger! (2001), and I Spy: Ultimate Challenger! (2003). The I Spy label has grown to include video games based on the books such as I Spy Spooky Mansion, I Spy Treasure Hunt and I Spy Fantasy. Hidden object and picture books: The franchise also includes Ultimate I Spy, an I Spy game for the Wii. I Spy: Fun House is being developed into a Nintendo DS game. The player is trapped in the actual funhouse and must find nine items to escape.Walter Wick is also the author of his own hidden object series, similar to I Spy, called Can You See What I See?. These books feature photographs and poems that require readers to find objects in the picture. The puzzles are slightly easier than those of the I Spy books. Other hidden object books: Martin Handford, Jean Marzollo and Walter Wick are not the only three authors of hidden object picture books. However, they are the most established and recognized in the publishing world. Another author worth mentioning is Gillian Doherty. She is a published author and editor of children's books. Her hidden object picture books include 1001 Monster Things to Spot, 1001 Things to Spot, 1001 Wizard Things to Spot, and 1001 Things to Spot. Touch and feel books: In 1940, American writer Dorothy Kunhardt published Pat the Bunny, the first touch-and-feel book. 
It invites the reader to perform tactile texture-related tasks and imagine them in context, such as patting a cottontail, feeling stubble (sandpaper), and gazing into a mirror. Many interactive books are made specifically for children. Touch-and-feel books, or texturized books, fall in this area. The prime age for touch-and-feel books is from toddler age to preschool. Because these books are aimed specifically at helping children develop knowledge while increasing the use of their senses, the appeal is lost to older generations who more than likely already possess the skills being taught. One of the key advantages to teaching senses and vocabulary through the use of touch and feel books is the connection a child can gain by instantly being rewarded with the texture that the word describes. In recent years, touch-and-feel books have gone to a new level with the creation of fun new ways for younger children to interact with books, such as musical "bath books" and "finger puppet books". Most, but not all, of these books are also "board books", which are made entirely out of hard pages. Making the pages out of a hard material provides durability, allowing them to withstand whatever they and their young readers may encounter. Bath books can be taken into the tub because of their floatable and waterproof pages. A few of the top touch-and-feel book publishers are Dorling Kindersley, Usborne, Macmillan, and Lamaze. Touch and feel books: Examples of touch and feel books Pat the Bunny by Dorothy Kunhardt Usborne's "That's Not My..." series Dorling Kindersley's Touch and Feel series The Rainbow Fish, by Marcus Pfister Macmillan's Cloth Book Series by Roger Priddy "Little" Finger Puppet Book series, Chronicle Books and Imagebooks Staff, e.g. Little Puppy: Finger Puppet Book Pop-up books: Children's pop-up books are a form of interactive literature in which, upon turning the page, an image literally "pops up". These books provide 3-D illustrations made of unfolding paper that allow the child to feel as if the book is coming to life. First created in the mid-thirteenth century, they were originally not intended for children until publisher Robert Sayer created Harlequinade in 1765. With his creation of a "lift-the-flap" book, he gave children a way to truly become involved with what they are reading. From then on, several other authors, such as William Grimaldi, designed their own versions of the pop-up book, depicting elaborate scenes from page to page that allowed the reader to determine the outcome of the story. Pop-up books: Pop-up books have evolved from a seemingly simplistic idea to one of greater sophistication and complexity. They have grown to be a genre that delights, intrigues, and educates children of all ages. One key person in the pop-up book phenomenon is Waldo Hunt, who was the first to develop these books in the United States. He found joy and creativity in the idea of creating a pop-up image in a book. He was the true advocate and mastermind behind their conception and popularity. Pop-up books: Pop-up books are common enough to be found in your local library, bookstore, classroom, or likely even your own bookshelf. There are several examples of these kinds of books, but some famous ones to check out are: Christmas in New York by Chuck Fischer, Star Wars: A Pop-Up Guide to the Galaxy by Matthew Reinhart, and The Amazing Pop-Up Geography Book by Kate Petty. 
Pop-up books can range from very simplistic 3-D illustrations to more intricate and detailed presentations depending upon the topic of the book, its author and illustrator, as well as the age of the audience they are appealing to. Pop-up books: It wasn't until the late nineteenth century, partly due to the invention of industrial printing, that pop-ups were created. The first pop-up books published in America were those in the Showman Series published by the McLoughlin Brothers. Still, these books were too expensive and fragile to be practical as children's books. Pop-ups opened the door for the creation of many other types of interactive books for both children and adults. Despite a brief decline in production during the mid-twentieth century, it was a new idea that spawned quickly and eventually became the highly technological and advanced world of books that it is today. Digitized learning books: Many children's interactive books have been enhanced through the use of technology. The earliest examples of this were books that had sound effects- a bar on the side of the book that had buttons corresponding with pictures in the story. When the icon appeared in the story, the reader could press a button on the side to hear the sound effect. These are called “sound books.” Books that had accompanying cassette tapes (or even CDs), usually known as books on tape, are another early example of this. Digitized learning books: Once computers became more prevalent, CD-ROM versions of books became popular. These were programs that put books on the computer screen, enabling children to click their way through various words and pictures in the story and have it come alive. The technology was fairly limited, however, and not widespread as only children with access to a PC (and the knowledge to use it) could take advantage of it.The next big step in this technology was Leap Frog's Leap Pad. The Leap Pad makes regular books interactive by enabling children to hear a word aloud, have the story read to them, have words and sounds spelled for them, play interactive learning games on many pages and more, simply by touching the included digital “pen” to different places on the page. The system is divided into “Leap Levels” for different-aged children and includes everything from picture books to chapter books, with separate Leap Pads corresponding to each level. There is also a unit that allows new content to be downloaded from Leap Frog's Web site. The technology of the Leap Pad continued evolving, and Leap Frog next came out with the Tag (LeapFrog). Instead of a Pad unit where books must be inserted, the Tag system is essentially a “pen” onto which books can be downloaded. Then, the pen can be scanned across the corresponding book to read it aloud, unlock activities and more. Digitized learning books: The goal for these products is to help children get more out of their books and learn to read, according to Leap Frog. Leap Frog even has its own publishing company, Leap Frog Press, which creates books specifically designed for its system. The products are not cheap, though- the Leap Pad can cost as much as $80. The Tag is usually sold in gift packs that run anywhere from $20 to $75. Books for each are sold separately and typically cost $12 or more. Digitized learning books: Of course, Leap Frog is not the only company with products like the Leap Pad or Tag system that use technology to enhance the reading experience for children. 
However, it was one of the first, and now several companies have copied the idea and made similar products. Digitized learning books: The newest advance in interactive children's books reflects the recent popularity of Amazon's Kindle. There is now a plethora of e-book sites that place children's picture books, along with LeapFrog-like sound effects and word pronunciation, completely online – often for free. Some will actually read an entire story aloud. These "virtual libraries" have done a lot to both preserve books and make them more available. Digitized learning books: Here are a few examples of some interactive e-book sites for children: Magic Keys Books, Raz-Kids Books, Tumble Books. Even older classic books are moving online to keep up with the times.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Video+** Video+: Video+ (or Video+ Player on Google Play) is a video player and downloader that is developed and operated by LEO Network. The developer describes it as a video "hunter" or "seeker" for exploring one's interests and discovering the neighborhood. Description: Video+ works as a sniffer to allow users to discover video collections from nearby people. Its featured function is location-based media sharing and discovery. Location-based service (LBS) was first applied by Foursquare, the biggest and fastest-growing location-based social network, and has been popular and widely used in mobile apps since 2009. Video+ has no registration requirement for the first login; users can share their video list under the "Share" function and discover other users' lists under "Nearby". In addition, users may find common-interest groups. Supported Formats: mkv, avi, flv, rm, rmvb, asf, asx, mov, mpe, ts, vob, wmv, f4v, vp, mpeg, mpg, m4v, mp4, 3gp, 3gpp, 3g2, 3gpp2 Technologies: According to Sem, one of LEO's main contributing developers, Video+ was implemented with the self-developed "Air-Link Multi-connectional Wireless Transmission" technology, which combines Wi-Fi with Bluetooth and requires no internet connection during the downloading process.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Adipose tissue** Adipose tissue: Adipose tissue, body fat, or simply fat is a loose connective tissue composed mostly of adipocytes. In addition to adipocytes, adipose tissue contains the stromal vascular fraction (SVF) of cells including preadipocytes, fibroblasts, vascular endothelial cells and a variety of immune cells such as adipose tissue macrophages. Adipose tissue is derived from preadipocytes. Its main role is to store energy in the form of lipids, although it also cushions and insulates the body. Far from being hormonally inert, adipose tissue has, in recent years, been recognized as a major endocrine organ, as it produces hormones such as leptin, estrogen, resistin, and cytokines (especially TNFα). In obesity, adipose tissue is also implicated in the chronic release of pro-inflammatory markers known as adipokines, which are responsible for the development of metabolic syndrome, a constellation of diseases, including type 2 diabetes, cardiovascular disease and atherosclerosis. The two types of adipose tissue are white adipose tissue (WAT), which stores energy, and brown adipose tissue (BAT), which generates body heat. The formation of adipose tissue appears to be controlled in part by the adipose gene. Adipose tissue – more specifically brown adipose tissue – was first identified by the Swiss naturalist Conrad Gessner in 1551. Anatomical features: In humans, adipose tissue is located: beneath the skin (subcutaneous fat), around internal organs (visceral fat), in bone marrow (yellow bone marrow), intermuscular (Muscular system) and in the breast (breast tissue). Adipose tissue is found in specific locations, which are referred to as adipose depots. Apart from adipocytes, which comprise the highest percentage of cells within adipose tissue, other cell types are present, collectively termed stromal vascular fraction (SVF) of cells. SVF includes preadipocytes, fibroblasts, adipose tissue macrophages, and endothelial cells. Anatomical features: Adipose tissue contains many small blood vessels. In the integumentary system, which includes the skin, it accumulates in the deepest level, the subcutaneous layer, providing insulation from heat and cold. Around organs, it provides protective padding. However, its main function is to be a reserve of lipids, which can be oxidised to meet the energy needs of the body and to protect it from excess glucose by storing triglycerides produced by the liver from sugars, although some evidence suggests that most lipid synthesis from carbohydrates occurs in the adipose tissue itself. Adipose depots in different parts of the body have different biochemical profiles. Under normal conditions, it provides feedback for hunger and diet to the brain. Anatomical features: Mice Mice have eight major adipose depots, four of which are within the abdominal cavity. The paired gonadal depots are attached to the uterus and ovaries in females and the epididymis and testes in males; the paired retroperitoneal depots are found along the dorsal wall of the abdomen, surrounding the kidney, and, when massive, extend into the pelvis. The mesenteric depot forms a glue-like web that supports the intestines and the omental depot (which originates near the stomach and spleen) and - when massive - extends into the ventral abdomen. Both the mesenteric and omental depots incorporate much lymphoid tissue as lymph nodes and milky spots, respectively. 
Anatomical features: The two superficial depots are the paired inguinal depots, which are found anterior to the upper segment of the hind limbs (underneath the skin) and the subscapular depots, paired medial mixtures of brown adipose tissue adjacent to regions of white adipose tissue, which are found under the skin between the dorsal crests of the scapulae. The layer of brown adipose tissue in this depot is often covered by a "frosting" of white adipose tissue; sometimes these two types of fat (brown and white) are hard to distinguish. The inguinal depots enclose the inguinal group of lymph nodes. Minor depots include the pericardial, which surrounds the heart, and the paired popliteal depots, between the major muscles behind the knees, each containing one large lymph node. Of all the depots in the mouse, the gonadal depots are the largest and the most easily dissected, comprising about 30% of dissectible fat. Anatomical features: Obesity In an obese person, excess adipose tissue hanging downward from the abdomen is referred to as a panniculus. A panniculus complicates surgery of the morbidly obese individual. It may remain as a literal "apron of skin" if a severely obese person loses large amounts of fat (a common result of gastric bypass surgery). Obesity is treated through exercise, diet, and behavioral therapy. Reconstructive surgery is one aspect of treatment. Anatomical features: Visceral fat Visceral fat or abdominal fat (also known as organ fat or intra-abdominal fat) is located inside the abdominal cavity, packed between the organs (stomach, liver, intestines, kidneys, etc.). Visceral fat is different from subcutaneous fat underneath the skin, and intramuscular fat interspersed in skeletal muscles. Fat in the lower body, as in thighs and buttocks, is subcutaneous and is not consistently spaced tissue, whereas fat in the abdomen is mostly visceral and semi-fluid. Visceral fat is composed of several adipose depots, including mesenteric, epididymal white adipose tissue (EWAT), and perirenal depots. Visceral fat is often expressed in terms of its area in cm2 (VFA, visceral fat area).An excess of visceral fat is known as abdominal obesity, or "belly fat", in which the abdomen protrudes excessively. New developments such as the Body Volume Index (BVI) are specifically designed to measure abdominal volume and abdominal fat. Excess visceral fat is also linked to type 2 diabetes, insulin resistance, inflammatory diseases, and other obesity-related diseases. Likewise, the accumulation of neck fat (or cervical adipose tissue) has been shown to be associated with mortality. Several studies have suggested that visceral fat can be predicted from simple anthropometric measures, and predicts mortality more accurately than body mass index or waist circumference.Men are more likely to have fat stored in the abdomen due to sex hormone differences. Estrogen (female sex hormone) causes fat to be stored in the buttocks, thighs, and hips in women. When women reach menopause and the estrogen produced by the ovaries declines, fat migrates from the buttocks, hips and thighs to the waist; later fat is stored in the abdomen.Visceral fat can be caused by excess cortisol levels. At least 10 MET-hours per week of aerobic exercise leads to visceral fat reduction in those without metabolic-related disorders. Resistance training and caloric restriction also reduce visceral fat, although their effect may not be cumulative. 
Both exercise and hypocaloric diet cause loss of visceral fat, but exercise has a larger effect on visceral fat versus total fat. High-intensity exercise is one way to effectively reduce total abdominal fat. An energy restricted diet combined with exercise will reduce total body fat and the ratio of visceral adipose tissue to subcutaneous adipose tissue, suggesting a preferential mobilization for visceral fat over subcutaneous fat. Anatomical features: Epicardial fat Epicardial adipose tissue (EAT) is a particular form of visceral fat deposited around the heart and found to be a metabolically active organ that generates various bioactive molecules, which might significantly affect cardiac function. Marked component differences have been observed in comparing EAT with subcutaneous fat, suggesting a location-specific impact of stored fatty acids on adipocyte function and metabolism. Anatomical features: Subcutaneous fat Most of the remaining nonvisceral fat is found just below the skin in a region called the hypodermis. This subcutaneous fat is not related to many of the classic obesity-related pathologies, such as heart disease, cancer, and stroke, and some evidence even suggests it might be protective. The typically female (or gynecoid) pattern of body fat distribution around the hips, thighs, and buttocks is subcutaneous fat, and therefore poses less of a health risk compared to visceral fat.Like all other fat organs, subcutaneous fat is an active part of the endocrine system, secreting the hormones leptin and resistin.The relationship between the subcutaneous adipose layer and total body fat in a person is often modelled by using regression equations. The most popular of these equations was formed by Durnin and Wormersley, who rigorously tested many types of skinfold, and, as a result, created two formulae to calculate the body density of both men and women. These equations present an inverse correlation between skinfolds and body density—as the sum of skinfolds increases, the body density decreases.Factors such as sex, age, population size or other variables may make the equations invalid and unusable, and, as of 2012, Durnin and Wormersley's equations remain only estimates of a person's true level of fatness. New formulae are still being created. Anatomical features: Marrow fat Marrow fat, also known as marrow adipose tissue (MAT), is a poorly understood adipose depot that resides in the bone and is interspersed with hematopoietic cells as well as bony elements. The adipocytes in this depot are derived from mesenchymal stem cells (MSC) which can give rise to fat cells, bone cells as well as other cell types. The fact that MAT increases in the setting of calorie restriction/ anorexia is a feature that distinguishes this depot from other fat depots. Exercise regulates MAT, decreasing MAT quantity and diminishing the size of marrow adipocytes. The exercise regulation of marrow fat suggests that it bears some physiologic similarity to other white adipose depots. Moreover, increased MAT in obesity further suggests a similarity to white fat depots. Anatomical features: Ectopic fat Ectopic fat is the storage of triglycerides in tissues other than adipose tissue, that are supposed to contain only small amounts of fat, such as the liver, skeletal muscle, heart, and pancreas. This can interfere with cellular functions and hence organ function and is associated with insulin resistance in type-2 diabetes. 
It is stored in relatively high amounts around the organs of the abdominal cavity, but is not to be confused with visceral fat. Anatomical features: The specific cause for the accumulation of ectopic fat is unknown. The cause is likely a combination of genetic, environmental, and behavioral factors that are involved in excess energy intake and decreased physical activity. Substantial weight loss can reduce ectopic fat stores in all organs and this is associated with an improvement of the function of those organs. In the latter case, non-invasive weight loss interventions like diet or exercise can decrease ectopic fat (particularly in heart and liver) in overweight or obese children and adults. Physiology: Free fatty acids (FFAs) are liberated from lipoproteins by lipoprotein lipase (LPL) and enter the adipocyte, where they are reassembled into triglycerides by esterifying them onto glycerol. Human fat tissue contains about 87% lipids. There is a constant flux of FFAs entering and leaving adipose tissue. The net direction of this flux is controlled by insulin and leptin—if insulin is elevated, then there is a net inward flux of FFA, and only when insulin is low can FFA leave adipose tissue. Insulin secretion is stimulated by high blood sugar, which results from consuming carbohydrates. In humans, lipolysis (hydrolysis of triglycerides into free fatty acids) is controlled through the balanced control of lipolytic β-adrenergic receptors and α2A-adrenergic receptor-mediated antilipolysis. Physiology: Fat cells have an important physiological role in maintaining triglyceride and free fatty acid levels, as well as determining insulin resistance. Abdominal fat has a different metabolic profile—being more prone to induce insulin resistance. This explains to a large degree why central obesity is a marker of impaired glucose tolerance and is an independent risk factor for cardiovascular disease (even in the absence of diabetes mellitus and hypertension). Studies of female monkeys at Wake Forest University (2009) discovered that individuals with higher stress have higher levels of visceral fat in their bodies. This suggests a possible cause-and-effect link between the two, wherein stress promotes the accumulation of visceral fat, which in turn causes hormonal and metabolic changes that contribute to heart disease and other health problems. Recent advances in biotechnology have allowed for the harvesting of adult stem cells from adipose tissue, allowing stimulation of tissue regrowth using a patient's own cells. In addition, adipose-derived stem cells from both humans and animals reportedly can be efficiently reprogrammed into induced pluripotent stem cells without the need for feeder cells. The use of a patient's own cells reduces the chance of tissue rejection and avoids ethical issues associated with the use of human embryonic stem cells. A growing body of evidence also suggests that different fat depots (i.e. abdominal, omental, pericardial) yield adipose-derived stem cells with different characteristics. These depot-dependent features include proliferation rate, immunophenotype, differentiation potential, gene expression, as well as sensitivity to hypoxic culture conditions. 
Oxygen levels seem to play an important role on the metabolism and in general the function of adipose-derived stem cells.Adipose tissue is a major peripheral source of aromatase in both males and females, contributing to the production of estradiol.Adipose derived hormones include: Adiponectin Resistin Plasminogen activator inhibitor-1 (PAI-1) TNFα IL-6 Leptin Estradiol (E2)Adipose tissues also secrete a type of cytokines (cell-to-cell signalling proteins) called adipokines (adipose cytokines), which play a role in obesity-associated complications. Perivascular adipose tissue releases adipokines such as adiponectin that affect the contractile function of the vessels that they surround. Physiology: Brown fat Brown fat or brown adipose tissue (BAT) is a specialized form of adipose tissue important for adaptive thermogenesis in humans and other mammals. BAT can generate heat by "uncoupling" the respiratory chain of oxidative phosphorylation within mitochondria through tissue-specific expression of uncoupling protein 1 (UCP1). BAT is primarily located around the neck and large blood vessels of the thorax, where it may effectively act in heat exchange. BAT is robustly activated upon cold exposure by the release of catecholamines from sympathetic nerves that results in UCP1 activation. Nearly half of the nerves present in adipose tissue are sensory neurons connected to the dorsal root ganglia.BAT activation may also occur in response to overfeeding. UCP1 activity is stimulated by long chain fatty acids that are produced subsequent to β-adrenergic receptor activation. UCP1 is proposed to function as a fatty acid proton symporter, although the exact mechanism has yet to be elucidated. In contrast, UCP1 is inhibited by ATP, ADP, and GTP.Attempts to simulate this process pharmacologically have so far been unsuccessful. Techniques to manipulate the differentiation of "brown fat" could become a mechanism for weight loss therapy in the future, encouraging the growth of tissue with this specialized metabolism without inducing it in other organs. A review on the eventual therapeutic targeting of brown fat to treat human obesity was published by Samuelson and Vidal-Puig in 2020.Until recently, brown adipose tissue in humans was thought to be primarily limited to infants, but new evidence has overturned that belief. Metabolically active tissue with temperature responses similar to brown adipose was first reported in the neck and trunk of some human adults in 2007, and the presence of brown adipose in human adults was later verified histologically in the same anatomical regions. Physiology: Beige fat and WAT browning Browning of WAT, also referred to as "beiging", occurs when adipocytes within WAT depots develop features of BAT. Beige adipocytes take on a multilocular appearance (containing several lipid droplets) and increase expression of uncoupling protein 1 (UCP1). In doing so, these normally energy-storing adipocytes become energy-releasing adipocytes. Physiology: The calorie-burning capacity of brown and beige fat has been extensively studied as research efforts focus on therapies targeted to treat obesity and diabetes. The drug 2,4-dinitrophenol, which also acts as a chemical uncoupler similarly to UCP1, was used for weight loss in the 1930s. However, it was quickly discontinued when excessive dosing led to adverse side effects including hyperthermia and death. β3 agonists, like CL316,243, have also been developed and tested in humans. 
However, the use of such drugs has proven largely unsuccessful due to several challenges, including varying species receptor specificity and poor oral bioavailability.Cold is a primary regulator of BAT processes and induces WAT browning. Browning in response to chronic cold exposure has been well documented and is a reversible process. A study in mice demonstrated that cold-induced browning can be completely reversed in 21 days, with measurable decreases in UCP1 seen within a 24-hour period. A study by Rosenwald et al. revealed that when the animals are re-exposed to a cold environment, the same adipocytes will adopt a beige phenotype, suggesting that beige adipocytes are retained.Transcriptional regulators, as well as a growing number of other factors, regulate the induction of beige fat. Four regulators of transcription are central to WAT browning and serve as targets for many of the molecules known to influence this process. These include peroxisome proliferator-activated receptor gamma (PPARγ), PRDM16, peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α), and Early B-Cell Factor-2 (EBF2).The list of molecules that influence browning has grown in direct proportion to the popularity of this topic and is constantly evolving as more knowledge is acquired. Among these molecules are irisin and fibroblast growth factor 21 (FGF21), which have been well-studied and are believed to be important regulators of browning. Irisin is secreted from muscle in response to exercise and has been shown to increase browning by acting on beige preadipocytes. FGF21, a hormone secreted mainly by the liver, has garnered a great deal of interest after being identified as a potent stimulator of glucose uptake and a browning regulator through its effects on PGC-1α. It is increased in BAT during cold exposure and is thought to aid in resistance to diet-induced obesity FGF21 may also be secreted in response to exercise and a low protein diet, although the latter has not been thoroughly investigated. Data from these studies suggest that environmental factors like diet and exercise may be important mediators of browning. In mice, it was found that beiging can occur through the production of methionine-enkephalin peptides by type 2 innate lymphoid cells in response to interleukin 33. Physiology: Genomics and bioinformatics tools to study browning Due to the complex nature of adipose tissue and a growing list of browning regulatory molecules, great potential exists for the use of bioinformatics tools to improve study within this field. Studies of WAT browning have greatly benefited from advances in these techniques, as beige fat is rapidly gaining popularity as a therapeutic target for the treatment of obesity and diabetes. Physiology: DNA microarray is a bioinformatics tool used to quantify expression levels of various genes simultaneously, and has been used extensively in the study of adipose tissue. One such study used microarray analysis in conjunction with Ingenuity IPA software to look at changes in WAT and BAT gene expression when mice were exposed to temperatures of 28 and 6 °C. The most significantly up- and downregulated genes were then identified and used for analysis of differentially expressed pathways. It was discovered that many of the pathways upregulated in WAT after cold exposure are also highly expressed in BAT, such as oxidative phosphorylation, fatty acid metabolism, and pyruvate metabolism. This suggests that some of the adipocytes switched to a beige phenotype at 6 °C. 
Mössenböck et al. also used microarray analysis to demonstrate that insulin deficiency inhibits the differentiation of beige adipocytes but does not disturb their capacity for browning. These two studies demonstrate the potential for the use of microarray in the study of WAT browning. Physiology: RNA sequencing (RNA-Seq) is a powerful computational tool that allows for the quantification of RNA expression for all genes within a sample. Incorporating RNA-Seq into browning studies is of great value, as it offers better specificity, sensitivity, and a more comprehensive overview of gene expression than other methods. RNA-Seq has been used in both human and mouse studies in an attempt characterize beige adipocytes according to their gene expression profiles and to identify potential therapeutic molecules that may induce the beige phenotype. One such study used RNA-Seq to compare gene expression profiles of WAT from wild-type (WT) mice and those overexpressing Early B-Cell Factor-2 (EBF2). WAT from the transgenic animals exhibited a brown fat gene program and had decreased WAT specific gene expression compared to the WT mice. Thus, EBF2 has been identified as a potential therapeutic molecule to induce beiging. Physiology: Chromatin immunoprecipitation with sequencing (ChIP-seq) is a method used to identify protein binding sites on DNA and assess histone modifications. This tool has enabled examination of epigenetic regulation of browning and helps elucidate the mechanisms by which protein-DNA interactions stimulate the differentiation of beige adipocytes. Studies observing the chromatin landscapes of beige adipocytes have found that adipogenesis of these cells results from the formation of cell specific chromatin landscapes, which regulate the transcriptional program and, ultimately, control differentiation. Using ChIP-seq in conjunction with other tools, recent studies have identified over 30 transcriptional and epigenetic factors that influence beige adipocyte development. Physiology: Genetics The thrifty gene hypothesis (also called the famine hypothesis) states that in some populations the body would be more efficient at retaining fat in times of plenty, thereby endowing greater resistance to starvation in times of food scarcity. This hypothesis, originally advanced in the context of glucose metabolism and insulin resistance, has been discredited by physical anthropologists, physiologists, and the original proponent of the idea himself with respect to that context, although according to its developer it remains "as viable as when [it was] first advanced" in other contexts.In 1995, Jeffrey Friedman, in his residency at the Rockefeller University, together with Rudolph Leibel, Douglas Coleman et al. discovered the protein leptin that the genetically obese mouse lacked. Leptin is produced in the white adipose tissue and signals to the hypothalamus. When leptin levels drop, the body interprets this as a loss of energy, and hunger increases. Mice lacking this protein eat until they are four times their normal size. Physiology: Leptin, however, plays a different role in diet-induced obesity in rodents and humans. Because adipocytes produce leptin, leptin levels are elevated in the obese. However, hunger remains, and—when leptin levels drop due to weight loss—hunger increases. The drop of leptin is better viewed as a starvation signal than the rise of leptin as a satiety signal. However, elevated leptin in obesity is known as leptin resistance. 
The changes that occur in the hypothalamus to result in leptin resistance in obesity are currently the focus of obesity research.Gene defects in the leptin gene (ob) are rare in human obesity. As of July 2010, only 14 individuals from five families have been identified worldwide who carry a mutated ob gene (one of which was the first ever identified cause of genetic obesity in humans)—two families of Pakistani origin living in the UK, one family living in Turkey, one in Egypt, and one in Austria—and two other families have been found that carry a mutated ob receptor. Others have been identified as genetically partially deficient in leptin, and, in these individuals, leptin levels on the low end of the normal range can predict obesity.Several mutations of genes involving the melanocortins (used in brain signaling associated with appetite) and their receptors have also been identified as causing obesity in a larger portion of the population than leptin mutations. Physiology: Physical properties Adipose tissue has a density of ~0.9 g/ml. Thus, a person with more adipose tissue will float more easily than a person of the same weight with more muscular tissue, since muscular tissue has a density of 1.06 g/ml. Body fat meter: A body fat meter is a tool used to measure the body fat to weight ratio in the human body. Different meters use various methods to determine the ratio. They tend to under-read body fat percentage. Body fat meter: In contrast with clinical tools, one relatively inexpensive type of body fat meter uses the principle of bioelectrical impedance analysis (BIA) in order to determine an individual's body fat percentage. To achieve this, the meter passes a small, harmless, electric current through the body and measures the resistance, then uses information on the person's weight, height, age, and sex to calculate an approximate value for the person's body fat percentage. The calculation measures the total volume of water in the body (lean tissue and muscle contain a higher percentage of water than fat), and estimates the percentage of fat based on this information. The result can fluctuate several percentage points depending on what has been eaten and how much water has been drunk before the analysis. Body fat meter: Before bioelectrical impedance analysis machines were developed, there were many different ways in analyzing body composition such as skin fold methods using calipers, underwater weighing, whole body air displacement plethysmography (ADP) and DXA. Animal studies: Within the fat (adipose) tissue of CCR2 deficient mice, there is an increased number of eosinophils, greater alternative Macrophage activation, and a propensity towards type 2 cytokine expression. Furthermore, this effect was exaggerated when the mice became obese from a high fat diet.
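The physical-properties and body-fat-meter passages above give two useful numbers (adipose tissue ≈ 0.9 g/ml, muscular tissue ≈ 1.06 g/ml) and describe meters that estimate percentage body fat. The sketch below is only an illustration of that arithmetic: a simple two-compartment mixture of the two densities quoted above, plus the Siri equation, a commonly used published conversion from measured body density to percent fat that is not taken from this article. The function names and the Siri constants are assumptions for the example, not a description of how any particular body fat meter works.

```python
# Illustrative sketch only: a two-tissue density mixture using the ~0.9 and
# ~1.06 g/ml figures quoted above, and the Siri equation (a standard published
# conversion, not from this article). Siri assumes its own reference densities
# for fat and fat-free mass, so its constants differ from the simple model below.

FAT_DENSITY = 0.9    # g/ml, adipose tissue (from the text)
LEAN_DENSITY = 1.06  # g/ml, muscular tissue (from the text)

def whole_body_density(fat_fraction: float) -> float:
    """Mass-weighted two-compartment estimate of whole-body density (g/ml)."""
    # Specific volumes (ml/g) add in proportion to mass fractions.
    specific_volume = fat_fraction / FAT_DENSITY + (1.0 - fat_fraction) / LEAN_DENSITY
    return 1.0 / specific_volume

def siri_percent_fat(body_density: float) -> float:
    """Siri equation: percent body fat from measured body density (g/ml)."""
    return 495.0 / body_density - 450.0

if __name__ == "__main__":
    # More fat lowers overall density, which is why it aids floating in water (~1.0 g/ml).
    for fat in (0.10, 0.20, 0.30):
        print(f"fat fraction {fat:.0%}: tissue-mixture density ≈ {whole_body_density(fat):.3f} g/ml")
    # Separately, converting a measured body density with the Siri equation:
    print(f"measured density 1.050 g/ml -> ≈ {siri_percent_fat(1.050):.1f}% body fat")
```

This is only meant to show why fat fraction and whole-body density move in opposite directions; clinical estimates rely on the calibrated methods described above (BIA, skinfolds, underwater weighing, ADP, DXA).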
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Task Force on Process Mining** Task Force on Process Mining: The IEEE Task Force on Process Mining (TFPM) is a non-commercial association for process mining. The IEEE (Institute of Electrical and Electronics Engineers) Task Force on Process Mining was established in October 2009 as part of the IEEE Computational Intelligence Society at the Eindhoven University of Technology. The task force is supported by over 80 organizations and has around 750 members. The main goal of the task force is to promote the research, development, education, and understanding of process mining. Activities and organization: The Task Force on Process Mining has a Steering Committee and an Advisory Board. The Steering Committee, chaired by Wil van der Aalst since its inception in 2009, defined 15 action lines. These include the organization of the annual International Conference on Process Mining (ICPM) series, standardization efforts leading to the IEEE XES standard for storing and exchanging event data, and the Process Mining Manifesto, which was translated into 16 languages. The Task Force on Process Mining also publishes a newsletter, provides data sets, organizes workshops and competitions, and connects researchers and practitioners. Activities and organization: In 2016, the IEEE Standards Association published the IEEE Standard for Extensible Event Stream (XES), which is a file format widely accepted by the process mining community. Supporting organizations: The Task Force on Process Mining is supported by most of the process mining vendors (e.g., Celonis, Fluxicon, UiPath, QPR, ABBYY, LANA, Logpicker, Minit, Myinvenio, PAFnow, Signavio and Software AG), consultancy firms (KPMG, Deloitte, etc.), universities (e.g., RWTH, TU/e, QUT, UniBZ, and DTU), research institutes (e.g., Fraunhofer FIT), and organizations using process mining at a large scale (e.g., ABB, Bosch, and Siemens).
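The XES standard mentioned above stores event logs as cases (traces) of recorded activities, typically with timestamps and other attributes. As a rough illustration of how such event data is used, the sketch below builds a toy event log in plain Python and counts the "directly-follows" relation between activities, one of the basic inputs to process-discovery algorithms. The case IDs and activity names are invented for the example; this is not code from the task force and does not reproduce the XES file syntax itself.

```python
# Rough illustration only: a toy event log (the kind of case/activity data an
# XES file stores) and a count of the "directly-follows" relation, a basic
# input to process-discovery algorithms. Case IDs and activity names are
# invented; real XES logs also carry timestamps, resources, and other attributes.
from collections import Counter, defaultdict

events = [
    ("case-1", "register"), ("case-1", "check"), ("case-1", "approve"),
    ("case-2", "register"), ("case-2", "check"), ("case-2", "reject"),
    ("case-3", "register"), ("case-3", "approve"),
]

# Group events into traces (ordered activity lists per case).
traces = defaultdict(list)
for case_id, activity in events:
    traces[case_id].append(activity)

# Count how often activity a is directly followed by activity b.
directly_follows = Counter()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        directly_follows[(a, b)] += 1

for (a, b), n in sorted(directly_follows.items()):
    print(f"{a} -> {b}: {n}")
```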
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Euchre variants** Euchre variants: The card game of Euchre has many variants, including those for two, three, five or more players. The following is a selection of notable Euchre variants. Two-player variants: Two-player dummy A normal hand is dealt out to each player along with a 3-card dummy hand to each player. Each person picks up their dummy hand after trump has been called. Each player must make their best five card hand out of the eight cards available. Going alone is still an option and occurs when the calling player opts not to pick up the dummy hand. 12-card (or 11-card) In this version, there are no partners. Each player will end up with four hidden cards, keeping strategy very similar to the partnered-version. Two-player variants: A normal deck of 9-10-J-Q-K-A in all four suits is used. The dealer places a card face down in front of the other player, and then in front of the dealer, alternating until each player has a row of four face-down cards. The dealer then places a face-up card on top of each face-down card, so now each player has 8 cards. The dealer then deals four more cards to each player, which they pick up and hold in their hand. Two-player variants: The non-dealer looks at their 4 hand cards, 4 show cards, and the opponents 4 show cards, and bids the number of tricks they think they can take, with a minimum bid of 7. The dealer can bid higher or pass. The highest bidder sets the trump suit, and the non-dealer goes first. Players can play any card from their hand, or any of their face-up cards. If a face-up card is played that had covered a face-down card, the face-down card is flipped over and becomes eligible for play on the next trick. Two-player variants: It is strategically important to remember to keep cards in the hand, as otherwise it is very easy for the opponent to lead off-suit and win. It may thus be better in cases to sacrifice a higher-value face-up card than to give up hand cards. Similarly, if out of trump cards, it may be worthwhile to sacrifice a high-value face-up card in hopes of revealing a trump card underneath. Two-player variants: Points are only awarded or lost for the number of tricks bid: 1 point for 7 tricks, 2 points for 8 tricks, etc., up to 6 points for all 12 tricks. The player who bid gains the points if they succeed, and loses the points if they fail. The first player to get 10 points wins the game. In some variants, each player may be dealt a 3-card private hand with 4 sets of face up/face down cards, or a 5-card hand with 3 sets of face up/face down cards. Three-player variants: Missing Man Missing Man Euchre (a.k.a. George's Hand Euchre) is a three-handed Euchre tournament game of Western Wisconsin. It is also played on the gulf coast of Florida. It plays similarly to traditional four-handed Euchre. Three-player variants: Four 5-card Euchre hands are dealt with the fourth hand being a dummy (sometimes called George's Hand), and the top card of the remaining cards is upturned. The trump suit is called in the normal fashion, in two rounds of bidding. However, either of the players who does not call trump may exchange their hand for the dummy. If no trump is called in the first two rounds of bidding, the dealer must either call a suit (other than that of the top card) or pick up the dummy hand and call any suit (including that of the top card). Three-player variants: Play proceeds as normal. The calling player scores 1 point for 3 or 4 tricks, 2 points for all 5 tricks, and 4 points for a called loner. 
If the calling player fails to win the bid, the other players score 1 point each. Since the caller has no partner, leading trump is a good strategy, as when going alone in regular Euchre. Three-player variants: Three-hand dummy Another common three-player variation is played by dealing out four hands, but with the fourth hand acting as a dummy hand (a.k.a. the dead hand, imaginary friend, George, Johann, etc.). After calling trump, the calling player picks up the dummy hand and makes the best five-card hand for themselves out of all ten cards. Alternatively, the caller may elect to "go alone" by not picking up the dummy hand. The caller then plays alone against the other two players, who play as partners for that hand. The calling player scores 1 point for winning the hand, 2 points for all five tricks, or 4 points for taking all five tricks while going alone. Three-player variants: Variations may limit the size or utility of the dummy hand, because making the best hand from ten cards may be viewed as too advantageous. Examples include a three-card dummy or the calling player randomly choosing three cards from the dummy, then making the best hand out of eight cards. Three-player variants: Sneaky Steve is a variant in which the dummy hand is called "Steve". After the trump is called and the 5-card dummy hand is used or discarded, the player with the 9 of diamonds (called "Sneaky Steve") may exchange it for a random card of the 3 bottom cards of the kitty (i.e.: not including the top card or a card exchanged for it). Play then proceeds normally. Three-player variants: Cut-throat Euchre In Cut-throat Euchre, the dealing and bidding process is as normal. The caller scores 1 point for 3 or 4 tricks, 3 points for 5 tricks, and the defenders each get 2 points for 3 or more tricks. Strategy is similar to going alone in standard Euchre. Defenders should pay attention and keep high-value non-trump cards of the suit that their partner is not playing; this will increase the chances of taking the last trick, which is almost always a non-trump trick. Variants: No 9s – Played with the nines removed from the deck, which inflates the value of all hands, requiring more care in bidding. Three-player variants: Count down – With the 9s removed from the deck, players start at 300 or 500 points and play down. Points are inflated: the caller loses 10 points for 3 or 4 tricks, 50 points for all 5 tricks, and the defenders each lose 20 points for 3 or more tricks. The first player to lose all points wins the game. Three-player variants: Canadian In western New York, a three-player variation called "Canadian" is played (it is called Gyoza in Chicago and Buck Euchre in areas of the US Midwest). Four hands are dealt normally. The top kitty card is upturned and automatically becomes trump. Starting from the dealer's left, the players then have the option of exchanging their hand for the dummy hand, with their discarded hand then becoming the dummy hand available for exchange by the next player. After the dealer chooses to keep or exchange hands, the dealer then picks up the trump card and play begins. Three-player variants: Shooter In Southern Ontario, a three-person version exists called "Shooter". Three eight-card hands are dealt. Players then bid to call trump, with a minimum bid of 3 tricks. The winner of the contract calls the trump suit or may call "no trump", where aces are high and all jacks are treated as off-trump coloured jacks (i.e. beat a ten but lose to a queen). Players score 1 point for each trick won. 
If the caller fails to achieve their contract, they lose that number of points. A player who bid all 8 tricks, called "shooter", receives an extra 4 points if successful. The game is played to 31 points. Three-player variants: Ghost player Four 5-card hands are dealt, with the extra hand going to the "ghost player". In clockwise order, players may opt to switch their hand with the ghost player's hand. The top card of the kitty is then upturned and bidding and play proceed as normal. If the caller played their dealt hand, they get 2 points for 3 or 4 tricks and 4 points for 5 tricks; but if the caller switched hands, they receive only 1 point for winning the majority of tricks. The defenders each receive 1 point each for 3 or 4 tricks, and 2 points each for all 5 tricks. Three-player variants: Threechre In Threechre (sometimes pronounced "tree-ker" or "three-kree"), only three suits are used and a joker serves as the left bower, regardless of drawn suit. The dealer may opt to go alone in the first round of bidding, in which case the top card is discarded and the remaining 4 cards of the kitty are given to the dealer's opponents, who pick their best 5-card hand from the 7 cards. In scoring, 3 tricks is a clear win for 2 points, while a 2–2 tie awards each tied player 1 point. There is a −1 penalty for the caller, so that calling and losing results in −1 points, calling and tying is 0 points, and calling and winning is 1 point. Three-player variants: Call-partner Call-partner is a variation for 3 to 10 players (using a deck adjusted so that the kitty will have 5 cards or less after dealing a 5-card hand to each player). The top card of the kitty is upturned and bidding proceeds as normal. The player who calls trump may call for a partner by naming a desired card. For example, if the caller names the left bower, the player with that card becomes their partner – but this is not revealed until the left bower is played. This creates an element of uncertainty as to who the partner is or whether the named card might be in the kitty. The caller may also opt to go alone. Scoring as normal. Three-player variants: Euchress Euchress discards the 9s from the deck. Dealing and bidding are as normal, but the caller can choose a partner or go alone. Scoring is the same, with play typically going to 15 points. Four-player variations: Benny variants A common variant played in southwestern England pub leagues uses the standard Euchre deck with an extra card, usually a Joker or 2 of spades, called the "Benny" (or the "Bird" in Australia). This card is the highest trump no matter what suit is called. When the Benny is turned over by the dealer, the dealer must choose a suit to call as trumps before looking at his or her hand. Bidding and play then proceeds normally. Four-player variations: The Duchy of Cornwall lays claim to the origin of the Benny in Euchre, its usage being exported by emigrant Cornish miners in the eighteenth and nineteenth centuries. Four-player variations: Railroad Euchre There is an extension of this style wherein the 9s are removed from the deck and up to four "Bennys" are added (usually one or both jokers and/or one or both deuces). In its simplest form, with a single Benny, it is the same as the English variant (above). The Bennys are ranked trump ahead of the right bower, regardless of the suit of trump, with deuce(s) outranking jokers. If two Jokers are added, some method is achieved for establishing a "high" and a "low" Joker. 
Four-player variations: Although somewhat complicated, the addition of up to four higher-ranking trump cards makes a significant strategic impact on gameplay. The two and three Benny versions are the most common. Four-player variations: 33-card deck In Guernsey (Channel Islands) the game is played with a 33-card deck incorporating 7 to ace plus a joker as the Benny. In addition, where the Benny is turned up, the dealer not only has to name the suit, he must then pick it up and play (although he may still choose whether to play alone or with his partner). Unofficial rules require the wearing of a "dealing hat" (usually a fez) when dealing – alternatively a 'dealing duck' may be placed in front of the dealer – and referring to the Ace of Spades as the Death Card, regardless of trump. Tradition dictates that the Death Card should not be led on the first trick unless defending against a lone attacker, as it will otherwise invariably be trumped. A cleverer lead is known as the "Brisey", which involves leading the left bower in an attempt to trick one of the opponents into a renege (a failure to correctly follow suit). If any particular player consistently reneges throughout an evening's play, he or she is referred to as a 'habin'. The Brisey lead itself is named after Brian Mauger, a famous Guernsey Euchre player. If a defender has won two tricks and still has possession of the Benny, then he must slap it onto his forehead as a sign of the guaranteed euchre. In an attempt to improve a poor hand, a player may call a 'kezza' with what would appear to be little chance of success, in the hope that his partner may assist in winning the majority of the available tricks. Four-player variations: Haus or Hoss Haus or Hoss is a variant popular in parts of upstate New York, specifically the Pennsylvania Dutch area. Two standard decks are used with the numeral cards removed to leave only the Aces and court cards. Players form two teams of two, sitting opposite one another. Deal and play are clockwise, 8 cards being dealt to each player. Beginning with eldest hand, there is one round of bidding in which players bid the number of tricks they believe their team can make, the minimum bid being four. Players must outbid any earlier bid or pass. The highest bidder wins the auction and leads to the first trick, usually with the top trump (the "right bar" or "right bauer"). The suit of the led card is the "called suit" and indicates trumps. The highest cards are the trump Jacks, followed by the Jacks of the same colour, then A, K and Q. In the side suits the order is A, K, Q, J. Players must follow suit if able; otherwise they may play any card. A player who wants to play alone with the aim of taking all 8 tricks bids "Haus"; this wins the auction and the player may swap two cards with his or her partner before play begins. Five players: One five-player variant expands the standard deck, adding the 8s and a pair of 2s (or alternatively the jokers). The 2s or jokers are the highest trump cards, whichever suit is called. Five 5-card hands are dealt and the top card of the kitty is upturned. If no other player orders a trump suit, the dealer must do so. In the special case that the top card is a 2, the dealer cannot look at their cards for the first round of bidding, in which any suit can be called. Once trump is called, the caller may select a partner by naming a card (other than a 2) or may alternatively go alone. A player holding the named card becomes the caller's partner, though this is not revealed until the named card is played. 
Scoring is normal. There are no added benefits if the caller wins all 5 tricks when not going alone but the named card is in the kitty (i.e.: going alone must be intentional). The game ends when a player reaches ten or more points while holding at least a one-point advantage over all other players, with various rules for breaking ties. Six-player variants: Uneven Teams The above rules for five-handed euchre can also be used for 6 hands by adding the 7s. The team that makes trump will usually play 2 against 4. Six-player variants: Triple Wild Deck Partnerships are two teams of three players. The deck consists of 8s through aces, with the addition of the 4, 3, and 2 of spades which are (in order) the highest trumps. After dealing, there is a single card left over which serves as the top card for bidding. The game is played to 15 points, scoring 3 for euchres and sweeps, and 5 for lone calls. Six-player variants: 32-card deck Partnerships are three teams of two players seated across from each other. The deck is 32 cards, 7 through ace of each suit. The kitty consists of only two cards. If defenders euchre the callers, both defending teams score 2 points. The game is won when the first team scores a specific number of points (usually 10), sometimes also requiring a certain lead over either opposing partnership to avoid ties. Six-player variants: 34-card deck Partnerships are two teams of three players. The deck consists of 7s through aces plus two jokers, which are the highest and second-highest trumps. Six-player variants: Six-suit deck Partnerships are three teams of two players, seated opposite. Uses a deck constructed of three red suits and three black suits, such that there is one right bower and two left bowers (the first left bower played outranks the second). Bidding, play, and scoring is as normal, with additional rules for ties: If the calling team and another team each win two tricks, each of these partnerships score 1 point. If the two non-calling teams each win two or three tricks, each of these partnerships score 2 points. The first team to 10 points wins. Six-player variants: Double deck Partnerships are two teams of three players. Uses two standard euchre decks (48 cards); if the same card is played twice in a trick, the card which is played first is highest. Each player is dealt 8 cards. Players bid how many tricks they can get (minimum 3), with the winner calling trump. The caller may go alone normally as a "Big Shooter" or optionally as a "Little Shooter" by receiving the best card from each teammate. Six-player variants: Teams score 1 point for each trick won. If a team fails to make its bid, they don't receive any points and additionally lose the same number of points as their bid. Winning all 8 tricks scores an additional 4 points on a Little Shooter and 16 points on a Big Shooter. The game is won when a team has 32 points or more at the end of a turn when they called trump, or 34 points otherwise.
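The trick-by-trick bookkeeping of the double-deck variant just described can be made concrete with a short sketch. The Python function below is purely illustrative and assumes the rules exactly as stated above (1 point per trick, loss of the bid on a failed contract, a 4-point sweep bonus for a Little Shooter and 16 points for a Big Shooter); the function name and signature are invented for this example.

```python
def score_hand(tricks_won, bid=0, shooter=None):
    """Score one team's hand in the six-player double-deck variant (illustrative only).

    tricks_won -- tricks taken by the team this hand (0-8)
    bid        -- tricks the team bid (minimum 3 for the calling team, 0 for defenders)
    shooter    -- None, "little" or "big" if the caller went alone
    """
    if bid and tricks_won < bid:
        # Failed contract: no points for the tricks, and the bid is lost as a penalty.
        return -bid
    points = tricks_won  # 1 point for each trick won
    if tricks_won == 8:  # sweep bonus for lone calls
        points += {"little": 4, "big": 16}.get(shooter, 0)
    return points

# A team that bids 5 and wins 6 tricks scores 6; bidding 5 and winning only 4 scores -5.
print(score_hand(6, bid=5), score_hand(4, bid=5))
```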
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heuristic (engineering)** Heuristic (engineering): In engineering, heuristics are experience-based methods used to reduce the need for calculations pertaining to equipment size, performance, or operating conditions. Heuristics are fallible and do not guarantee a correct solution. It is important to understand their limitations when applying them to different equipment and processes. Though limited, heuristics are valuable because they offer time-saving approximations in preliminary process design. Heuristic (engineering): Problem-solving methods are intrinsic to forensic engineering, where failures are analysed for their root cause or causes. Only when failures have been investigated with conclusive results can remedial action be taken with confidence. Examples: Storage Vessels These heuristics were taken from Turton's "Analysis, Synthesis, and Design of Chemical Processes". Use vertical tanks on legs when the tank is less than 3.8 m3. Use horizontal tanks on concrete supports when the tank is between 3.8 and 38 m3. Use vertical tanks on concrete pads when the tank is beyond 38 m3. Liquids subject to breathing losses may be stored in tanks with floating or expansion roofs for conservation. Freeboard is 15% below 1.9 m3 and 10% above 1.9 m3. Thirty-day capacity is often specified for raw materials and products, but depends on connecting transportation equipment schedules. Pumps These heuristics were taken from Turton's "Analysis, Synthesis, and Design of Chemical Processes". Centrifugal pumps: Single stage: for 0.057-18.9 m3/min, 152 m maximum head. Multistage: for 0.076-41.6 m3/min, 1675 m maximum head. Efficiency is 45% at 0.378 m3/min, 70% at 1.89 m3/min, 80% at 37.8 m3/min. Axial pumps: for 0.076–378 m3/min, 12 m head, 65-85% efficiency. Rotary pumps: for 0.00378-18.9 m3/min, 15,200 m head, 50-80% efficiency. Reciprocating pumps: for 0.0378-37.8 m3/min, 300 km head maximum. Efficiency is 70% at 7.46 kW, 85% at 37.3 kW, and 90% at 373 kW.
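As a quick illustration of how such rules of thumb are used in preliminary sizing, the sketch below encodes the storage-vessel heuristics quoted above (tank support by volume and the freeboard allowance). It is only a rough helper for the stated thresholds; the function names and the example volume are invented for this illustration.

```python
def tank_support(volume_m3):
    """Suggest tank orientation and support from the quoted storage-vessel heuristics."""
    if volume_m3 < 3.8:
        return "vertical tank on legs"
    elif volume_m3 <= 38:
        return "horizontal tank on concrete supports"
    return "vertical tank on a concrete pad"

def freeboard_fraction(volume_m3):
    """Freeboard allowance: 15% below 1.9 m3, 10% above."""
    return 0.15 if volume_m3 < 1.9 else 0.10

# Example: a 10 m3 tank -> horizontal on concrete supports, with 10% freeboard.
volume = 10.0
print(tank_support(volume), freeboard_fraction(volume))
```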
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Burger Rings** Burger Rings: Burger Rings are a type of corn-based, burger-flavoured Australian snack food distributed by The Smith's Snackfood Company, which, in turn, is owned by PepsiCo. History: Burger Rings were introduced in 1974. During the late 1990s the Burger Rings brand went through a brand overhaul, coinciding with the acquisition of The Smith's Snackfood Company by Lays. During the brand overhaul the appearance of the packet was changed to a more modernised look with bolder and sharper letters in the logo, adopting its past logo. Ingredients: Burger Rings are made out of a combination of corn and rice. A Smith's Chips representative confirmed Burger Rings are suitable for vegans. The ingredients for Burger Rings are as follows: cereals (corn, rice), vegetable oil, maltodextrin, rice bran, salt, sugar, hydrolysed vegetable protein (soy), flavour enhancer (621), food acids (sodium diacetate, citric acid), flavour, mineral salt (potassium chloride), yeast extracts, onion powder, tomato powder. It is also stated on the packaging "Contains Gluten", "Contains Milk or Milk Products", "Contains Soy Bean or Soy Bean Products", in contrast with the majority of other packaging, which states "may contain traces of...". This is confusing for vegans, as it implies one or more of the ingredients are derived from milk. Flavours: A Bacon Flavour variant was briefly offered in Australia. Marketing: A memorable Star Wars-themed advertisement for the product was aired on Australian television in the early 1980s. It featured a faux Luke Skywalker character on Tatooine. After exiting his Landspeeder, he is confronted by a large group of Jawas who ask for his Burger Rings. He begrudgingly shares them only to be left with a single Burger Ring. A Jawa swiftly grabs that last one and the ad ends. Marketing: A radio ad campaign in the 1980s joked that Burger Rings were possibly made of rubber tyres, concluding with the slogan "they taste good but!". Marketing: A 1989 ad aired on Australian television depicted a school chemistry experiment resulting in the creation of a single Burger Ring snack. The student who performed the experiment consumes the snack and seems to gain superpowers, developing jagged hair and a crazed look as the now-fluorescent Burger Ring bounces inside the boy's ribcage, made visible by a radiographic effect akin to X-ray imaging. This later turns out to be a daydream of the boy, who has fallen asleep in a chemistry class and continues to mix his chemicals in a sleepy haze. A 1992 ad featured a man at a bus stop who attempts to steal one of the snacks from another man's packet, only to have it growl like a dog and attack his arm, making him run away past a sign that says "WARNING - BURGER RINGS BITE". The owner then shares the packet with a woman on his other side. In popular culture: In 2014, a contestant on Australian quiz show Millionaire Hot Seat failed to identify "Burger ring" as the "gag answer" to the $100 question, "Which of these is not a piece of jewellery commonly worn to symbolise a relationship between two people?". The contestant instead incorrectly locked in "Anniversary ring". 
The contestant was invited back onto the set at the end of the program, where host Eddie McGuire presented her with a packet of Burger Rings as a consolation prize. In the 2016 comedy-drama film Hunt for the Wilderpeople, in a cameo appearance by the film's writer-director Taika Waititi, the character 'Minister' mentions Burger Rings twice in a mangled parable about Heaven: first as one of "the nummiest treats you can imagine", along with other snack food and beverage items such as Fanta, Doritos, Lemon & Paeroa and Coca-Cola Zero Sugar, and then as a designation of a door. International variants: Burger Rings are available in New Zealand under the same name, but distributed by Bluebird Foods. The New Zealand variant has a different packaging design and a similar slogan: "Full on burger flavour". They are available in 30g and 120g bags, and in 108g 6-pack multipacks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**TRIM63** TRIM63: E3 ubiquitin-protein ligase TRIM63, also known as "MuRF1" (Muscle Ring-Finger Protein-1), is an enzyme that in humans is encoded by the TRIM63 gene. This gene encodes a member of the RING zinc finger protein family found in striated muscle and iris. The product of this gene is localized to the Z-line and M-line lattices of myofibrils, where titin's N-terminal and C-terminal regions respectively bind to the sarcomere. In vitro binding studies have shown that this protein also binds directly to titin near the region of titin containing kinase activity. Another member of this protein family binds to microtubules. Since these family members can form heterodimers, this suggests that these proteins may serve as a link between titin kinase and microtubule-dependent signal pathways in muscle. The protein encoded by the Trim63 gene is also called MuRF1. MuRF1 is the name most commonly used in the literature, and it stands for "Muscle RING Finger 1." Structurally, there are two closely related MuRFs, MuRF2 and MuRF3. These also have TRIM codes: MuRF2 is TRIM55; MuRF3 is TRIM54. Interactions: Trim63/MuRF1 has been shown to be an E3 ubiquitin ligase. Its major substrate is Myosin Heavy Chain (MHC, or Myosin-2, or MYH2), meaning it induces the proteasome-mediated degradation of MHC by causing MHC to be ubiquitinated. MuRF1 is upregulated during skeletal muscle atrophy – and thus the degradation of myosin heavy chain, which is a major component of the sarcomere, is an important mechanism in the breakdown of skeletal muscle under atrophy conditions. MuRF1 has been shown to be upregulated during denervation, administration of glucocorticoids, immobilization, and casting (when a cast is applied to a limb in order to immobilize it). All of these settings cause skeletal muscle atrophy. Interactions: TRIM63/MuRF1 has been shown to interact with Titin, GMEB1 and SUMO2. Regulation during skeletal muscle atrophy: During settings of skeletal muscle atrophy, the levels of Trim63/MuRF1 mRNA increase, leading to breakdown of the sarcomere. This was found to be due to regulation of gene expression of Trim63/MuRF1 by the FOXO (or Forkhead) family of transcription factors (see also FOX proteins). Foxo1 or Foxo3 may regulate MuRF1. These factors are normally kept out of the nucleus by phosphorylation induced by a kinase called Akt. When Akt is inactivated, or less active, Foxo1 or Foxo3 can then translocate to the nucleus and induce expression of MuRF1. Clinical significance: Recently, it has been suggested that TRIM63/MuRF1 is associated with an autosomal-recessive form of hypertrophic cardiomyopathy (HCM). In that study, the authors describe individuals harboring homozygous or compound heterozygous rare variants in TRIM63/MuRF1 who show a distinctive HCM phenotype, characterized by concentric left ventricular (LV) hypertrophy (50% of patients) and a high rate of LV dysfunction (20%). This finding suggests that Myosin Heavy Chain levels may be dysregulated in the heart in the absence of MuRF1, leading to pathology. Clinical significance: Upregulation of MuRF1/Trim63 mRNA is regularly used as an indicator that active skeletal muscle atrophy is occurring.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sound-in-Syncs** Sound-in-Syncs: Sound-in-Syncs is a method of multiplexing sound and video signals into a channel designed to carry video, in which data representing the sound is inserted into the line synchronising pulse of an analogue television waveform. This is used on point-to-point links within broadcasting networks, including studio/transmitter links (STL). It is not used for broadcasts to the public. History: The technique was first developed by the BBC in the late 1960s. In 1966, the corporation's Research Department made a feasibility study of the use of pulse-code modulation (PCM) for transmitting television sound during the synchronising period of the video signal. This had several advantages: it removed the necessity for a separate sound link, reduced the possibility of operational errors and offered improved sound quality and reliability. History: Awards Sound-in-Syncs and its R&D engineers have won several awards, including the Royal Television Society's Geoffrey Parr Award in 1972, a Queen's Award for Enterprise in 1974 and, in 1999, a Technology & Engineering Emmy Award. Versions: Original mono S-i-S In the original system, as applied to 625-line analogue TV, the audio signal was sampled twice during each television line and each sample converted to 10-bit PCM. Two such samples were inserted into the next line synchronising pulse. At the destination, the audio samples were converted back to analogue form and the video waveform restored to normal. Compandors operating on the signal before encoding and after decoding enabled the required signal-to-noise ratio to be achieved. As the PCM noise was predominantly high-pitched, the compandor only needed to operate on the high frequencies. Also, the compandor only operated at high audio levels, so that modulation of the noise by the companding would be masked by the relatively loud high-frequency audio components. A pilot tone at half the sampling frequency was transmitted to enable the expander to track the gain adjustment applied by the compressor, even when the latter was limiting. Following successful trials with the BBC, in 1971 Pye TVT started to make and sell the S-i-S equipment under licence. The largest quantities went to the BBC itself, to the EBU and to Canada. Smaller numbers went to other countries including South Africa, Australia and Japan. Versions: Ruggedised S-i-S A ruggedised version of the system was developed, which provided about 7 kHz audio bandwidth, for use over noisy or difficult microwave paths, such as those often encountered for outside broadcasts. Stereo S-i-S Later systems, developed in the 1980s, used 14-bit linear PCM samples, digitally companded into 10-bit samples by means of NICAM-3 lossy compression. These were capable of carrying two audio channels and were known as stereo Sound-in-Syncs. ITV S-i-S The ITV network used coders and decoders produced by RE of Denmark. The two variations of Sound-in-Syncs used by the BBC and ITV were not compatible. The terms DCSIS and DSIS were commonly used in ITV to describe dual-channel Sound-in-Syncs. Very often the material carried was not stereo but dual mono.
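To get a feel for what the original mono system carried, the arithmetic below works out the effective audio sampling rate and raw bit rate implied by two 10-bit samples per television line. The 15,625 Hz line frequency (625 lines at 25 frames per second) is a figure assumed here for illustration; it is not stated in the text above.

```python
# Rough capacity arithmetic for the original mono Sound-in-Syncs system (sketch only).
line_rate_hz = 625 * 25          # 15,625 television lines per second in a 625-line, 25-frame system
samples_per_line = 2             # two audio samples taken per line
bits_per_sample = 10             # each sample coded as 10-bit PCM

audio_sample_rate = line_rate_hz * samples_per_line   # 31,250 samples per second
raw_bit_rate = audio_sample_rate * bits_per_sample    # 312,500 bits per second

print(f"audio sampling rate: {audio_sample_rate} Hz")   # about 31 kHz
print(f"raw PCM bit rate:    {raw_bit_rate} bit/s")     # about 313 kbit/s
```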
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Profadol** Profadol: Profadol (CI-572) is an opioid analgesic which was developed in the 1960s by Parke-Davis. It acts as a mixed agonist-antagonist of the μ-opioid receptor. Its analgesic potency is about the same as that of pethidine (meperidine); its antagonistic effect is 1/50 that of nalorphine. Synthesis: The Knoevenagel condensation between 3'-methoxybutyrophenone [21550-06-1] and ethyl cyanoacetate gives (1). Conjugate addition of cyanide gives (2). Hydrolysis of both nitrile groups, saponification of the ester and decarboxylation give the diacid, CID:164137621 (3). Imide formation occurs upon treatment with methylamine, giving 3-(3-methoxyphenyl)-1-methyl-3-propylpyrrolidine-2,5-dione, CID:163444474 (4). Reduction of the imide by lithium aluminium hydride gives [1505-32-4][29369-01-5] (5). Demethylation completes the synthesis of Profadol (6).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Local Management Interface** Local Management Interface: Local Management Interface (LMI) is a term for some signaling standards used in networks, namely Frame Relay and Carrier Ethernet. Frame Relay: LMI is a set of signalling standards between routers and Frame Relay switches. Communication takes place between a router and the first Frame Relay switch to which it is connected. Information about keepalives, global addressing, IP multicast and the status of virtual circuits is commonly exchanged using LMI. There are three standards for LMI. Two use DLCI 0: ANSI's T1.617 Annex D standard and ITU-T's Q.933 Annex A standard. The third uses DLCI 1023: the "Gang of Four" standard, developed by Cisco, DEC, StrataCom and Nortel. Carrier Ethernet: Ethernet Local Management Interface (E-LMI) is an Ethernet layer operation, administration, and management (OAM) protocol defined by the Metro Ethernet Forum (MEF) for Carrier Ethernet networks. It provides information that enables auto-configuration of customer edge (CE) devices.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mukaiyama Taxol total synthesis** Mukaiyama Taxol total synthesis: The Mukaiyama taxol total synthesis published by the group of Teruaki Mukaiyama of the Tokyo University of Science between 1997 and 1999 was the 6th successful taxol total synthesis. The total synthesis of Taxol is considered a hallmark in organic synthesis. Mukaiyama Taxol total synthesis: This version is a linear synthesis with ring formation taking place in the order C, B, A, D. Contrary to the other published methods, the tail synthesis is by an original design. Teruaki Mukaiyama is an expert on aldol reactions and not surprisingly his Taxol version contains no less than 5 of these reactions. Other key reactions encountered in this synthesis are a pinacol coupling and a Reformatskii reaction. In terms of raw materials the C20 framework is built up from L-serine (C3), isobutyric acid (C4), glycolic acid (C2), methyl bromide (C1), methyl iodide (C1), 2,3-dibromopropene (C3), acetic acid (C2) and homoallyl bromide (C4). Synthesis C ring: The lower rim of the cyclooctane B ring containing the first 5 carbon atoms was synthesized in a semisynthesis starting from naturally occurring L-serine (scheme 1). This route started with conversion of the amino group of the serine methyl ester (1) to the diol ester 2 via diazotization (sodium nitrite/sulfuric acid). After protection of the primary alcohol group to a (t-butyldimethyl) TBS silyl ether (TBSCl / imidazole) and that of the secondary alcohol group with a (Bn) benzyl ether (benzyl imidate, triflic acid), the aldehyde 3 was reacted with the methyl ester of isobutyric acid (4) in an Aldol addition to alcohol 5 with 65% stereoselectivity. This group was protected as a PMB (p-methoxybenzyl) ether (again through an imidate) in 6 which enabled organic reduction of the ester to the aldehyde in 7 with DIBAL. Synthesis C ring: Completing the cyclooctane ring required 3 more carbon atoms that were supplied by a C2 fragment in an aldol addition and a Grignard C1 fragment (scheme 2). A Mukaiyama aldol addition (magnesium bromide / toluene) took place between aldehyde 7 and ketene silyl acetal 8 with 71% stereoselectivity to alcohol 9 which was protected as the TBS ether 10 (TBSOTf, 2,6-lutidine). The ester group was reduced with DIBAL to an alcohol and then back oxidized to aldehyde 11 by Swern oxidation. Alkylation by methyl magnesium bromide to alcohol 12 and another Swern oxidation gave ketone 13. This group was converted to the silyl enol ether 14 (LHMDS, TMSCl) enabling it to react with NBS to alkyl bromide 15. The C20 methyl group was introduced as methyl iodide in a nucleophilic substitution with a strong base (LHMDS in HMPA) to bromide 16. Then in preparation to ring-closure the TBS ether was deprotected (HCl/THF) to an alcohol which was converted to the aldehyde 17 in a Swern oxidation. The ring-closing reaction was a Reformatskii reaction with Samarium(II) iodide and acetic acid to acetate 18. The stereochemistry of this particular step was of no consequence because the acetate group is dehydrated to the alkene 19 with DBU in benzene. Synthesis B ring: The C5 fragment 24 required for the synthesis of the C ring (scheme 3) was prepared from 2,3-dibromopropene (20) by reaction with ethyl acetate (21), n-butyllithium and a copper salt, followed by organic reduction of acetate 22 to alcohol 23 (lithium aluminium hydride) and its TES silylation. Michael addition of 24 with the cyclooctane 19 to 25 with t-BuLi was catalyzed by copper cyanide. 
After removal of the TES group (HCl, THF), the alcohol 26 was oxidized to aldehyde 27 (TPAP, NMO), which enabled the intramolecular Aldol reaction to bicycle 28. Synthesis A ring: Ring A synthesis (scheme 4) started with reduction of the C9 ketone group in 28 to diol 29 with alane in toluene, followed by diol protection in 30 as a dimethyl carbonate. This allowed selective oxidation of the C1 alcohol with DDQ after deprotection to ketone 31. This compound was alkylated to 32 at the C1 ketone group with the Grignard homoallyl magnesium bromide (C4 fragment completing the carbon framework) and deprotected at C11 (TBAF) to diol 33. By reaction with cyclohexylmethylsilyldichloride both alcohol groups participated in a cyclic silyl ether (34), which was again cleaved by reaction with methyl lithium, exposing the C11 alcohol in 35. The A ring closure required two ketone groups for a pinacol coupling, which were realized by oxidation of the C11 alcohol (TPAP, NMO) to ketone 36 and Wacker oxidation of the allyl group to diketone 37. After formation of the pinacol product 38, the benzyl groups (sodium, ammonia) and the trialkylsilyl groups (TBAF) were removed to form pentaol 39. Synthesis A ring: The pentaol 39 was protected twice: the two bottom hydroxyl groups as a carbonate ester (bis(trichloromethyl)carbonate, pyridine) and the C10 hydroxyl group as the acetate, forming 40. The acetonide group was removed (HCl, THF), the C7 hydroxyl group protected as a TES silyl ether and the C11 OH group oxidized (TPAP, NMO) to ketone 41. The ring A diol group was next removed in a combined elimination reaction and Barton deoxygenation with 1,1'-thiocarbonyldiimidazole, forming alkene 42. Finally, the C15 hydroxyl group was introduced in two steps: oxidation at the allylic position with PCC and sodium acetate (to the enone), followed by reduction with K-selectride to alcohol 43, which was protected as a TES ether in 44. Synthesis D ring: The synthesis of the D ring (scheme 6) started from 44 with allylic bromination with copper(I) bromide and benzoyl tert-butyl peroxide to bromide 45. By adding even more bromide, another bromide 46 formed (both compounds are in chemical equilibrium) with the bromine atom in an axial position. Osmium tetroxide added two hydroxyl groups to the exocyclic double bond in diol 47, and oxetane ring-closure to 48 took place with DBU in a nucleophilic substitution. Then, acylation of the C4 hydroxyl group (acetic anhydride, DMAP, pyridine) resulted in acetate 49. In the final steps phenyllithium opened the ester group to form hydroxy carbonate 50, both TES groups were removed (HF, pyridine) to triol 51 (baccatin III) and the C7 hydroxyl group was back-protected to 52. Tail synthesis: The amide tail synthesis (scheme 7) was based on an asymmetric Aldol reaction. The starting compound is the commercially available benzyloxyacetic acid 53, which was converted to the thioester 55 (ethanethiol) through the acid chloride 54 (thionyl chloride, pyridine). This formed the silyl enol ether 55 (n-butyllithium, trimethylsilyl chloride, diisopropylamine), which reacted with chiral amine catalyst 58, tin triflate and nBu2(OAc)2 in a Mukaiyama aldol addition with benzaldehyde to alcohol 59 with 99% anti selectivity and 96% ee. The next step, converting the alcohol group to an amine in 60, was a Mitsunobu reaction (hydrogen azide, diethyl azodicarboxylate, triphenylphosphine, with azide reduction to the amine by Ph3P). The amine group was benzoylated with benzoyl chloride (61) and hydrolysis removed the thioester group in 62. 
Tail addition: In the final synthetic steps (scheme 8) the amide tail 62 was added to ABCD ring 52 in an esterification catalysed by o,o'-di(2-pyridyl) thiocarbonate (DPTC) and DMAP forming ester 63. The Bn protecting group was removed by hydrogenation using palladium hydroxide on carbon (64) and finally the TES group was removed by HF and pyridine to yield Taxol 65.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Magical thinking** Magical thinking: Magical thinking, or superstitious thinking, is the belief that unrelated events are causally connected despite the absence of any plausible causal link between them, particularly as a result of supernatural effects. Examples include the idea that personal thoughts can influence the external world without acting on them, or that objects must be causally connected if they resemble each other or have come into contact with each other in the past. Magical thinking is a type of fallacious thinking and is a common source of invalid causal inferences. Unlike the confusion of correlation with causation, magical thinking does not require the events to be correlated.The precise definition of magical thinking may vary subtly when used by different theorists or among different fields of study. In anthropology, the posited causality is between religious ritual, prayer, sacrifice, or the observance of a taboo, and an expected benefit or recompense. Magical thinking: In psychology, magical thinking is the belief that one's thoughts by themselves can bring about effects in the world or that thinking something corresponds with doing it. These beliefs can cause a person to experience an irrational fear of performing certain acts or having certain thoughts because of an assumed correlation between doing so and threatening calamities.In psychiatry, magical thinking defines false beliefs about the capability of thoughts, actions or words to cause or prevent undesirable events. It is a commonly observed symptom in thought disorder, schizotypal personality disorder and obsessive-compulsive disorder. Types: Direct effect Bronisław Malinowski's Magic, Science and Religion (1954) discusses another type of magical thinking, in which words and sounds are thought to have the ability to directly affect the world. This type of wish fulfillment thinking can result in the avoidance of talking about certain subjects ("speak of the devil and he'll appear"), the use of euphemisms instead of certain words, or the belief that to know the "true name" of something gives one power over it, or that certain chants, prayers, or mystical phrases will bring about physical changes in the world. More generally, it is magical thinking to take a symbol to be its referent or an analogy to represent an identity. Types: Sigmund Freud believed that magical thinking was produced by cognitive developmental factors. He described practitioners of magic as projecting their mental states onto the world around them, similar to a common phase in child development. From toddlerhood to early school age, children will often link the outside world with their internal consciousness, e.g. "It is raining because I am sad." Symbolic approaches Another theory of magical thinking is the symbolic approach. Leading thinkers of this category, including Stanley J. Tambiah, believe that magic is meant to be expressive, rather than instrumental. As opposed to the direct, mimetic thinking of Frazer, Tambiah asserts that magic utilizes abstract analogies to express a desired state, along the lines of metonymy or metaphor.An important question raised by this interpretation is how mere symbols could exert material effects. One possible answer lies in John L. Austin's concept of performativity, in which the act of saying something makes it true, such as in an inaugural or marital rite. Other theories propose that magic is effective because symbols are able to affect internal psycho-physical states. 
They claim that the act of expressing a certain anxiety or desire can be reparative in itself. Causes: According to theories of anxiety relief and control, people turn to magical beliefs when there exists a sense of uncertainty and potential danger, and with little access to logical or scientific responses to such danger. Magic is used to restore a sense of control over circumstance. In support of this theory, research indicates that superstitious behavior is invoked more often in high stress situations, especially by people with a greater desire for control.Another potential reason for the persistence of magic rituals is that the rituals prompt their own use by creating a feeling of insecurity and then proposing themselves as precautions. Boyer and Liénard propose that in obsessive-compulsive rituals — a possible clinical model for certain forms of magical thinking — focus shifts to the lowest level of gestures, resulting in goal demotion. For example, an obsessive-compulsive cleaning ritual may overemphasize the order, direction, and number of wipes used to clean the surface. The goal becomes less important than the actions used to achieve the goal, with the implication that magic rituals can persist without efficacy because the intent is lost within the act. Alternatively, some cases of harmless "rituals" may have positive effects in bolstering intent, as may be the case with certain pre-game exercises in sports.Some scholars believe that magic is effective psychologically. They cite the placebo effect and psychosomatic disease as prime examples of how our mental functions exert power over our bodies. Similarly, Robin Horton suggests that engaging in magical practices surrounding healing can relieve anxiety, which could have a significant positive physical effect. In the absence of advanced health care, such effects would play a relatively major role, thereby helping to explain the persistence and popularity of such practices. Causes: Phenomenological approach Ariel Glucklich tries to understand magic from a subjective perspective, attempting to comprehend magic on a phenomenological, experientially based level. Glucklich seeks to describe the attitude that magical practitioners feel what he calls "magical consciousness" or the "magical experience". He explains that it is based upon "the awareness of the interrelatedness of all things in the world by means of simple but refined sense perception."Another phenomenological model is that of Gilbert Lewis, who argues that "habit is unthinking". He believes that those practicing magic do not think of an explanatory theory behind their actions any more than the average person tries to grasp the pharmaceutical workings of aspirin. When the average person takes an aspirin, he does not know how the medicine chemically functions. He takes the pill with the premise that there is proof of efficacy. Similarly, many who avail themselves of magic do so without feeling the need to understand a causal theory behind it. Social: Anthropology In religion, folk religion, and superstitious beliefs, the posited causality is between religious ritual, prayer, meditation, trances, sacrifice, incantation, curses, benediction, faith healing, or the observance of a taboo, and an expected benefit or recompense. 
The use of a lucky charm or ritual, for example, is assumed to increase the probability that one will perform at a level that allows one to achieve a desired goal or outcome. Researchers have identified two possible principles as the formal causes of the attribution of false causal relationships: the temporal contiguity of two events, and "associative thinking", the association of entities based upon their resemblance to one another. Prominent Victorian theorists identified associative thinking (a common feature of practitioners of magic) as a characteristic form of irrationality. As with all forms of magical thinking, association-based and similarities-based notions of causality are not always said to be the practice of magic by a magician. For example, the doctrine of signatures held that similarities between plant parts and body parts indicated their efficacy in treating diseases of those body parts, and was a part of Western medicine during the Middle Ages. This association-based thinking is a vivid example of the general human application of the representativeness heuristic. Edward Burnett Tylor coined the term "associative thinking", characterizing it as pre-logical, in which the "magician's folly" is in mistaking an imagined connection for a real one. The magician believes that thematically linked items can influence one another by virtue of their similarity. For example, in E. E. Evans-Pritchard's account, members of the Azande tribe believe that rubbing crocodile teeth on banana plants can invoke a fruitful crop. Because crocodile teeth are curved (like bananas) and grow back if they fall out, the Azande observe this similarity and want to impart this capacity of regeneration to their bananas. To them, the rubbing constitutes a means of transference. 
This outlook has generated alternative theories of magical thinking, such as the symbolic and psychological approaches, and softened the contrast between "educated" and "primitive" thinking: "Magical thinking is no less characteristic of our own mundane intellectual activity than it is of Zande curing practices." Cultural differences Robin Horton maintains that the difference between the thinking of Western and of non-Western peoples is predominantly "idiomatic". He says that the members of both cultures use the same practical common-sense, and that both science and magic are ways beyond basic logic by which people formulate theories to explain whatever occurs. However, non-Western cultures use the idiom of magic and have community spiritual figures, and therefore non-Westerners turn to magical practices or to a specialist in that idiom. Horton sees the same logic and common-sense in all cultures, but notes that their contrasting ontological idioms lead to cultural practices which seem illogical to observers whose own culture has correspondingly contrasting norms. He explains, "[T]he layman's grounds for accepting the models propounded by the scientist are often no different from the young African villager's ground for accepting the models propounded by one of his elders."Along similar lines, Michael F. Brown argues that the Aguaruna of Peru see magic as a type of technology, no more supernatural than their physical tools. Brown says that the Aguaruna utilize magic in an empirical manner; for example, they discard any magical stones which they have found to be ineffective. To Brown—as to Horton—magical and scientific thinking differ merely in idiom. These theories blur the boundaries between magic, science, and religion, and focus on the similarities in magical, technical, and spiritual practices. Brown even ironically writes that he is tempted to disclaim the existence of 'magic.'One theory of substantive difference is that of the open versus closed society. Horton describes this as one of the key dissimilarities between traditional thought and Western science. He suggests that the scientific worldview is distinguished from a magical one by the scientific method and by skepticism, requiring the falsifiability of any scientific hypothesis. He notes that for native peoples "there is no developed awareness of alternatives to the established body of theoretical texts." He notes that all further differences between traditional and Western thought can be understood as a result of this factor. He says that because there are no alternatives in societies based on magical thought, a theory does not need to be objectively judged to be valid. In children: According to Jean Piaget's Theory of Cognitive Development, magical thinking is most prominent in children between ages 2 and 7. Due to examinations of grieving children, it is said that during this age, children strongly believe that their personal thoughts have a direct effect on the rest of the world. It is posited that their minds will create a reason to feel responsible if they experience something tragic that they do not understand, e.g. a death. Jean Piaget, a developmental psychologist, came up with a theory of four developmental stages. Children between ages 2 and 7 would be classified under his preoperational stage of development. During this stage children are still developing their use of logical thinking. 
A child's thinking is dominated by perceptions of physical features, meaning that if the child is told that a family pet has "gone away to a farm" when it has in fact died, then the child will have difficulty comprehending the transformation of the dog not being around anymore. Magical thinking would be evident here, since the child may believe that the family pet being gone is just temporary. Their young minds in this stage do not understand the finality of death and magical thinking may bridge the gap. In children: Grief It was discovered that children often feel that they are responsible for an event or events occurring or are capable of reversing an event simply by thinking about it and wishing for a change: namely, "magical thinking". Make-believe and fantasy are an integral part of life at this age and are often used to explain the inexplicable.According to Piaget, children within this age group are often "egocentric", believing that what they feel and experience is the same as everyone else's feelings and experiences. Also at this age, there is often a lack of ability to understand that there may be other explanations for events outside of the realm of things they have already comprehended. What happens outside their understanding needs to be explained using what they already know, because of an inability to fully comprehend abstract concepts.Magical thinking is found particularly in children's explanations of experiences about death, whether the death of a family member or pet, or their own illness or impending death. These experiences are often new for a young child, who at that point has no experience to give understanding of the ramifications of the event. A child may feel that they are responsible for what has happened, simply because they were upset with the person who died, or perhaps played with the pet too roughly. There may also be the idea that if the child wishes it hard enough, or performs just the right act, the person or pet may choose to come back, and not be dead any longer.When considering their own illness or impending death, some children may feel that they are being punished for doing something wrong, or not doing something they should have, and therefore have become ill. If a child's ideas about an event are incorrect because of their magical thinking, there is a possibility that the conclusions the child makes could result in long-term beliefs and behaviours that create difficulty for the child as they mature. Related terms: "Quasi-magical thinking" describes "cases in which people act as if they erroneously believe that their action influences the outcome, even though they do not really hold that belief". People may realize that a superstitious intuition is logically false, but act as if it were true because they do not exert an effort to correct the intuition.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Write once read many** Write once read many: Write once read many (WORM) describes a data storage device in which information, once written, cannot be modified. This write protection affords the assurance that the data cannot be tampered with once it is written to the device, excluding the possibility of data loss from human error, computer bugs, or malware. Write once read many: On ordinary (non-WORM) data storage devices, the number of times data can be modified is limited only by the lifespan of the device, as modification involves physical changes that may cause wear to the device. The "read many" aspect is unremarkable, as modern storage devices permit unlimited reading of data once written. WORM protects important files by keeping them safe and intact: it eliminates the risk of important data being deleted or modified, and in this way helps to preserve the authenticity and integrity of recorded data. History: WORM drives preceded the invention of the CD-R, DVD-R and BD-R. An example was the IBM 3363. These drives typically used a 12 in (30 cm) disk in a cartridge, with an ablative optical layer that could be written to only once, and were often used in places like libraries that needed to store large amounts of data. Interfaces to connect these to PCs also existed. History: Punched cards and paper tape are obsolete WORM media. Although any unpunched area of the medium could be punched after the first write of the medium, doing so was virtually never useful. Read-only memory (ROM) is also a WORM medium. Such memory may contain the instructions for a computer to read the operating system from another storage device such as a hard disk. The non-technical end-user, however, cannot write the ROM even once but considers it part of the unchangeable computing platform. History: WORM was utilized for broker-dealer records within the Financial Industry Regulatory Authority and the U.S. Securities and Exchange Commission. Current WORM drives: The CD-R, DVD-R and BD-R optical discs for computers are common WORM devices. On these discs, no region of the disc can be recorded a second time. Through packet writing, which uses the Universal Disk Format (UDF) file system, these discs often use a file system that permits additional files, and even revised versions of a file by the same name, to be recorded in a different region of the disc. To the user, the disc appears to allow additions and revisions until all the disk space is used. Current WORM drives: The SD card and microSD card spec allows for multiple forms of write-protection. The most common form, only available when using a full-size SD card, provides a physical write protection switch which allows the user to advise the host card reader to disallow write access. This does not protect the data on the card if the card reader hardware is not built to respect the write protection switch. Multiple vendors beginning in the early 2000s developed magnetic WORM devices. These archival-grade storage devices utilize a variation of RAID and magnetic storage technologies to secure data from unauthorized alteration or modification at both the hardware and software levels. As the cost of magnetic (and solid-state) storage has decreased, so has the cost for these archival storage technologies. 
These technologies are almost always integrated directly into a content/document management system that manages retention schedules and access controls, along with document-level history. There are multiple vendors providing magnetic WORM storage technologies, including NetApp, EMC Centera, KOM Networks, and others. In 2013, GreenTec-USA, Inc. developed WORM hard disk drives in capacities of 3 TB and greater. Prevention of rewrite is done at the physical disk level and cannot be modified or overridden by the attached computer. Research: In recent years there has been a renewed interest in WORM based on organic components, such as PEDOT:PSS or other polymers such as PVK or PCz. Organic WORM devices, considered organic memory, could be used as memory elements for low-power RFID tags.
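Write-once semantics can be pictured at the application level as an append-only store that refuses to overwrite anything already written. The sketch below is only an analogy for WORM behaviour in ordinary Python, not an interface to any of the devices or products named above; the class and file names are invented for this example.

```python
import os

class WriteOnceStore:
    """Toy append-only store illustrating write-once, read-many semantics (analogy only)."""

    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def write(self, name, data: bytes):
        path = os.path.join(self.directory, name)
        # Open in exclusive-create mode: this fails if the record already exists,
        # so a record can never be modified once written.
        with open(path, "xb") as f:
            f.write(data)

    def read(self, name) -> bytes:
        with open(os.path.join(self.directory, name), "rb") as f:
            return f.read()

store = WriteOnceStore("worm_demo")
store.write("record-0001", b"original contents")
try:
    store.write("record-0001", b"attempted revision")   # raises FileExistsError
except FileExistsError:
    print("record-0001 is write-once; store the revision under a new name instead")
```

Opening the file in exclusive-create ('x') mode is what enforces the write-once rule here: a second write to the same name fails, so a revision has to be stored as a new record, much as packet writing records a revised file in a fresh region of a CD-R.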
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Y alloy** Y alloy: Y alloy is a nickel-containing aluminium alloy. It was developed by the British National Physical Laboratory during World War I, in an attempt to find an aluminium alloy that would retain its strength at high temperatures.Duralumin, an aluminium alloy containing 4% copper was already known at this time. Its strength, and its previously unknown age hardening behaviour had made it a popular choice for zeppelins. Aircraft of the period were largely constructed of wood, but there was a need for an aluminium alloy suitable for making engines, particularly pistons, that would have the strength of duralumin but could retain this when in service at high temperatures for long periods. Y alloy: The National Physical Laboratory began a series of experiments to study new aluminium alloys. Experimental series "Y" was successful, and gave its name to the new alloy. Like duralumin, this was a 4% copper alloy, but with the addition of 2% nickel and 1.5% magnesium. This addition of nickel was an innovation for aluminium alloys. These alloys are one of the three main groups of high-strength aluminium alloys, the nickel–aluminium alloys having the advantage of retaining strength at high temperatures. Y alloy: The alloy was first used in the cast form, but was soon used for forging as well. One of the most pressing needs was to develop reliable pistons for aircraft engines. The first experts at forging this alloy were Peter Hooker Limited of Walthamstow, who were better known as The British Gnôme and Le Rhône Engine Co. They license-built the Gnome engine and fitted it with pistons of Y alloy, rather than their previous cast iron. These pistons were highly successful, although impressions of the alloy as a panacea suitable for all applications were less successful; a Gnôme cylinder in Y alloy failed on its first revolution. Frank Halford used connecting rods of this alloy for his de Havilland Gipsy engine, but these other uses failed to impress Rod Banks.Air Ministry Specification D.T.D 58A of April 1927 specified the composition and heat treatment of wrought Y alloy. The alloy became extremely important for pistons, and for engine components in general, but was little used for structural members of airframes.In the late 1920s, further research on nickel-aluminium alloys gave rise to the successful Hiduminium or "R.R. alloys", developed by Rolls-Royce. Alloy composition: Heat treatment As for many of the aluminium alloys, Y alloy age hardens spontaneously at normal temperatures after solution heat treating. The heat treatment is to heat it to 500 to 520 °C (932 to 968 °F) for 6 hours, then to allow it to age naturally for 7–10 days. The precipitation hardening that takes place during this ageing forms precipitates of both CuAl2 and NiAl3.The times required depend on the grain structure of the alloy. Forged parts have the coarsest eutectic masses and so take the longest times. When cast, chill casting is favoured over sand casting as this gives a finer structure that is more amenable to heat treatment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Maturity (sedimentology)** Maturity (sedimentology): In sedimentary geology, maturity describes the composition and texture of grains in clastic rocks, most typically sandstones, resulting from different amounts of sediment transportation. A sediment is mature when its grains become well-sorted and well-rounded due to weathering or abrasion during transport. There are two components to maturity: texture and composition. Texture describes how rounded and sorted the grains are, while composition describes how far the mineral assemblage trends toward stable minerals and components (often quartz). A mature sediment is more uniform in appearance, for the sediment grains are well rounded, are of a similar size and exhibit little compositional variation. Conversely, an immature sediment contains more angular grains, diverse grain sizes, and is compositionally diverse. As the sediment is transported, the unstable minerals are abraded or dissolved to leave more stable minerals, such as quartz. Mature sediments, which contain stable minerals, generally have a smaller variety of minerals than immature sediments, which can contain both stable and unstable minerals. One measure of this maturity is the ZTR index, which is a measure of the common resistant minerals found in ultra-weathered sediments: zircon, tourmaline, and rutile. Maturity (sedimentology): A sediment sample from the lower (downstream) portions of a stream is likely to be more mature than one found upstream, since the original sediment has been subject to more abrasion as it travels downstream.
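The ZTR index mentioned above is typically reported as the combined percentage of zircon, tourmaline and rutile grains among the transparent heavy-mineral grains counted in a sample; that counting convention, and the grain counts in the example, are assumptions added here purely for illustration.

```python
def ztr_index(grain_counts):
    """Combined percentage of zircon, tourmaline and rutile among counted heavy-mineral grains."""
    ztr = sum(grain_counts.get(mineral, 0) for mineral in ("zircon", "tourmaline", "rutile"))
    total = sum(grain_counts.values())
    return 100.0 * ztr / total

# Hypothetical counts of transparent heavy-mineral grains; a high ZTR index
# suggests a compositionally mature, strongly weathered or far-travelled sediment.
sample = {"zircon": 42, "tourmaline": 25, "rutile": 13, "garnet": 12, "amphibole": 8}
print(f"ZTR index: {ztr_index(sample):.1f}%")   # 80.0%
```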
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ulrich K. Laemmli** Ulrich K. Laemmli: Ulrich K. Laemmli, real name Lämmli, is a Professor in the biochemistry and molecular biology departments at University of Geneva. He is known for the refinement of SDS-PAGE, a widely used method for separating proteins based on their electrophoretic mobility. His paper describing the method is among the most cited scholarly journal articles of all time. His current research involves studying the structural organization of nuclei and chromatin within the cell. Major scientific contributions: Although electrophoresis was used to separate proteins before Laemmli's work, he made significant improvements to the method. The term "Laemmli buffer" is often used to describe an SDS-containing buffer that is used to prepare (denature) samples for SDS-PAGE. Awards and honors: Louis-Jeantet Prize for Medicine – 1996 Elected fellow of the American Association for the Advancement of Science – 2006
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tyrosemiophilia** Tyrosemiophilia: Tyrosemiophilia is the hobby of collecting cheese labels. As of May 2019, the world's largest collection encompasses 250,655 label designs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Beam diameter** Beam diameter: The beam diameter or beam width of an electromagnetic beam is the diameter along any specified line that is perpendicular to the beam axis and intersects it. Since beams typically do not have sharp edges, the diameter can be defined in many different ways. Five definitions of the beam width are in common use: D4σ, 10/90 or 20/80 knife-edge, 1/e2, FWHM, and D86. The beam width can be measured in units of length at a particular plane perpendicular to the beam axis, but it can also refer to the angular width, which is the angle subtended by the beam at the source. The angular width is also called the beam divergence. Beam diameter: Beam diameter is usually used to characterize electromagnetic beams in the optical regime, and occasionally in the microwave regime, that is, cases in which the aperture from which the beam emerges is very large with respect to the wavelength. Beam diameter: Beam diameter usually refers to a beam of circular cross section, but not necessarily so. A beam may, for example, have an elliptical cross section, in which case the orientation of the beam diameter must be specified, for example with respect to the major or minor axis of the elliptical cross section. The term "beam width" may be preferred in applications where the beam does not have circular symmetry. Definitions: Rayleigh beamwidth The angle between the maximum peak of radiated power and the first null (no power radiated in this direction) is called the Rayleigh beamwidth. Definitions: Full width at half maximum The simplest way to define the width of a beam is to choose two diametrically opposite points at which the irradiance is a specified fraction of the beam's peak irradiance, and take the distance between them as a measure of the beam's width. An obvious choice for this fraction is ½ (−3 dB), in which case the diameter obtained is the full width of the beam at half its maximum intensity (FWHM). This is also called the half-power beam width (HPBW). Definitions: 1/e2 width The 1/e2 width is equal to the distance between the two points on the marginal distribution that are 1/e2 = 0.135 times the maximum value; if there are more than two such points, the two points closest to the maximum are chosen. The 1/e2 width is important in the mathematics of Gaussian beams, in which the intensity profile is described by $I(r) = I_0 \exp(-2r^2/w^2)$. The American National Standard Z136.1-2007 for Safe Use of Lasers (p. 6) defines the beam diameter as the distance between diametrically opposed points in that cross-section of a beam where the power per unit area is 1/e (0.368) times that of the peak power per unit area. This is the beam diameter definition that is used for computing the maximum permissible exposure to a laser beam. In addition, the Federal Aviation Administration also uses the 1/e definition for laser safety calculations in FAA Order JO 7400.2, Para. 29-1-5d. Measurements of the 1/e2 width depend only on three points on the marginal distribution, unlike D4σ and knife-edge widths that depend on the integral of the marginal distribution. 1/e2 width measurements are noisier than D4σ width measurements. For multimodal marginal distributions (a beam profile with multiple peaks), the 1/e2 width usually does not yield a meaningful value and can grossly underestimate the inherent width of the beam. 
For multimodal distributions, the D4σ width is a better choice. For an ideal single-mode Gaussian beam, the D4σ, D86 and 1/e2 width measurements would give the same value. Definitions: For a Gaussian beam, the relationship between the 1/e2 width and the full width at half maximum is $2w = \sqrt{2/\ln 2} \times \mathrm{FWHM} \approx 1.699 \times \mathrm{FWHM}$, where 2w is the full width of the beam at 1/e2. Definitions: D4σ or second-moment width The D4σ width of a beam in the horizontal or vertical direction is 4 times σ, where σ is the standard deviation of the horizontal or vertical marginal distribution respectively. Mathematically, the D4σ beam width in the x dimension for the beam profile I(x,y) is expressed as $D4\sigma = 4\sigma = 4\sqrt{\dfrac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I(x,y)\,(x-\bar{x})^2\,dx\,dy}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I(x,y)\,dx\,dy}}$, where $\bar{x} = \dfrac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I(x,y)\,x\,dx\,dy}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I(x,y)\,dx\,dy}$ is the centroid of the beam profile in the x direction. Definitions: When a beam is measured with a laser beam profiler, the wings of the beam profile influence the D4σ value more than the center of the profile, since the wings are weighted by the square of their distance, $x^2$, from the center of the beam. If the beam does not fill more than a third of the beam profiler's sensor area, then there will be a significant number of pixels at the edges of the sensor that register a small baseline value (the background value). If the baseline value is large or if it is not subtracted out of the image, then the computed D4σ value will be larger than the actual value because the baseline value near the edges of the sensor is weighted in the D4σ integral by $x^2$. Therefore, baseline subtraction is necessary for accurate D4σ measurements. The baseline is easily measured by recording the average value for each pixel when the sensor is not illuminated. The D4σ width, unlike the FWHM and 1/e2 widths, is meaningful for multimodal marginal distributions — that is, beam profiles with multiple peaks — but requires careful subtraction of the baseline for accurate results. The D4σ is the ISO international standard definition for beam width. Definitions: Knife-edge width Before the advent of the CCD beam profiler, the beam width was estimated using the knife-edge technique: slice a laser beam with a razor and measure the power of the clipped beam as a function of the razor position. The measured curve is the integral of the marginal distribution, and starts at the total beam power and decreases monotonically to zero power. The width of the beam is defined as the distance between the points of the measured curve that are 10% and 90% (or 20% and 80%) of the maximum value. If the baseline value is small or subtracted out, the knife-edge beam width always corresponds to 60%, in the case of 20/80, or 80%, in the case of 10/90, of the total beam power no matter what the beam profile. On the other hand, the D4σ, 1/e2, and FWHM widths encompass fractions of power that are beam-shape dependent. Therefore, the 10/90 or 20/80 knife-edge width is a useful metric when the user wishes to be sure that the width encompasses a fixed fraction of total beam power. Most CCD beam profilers' software can compute the knife-edge width numerically. Definitions: Fusing knife-edge method with imaging The main drawback of the knife-edge technique is that it yields the beam width only along the scanning direction, limiting the amount of relevant beam information. 
To overcome this drawback, a commercially available technology allows the beam to be scanned in multiple directions to create an image-like beam representation. By mechanically moving the knife edge across the beam, the amount of energy impinging on the detector area is determined by the obstruction. The profile is then measured from the knife-edge velocity and its relation to the detector's energy reading. Unlike other systems, a unique scanning technique uses several differently oriented knife-edges to sweep across the beam. By using tomographic reconstruction, mathematical processes reconstruct the laser beam size in different orientations to an image similar to the one produced by CCD cameras. The main advantage of this scanning method is that it is free from pixel size limitations (as in CCD cameras) and allows beam reconstructions at wavelengths not usable with existing CCD technology. Reconstruction is possible for beams from the deep UV to the far IR. Definitions: D86 width The D86 width is defined as the diameter of the circle that is centered at the centroid of the beam profile and contains 86% of the beam power. The solution for D86 is found by computing the area of increasingly larger circles around the centroid until the area contains 0.86 of the total power. Unlike the previous beam width definitions, the D86 width is not derived from marginal distributions. The percentage of 86, rather than 50, 80, or 90, is chosen because a circular Gaussian beam profile integrated down to 1/e² of its peak value contains 86% of its total power. The D86 width is often used in applications that are concerned with knowing exactly how much power is in a given area. For example, applications of high-energy laser weapons and lidars require precise knowledge of how much transmitted power actually illuminates the target. Definitions: ISO11146 beam width for elliptic beams The definition given before holds for stigmatic (circular symmetric) beams only. For astigmatic beams, however, a more rigorous definition of the beam width has to be used: $d_{\sigma x} = 2\sqrt{2}\left(\langle x^2\rangle + \langle y^2\rangle + \gamma\left[(\langle x^2\rangle - \langle y^2\rangle)^2 + 4\langle xy\rangle^2\right]^{1/2}\right)^{1/2}$ and $d_{\sigma y} = 2\sqrt{2}\left(\langle x^2\rangle + \langle y^2\rangle - \gamma\left[(\langle x^2\rangle - \langle y^2\rangle)^2 + 4\langle xy\rangle^2\right]^{1/2}\right)^{1/2}$. This definition also incorporates information about the x–y correlation $\langle xy\rangle$, but for circular symmetric beams, both definitions are the same. Some new symbols appeared within the formulas, which are the first- and second-order moments: $\langle x\rangle = \frac{1}{P}\int I(x,y)\,x\,dx\,dy$, $\langle y\rangle = \frac{1}{P}\int I(x,y)\,y\,dx\,dy$, $\langle x^2\rangle = \frac{1}{P}\int I(x,y)\,(x-\langle x\rangle)^2\,dx\,dy$, $\langle xy\rangle = \frac{1}{P}\int I(x,y)\,(x-\langle x\rangle)(y-\langle y\rangle)\,dx\,dy$, $\langle y^2\rangle = \frac{1}{P}\int I(x,y)\,(y-\langle y\rangle)^2\,dx\,dy$, the beam power $P = \int I(x,y)\,dx\,dy$, and $\gamma = \operatorname{sgn}(\langle x^2\rangle - \langle y^2\rangle) = \dfrac{\langle x^2\rangle - \langle y^2\rangle}{|\langle x^2\rangle - \langle y^2\rangle|}$. Using this general definition, the beam azimuthal angle $\phi$ can also be expressed. It is the angle between the beam directions of minimal and maximal elongations, known as principal axes, and the laboratory system, being the x and y axes of the detector, and is given by $\phi = \tfrac{1}{2}\arctan\dfrac{2\langle xy\rangle}{\langle x^2\rangle - \langle y^2\rangle}$. Measurement: International standard ISO 11146-1:2005 specifies methods for measuring beam widths (diameters), divergence angles and beam propagation ratios of laser beams (if the beam is stigmatic); for general astigmatic beams ISO 11146-2 is applicable. The D4σ beam width is the ISO standard definition, and the measurement of the M² beam quality parameter requires the measurement of the D4σ widths. The other definitions provide complementary information to the D4σ. The D4σ and knife-edge widths are sensitive to the baseline value, whereas the 1/e² and FWHM widths are not. 
The fraction of total beam power encompassed by the beam width depends on which definition is used. Measurement: The width of laser beams can be measured by capturing an image on a camera, or by using a laser beam profiler.
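The D4σ definition and the baseline-subtraction caveat described above lend themselves to a short numerical illustration. The following Python/NumPy sketch is a minimal example under stated assumptions (hypothetical function and variable names, a synthetic Gaussian test beam, and a uniform pixel pitch); it is not taken from any particular beam-profiler package:

```python
import numpy as np

def d4sigma_width(profile, dark_frame=None, pixel_pitch=1.0):
    """Estimate the D4-sigma width in the x direction of a 2-D intensity
    profile, with optional baseline (dark-frame) subtraction.

    profile     : 2-D array of pixel intensities I(y, x)
    dark_frame  : 2-D array recorded with the sensor unilluminated
    pixel_pitch : physical size of one pixel (any length unit)
    """
    I = profile.astype(float)
    if dark_frame is not None:
        # Subtract the baseline; without this the x**2 weighting inflates
        # the contribution of the sensor edges, as described above.
        I = I - dark_frame.astype(float)
        I[I < 0] = 0.0

    ny, nx = I.shape
    x = np.arange(nx) * pixel_pitch
    total = I.sum()

    # Marginal distribution along x, its centroid, and its second moment
    marginal_x = I.sum(axis=0)
    x_bar = (marginal_x * x).sum() / total
    var_x = (marginal_x * (x - x_bar) ** 2).sum() / total

    return 4.0 * np.sqrt(var_x)

# Synthetic Gaussian beam: for exp(-2 r^2 / w^2) the D4-sigma width is 2 w.
if __name__ == "__main__":
    xs = np.arange(512)
    X, Y = np.meshgrid(xs, xs)
    w = 40.0
    beam = np.exp(-2 * ((X - 256) ** 2 + (Y - 256) ** 2) / w ** 2)
    print(d4sigma_width(beam))  # ~80, i.e. 2 w, with pixel_pitch = 1
```

The synthetic test at the bottom doubles as a sanity check: for an ideal Gaussian profile the returned value should be twice the 1/e² radius, consistent with the relationships quoted in the Definitions section.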
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Octamethylene-bis(5-dimethylcarbamoxyisoquinolinium bromide)** Octamethylene-bis(5-dimethylcarbamoxyisoquinolinium bromide): Octamethylene-bis(5-dimethylcarbamoxyisoquinolinium bromide) (4-673-745-01) is an extremely potent carbamate nerve agent. It works by inhibiting acetylcholinesterase, causing acetylcholine to accumulate. Since the agent molecule is positively charged, it does not cross the blood-brain barrier very well. Toxicity: Octamethylene-bis(5-dimethylcarbamoxyisoquinolinium bromide) is an extremely toxic nerve agent that can be lethal even at extremely low doses. The LD50 values in mice and rabbits are 16 μg/kg and 6 μg/kg, respectively. Synthesis: 5-Hydroxyisoquinoline and dimethylcarbamoyl chloride are heated on a steam bath for 2 hours. The mixture is then cooled and treated with benzene. The resulting solid is then dissolved in water. Sodium hydroxide is added to make the solution basic. The solution is extracted with chloroform and then dried with magnesium sulfate. The solvent is evaporated and the solid residue is then recrystallized from petroleum ether. The resulting product, 5-dimethylcarbamoxyisoquinoline, is then mixed with 1,8-dibromooctane in acetonitrile and refluxed for 8 hours. After cooling, the precipitate is filtered and recrystallized from acetonitrile. The product is then dried in vacuo for 14 hours at room temperature, resulting in the final product.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lanthanum ytterbium oxide** Lanthanum ytterbium oxide: Lanthanum ytterbium oxide is a solid inorganic compound of lanthanum, ytterbium and oxygen with the chemical formula of LaYbO3. This compound adopts the perovskite structure. Synthesis: LaYbO3 is not a naturally occurring mineral, but it can be prepared by a solid-state reaction between La2O3 and Yb2O3 at temperatures around 1200 °C. Single crystals of LaYbO3 can also be grown from a molten hydroxide flux at 750 °C in sealed silver tubes. Thin films of LaYbO3 have also been fabricated by pulsed laser deposition. Structure: LaYbO3 and other LaREO3 oxides (where RE = Ho, Y, Er, Tm, Yb, and Lu) have an orthorhombic crystal structure with an internal symmetry described by the Pnma space group. The structure can be described by slightly distorted YbO6 octahedra tilted in the a−b+a− configuration according to Glazer's notation and antiparallel displaced La3+ ions. The rotation of the YbO6 octahedra reduces the coordination number of the La from 12 to 8. Structure: It exhibits negative thermal expansion along the a and b axes. Physical Properties: LaYbO3 exhibits a room-temperature permittivity, ɛr, of ~26, which decreases slightly to 25 at 10 K. LaYbO3 shows antiferromagnetic ordering with a weak ferromagnetism at 2.7 K. LaYbO3-based perovskites are also known to show proton conductivity at intermediate temperatures (600-800 °C).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Catch bond** Catch bond: A catch bond is a type of noncovalent bond whose dissociation lifetime increases with tensile force applied to the bond. Normally, bond lifetimes are expected to diminish with force. In the case of catch bonds, the lifetime of the bond actually increases up to a maximum before it decreases as in a normal bond. Catch bonds work in a way that is conceptually similar to that of a Chinese finger trap. While catch bonds are strengthened by an increase in force, the force increase is not necessary for the bond to work. Catch bonds were suspected for many years to play a role in the rolling of leukocytes, being strong enough to roll in the presence of high forces caused by high shear stresses, while avoiding getting stuck in capillaries where the fluid flow, and therefore shear stress, is low. The existence of catch bonds was debated for many years until strong evidence of their existence was found in bacteria. Definite proof of their existence came shortly thereafter in leukocytes. Discovery: Catch bonds were first proposed in 1988 in the Proceedings of the Royal Society by M. Dembo et al. while at Los Alamos National Laboratory. While developing a molecular model to study the critical tension required to detach a membrane bound to a surface through adhesion molecules, it was found that it is theoretically possible for bond dissociation to be increased by force, decreased by force, or independent of force. The terms "slip bond", "catch bond", and "ideal bond" were coined by Dembo to describe these three types of bond behaviors. Slip bonds represent the ordinary behavior originally modeled by G. Bell, Dembo's former postdoctoral mentor at Los Alamos National Laboratory, in 1978. Slip bonds were supported by flow chamber experiments where forces are applied on molecular bonds linking cells to the chamber floor under shear flow. By comparison, no decisive evidence of catch bonds was found until 2003. This is due to experimental conditions that were unfavorable for detecting catch bonds, as well as the counterintuitive nature of the bonds themselves. For example, most early experiments were conducted in 96-well plates, an environment that does not provide any flow. Some experiments failed to produce the shear stress that is now known to be critical to lengthen the lifetimes of catch bonds, while other experiments were conducted under flow conditions too weak or too strong for optimal shear-induced strengthening of these bonds. Finally, Marshall and coworkers found that P-selectin:PSGL-1 bonds exhibited increasing bond lifetime as step loads were applied between 0 and ~10 pN for monomeric interactions and between 1 and ~20 pN for dimeric interactions, exhibiting catch bond behavior; after reaching maximum values, which were ~0.6 and 1.2 seconds for monomeric and dimeric interactions, respectively, the bond lifetime fell rapidly at higher loads, displaying slip bond behavior ("catch-slip" bonds). These data were collected using an atomic force microscope and a flow chamber, and have subsequently been duplicated using a biomembrane force probe. These findings prompted the discoveries of other important catch bonds in the 2000s, including those between L-selectin and PSGL-1 or endoglycan, FimH and mannose, myosin and actin, platelet glycoprotein Ib and von Willebrand factor, and integrin alpha 5 beta 1 and fibronectin. Emphasizing their importance and general acceptance, in the three years following their discovery there were at least 24 articles published on catch bonds. 
Discovery: More catch bonds were discovered in the 2010s, including E-selectin with carbohydrate ligands, G-actin with G-actin or F-actin, cadherin-catenin complex with actin, vinculin with F-actin, microtubule with kinetochore particle, integrin alpha L beta 2 and intercellular adhesion molecule 1 (ICAM-1), integrin alpha 4 beta 1 with vascular adhesion molecule 1, integrin alpha M beta 2 with ICAM-1, integrin alpha V beta 3 with fibronectin, and integrin alpha IIb beta 3 with fibronectin or fibrinogen. Sivasankar and his research team have found that the mechanism behind the puzzling phenomenon is due to long-lived, force-induced hydrogen bonds. Using data from previous experiments, the team used molecular dynamics to discover that two rod-shaped cadherins in an X-dimer formed catch bonds when pulled and in the presence of calcium ions. The calcium ions keep the cadherins rigid, while pulling brings the proteins closer together, allowing for hydrogen bonds to form. The mechanism behind catch bonds helps to explain the biophysics behind cell-cell adhesion. According to the researchers, "Robust cadherin adhesion is essential for maintaining the integrity of tissue such as the skin, blood vessels, cartilage and muscle that are exposed to continuous mechanical assault." The above catch bonds are formed between adhesion receptors and ligands, and among structural molecules and motor proteins, which bear force or generate force in their physiological function. An interesting recent development is the discovery of catch bonds formed between signaling receptors and their ligands. These include bonds between T cell antigen receptors (TCR) or pre-TCR and peptide presented by major histocompatibility complex (pMHC) molecules, Fc gamma receptor and IgG Fc, and notch receptor and ligands. The presence of catch bonds in the interactions of these signaling (rather than adhesion) receptors has been suggested to be indicative of a possible role of these receptors as mechanoreceptors. Variations and related dynamic bonds: Triphasic bonds Other types of "dynamic bonds" have been defined in addition to the original types of catch bonds, slip bonds and ideal bonds classified by Dembo. Unlike slip bonds, which have been observed in the entire force range tested, catch bonds only exist within a certain force range, as any molecular bond would eventually be overpowered by a high enough force. Therefore, catch bonds are always followed by slip bonds, hence termed "catch-slip bonds". More variations have also been observed, e.g., triphasic slip-catch-slip bonds. Variations and related dynamic bonds: Flex bonds The transition between catch and slip bonds has been modeled as molecular dissociation from two bond states along two pathways. Dissociation along each pathway alone results in a slip bond, but at different rates. At low forces, dissociation occurs predominantly along the fast pathway. Increasing force tilts the multi-dimensional energy landscape to switch the dissociation from the fast pathway to the slow pathway, manifesting a catch bond. As dissociation along the slow pathway dominates, further increase in force accelerates dissociation, manifesting a slip bond. This switching behavior is also called a flex bond. Variations and related dynamic bonds: Dynamic catch The above bonds involve bimolecular interactions, which arguably represent the simplest types. A new type of catch bond emerges when trimolecular interactions are involved. 
In such cases, one molecule can interact with the two counter-molecules using two binding sites, either separately, i.e. one at a time in the absence of the other to form bimolecular bonds, or concurrently to form a trimolecular bond when both counter-molecules are present. An interesting finding is that even when the two bimolecular interactions behave as slip bonds, the trimolecular interaction can behave as a catch bond. This new type of catch bond, which requires concurrent and cooperative binding, is termed dynamic catch. Variations and related dynamic bonds: Cyclic mechanical reinforcement Most catch bonds were demonstrated using force-clamp force spectroscopy, where upon initial ramping, a constant force is loaded on the bond to observe how long the bond lasts, i.e., measuring the bond lifetime at a constant force. Catch bonds are revealed when the mean bond lifetime (reciprocally related to the rate of bond dissociation) increases with the clamped force. Zhu and colleagues demonstrated that the bond lifetime measured during the force-clamp phase could be substantially prolonged if the initial ramping included two forms of pre-conditioning: 1) loading the bond by ramping the force to a high level (peak force) before clamping the force at a low level for lifetime measurement, and 2) loading and unloading the bond repeatedly by multiple force cycles before clamping the force at a peak value for lifetime measurement. This new bond type, termed cyclic mechanical reinforcement (CMR), is distinct from a catch bond, but it nevertheless resembles a catch bond in that the bond lifetime increases with the peak force and with the number of cycles used to pre-condition the bond. CMR has been observed for interactions between integrin alpha 5 beta 1 and fibronectin and between G-actin and G-actin or F-actin. Variations and related dynamic bonds: Force history dependence The CMR phenomenon indicates that how long a bond can sustain force at a given level can depend on the history of force application prior to arriving at that force level. In other words, the "rate constant" of molecular dissociation at a constant force depends not only on the value of force at the current time but also on the prior force history the bond has experienced in the past. This has indeed been observed for interactions of P-selectin with PSGL-1 or anti-P-selectin antibody, L-selectin with PSGL-1, myosin with actin, integrin alpha V beta 3 with fibrinogen, and TCR with pMHC. Various catch bonds of specific molecular interactions: Selectin bond Background Leukocytes, as well as other types of white blood cells, normally form weak and short-lived bonds with other cells via selectin. Covering the outside of the leukocyte membrane are microvilli, which carry various types of adhesive molecules, including P-selectin glycoprotein ligand-1 (PSGL-1), a glycoprotein that is normally decorated with sulfated sialyl-Lewis x. The sulfated-sialyl-Lewis-x-containing PSGL-1 molecule has the ability to bind to any type of selectin. Leukocytes also express L-selectin, which binds to other cells or other leukocytes that carry PSGL-1 molecules. Various catch bonds of specific molecular interactions: An important example of catch bonds is their role in leukocyte extravasation. During this process, leukocytes move through the circulatory system to sites of infection, and in doing so they 'roll' and bind to selectin molecules on the vessel wall. 
While leukocytes are able to float freely in the blood under normal circumstances, shear stress induced by inflammation causes them to attach to the endothelial vessel wall and begin rolling rather than floating downstream. This “shear-threshold phenomenon” was initially characterized in 1996 by Finger et al., who showed that leukocyte binding and rolling through L-selectin is only maintained when a critical shear threshold is applied to the system. Multiple sources of evidence have shown that catch bonds are responsible for the tether and roll mechanism that allows this critical process to occur. Catch bonds allow increasing force to convert short-lived tethers into stronger, longer-lived binding interactions, thus decreasing the rolling velocity and increasing the regularity of rolling steps. However, this mechanism only works at an optimal force. As shear force increases past this force, bonds revert to slip bonds, creating an increase in velocity and irregularity of rolling. Various catch bonds of specific molecular interactions: Leukocyte adhesion mediated by shear stress In a blood vessel, at very low shear stresses of ~0.3 dynes per square centimeter, leukocytes do not adhere to the blood vessel endothelial cells. Cells move along the blood vessel at a rate proportional to the blood flow rate. Once the shear stress passes that threshold value, leukocytes start to accumulate via selectin binding. At low shear stresses above the threshold, about 0.3 to 5 dynes per square centimeter, leukocytes alternate between binding and non-binding. Because one leukocyte has many selectins around its surface, these selectin binding/unbinding events cause a rolling motion along the blood vessel. As the shear stress continues to increase, the selectin bonds become stronger, causing the rolling velocity to be slower. This reduction in leukocyte rolling velocity allows cells to stop and perform firm binding via integrin binding. Selectin binding does not exhibit a "true" catch bond property. Experiments show that at very high shear stress (passing a second threshold), the selectin binding transitions from a catch bond to a slip bond, in which the rolling velocity increases as the shear force increases. Various catch bonds of specific molecular interactions: Leukocyte rolling mediated by catch-slip transition Researchers have hypothesized that the ability of leukocytes to maintain attachment and rolling on the blood vessel wall can be explained by a combination of many factors, including cell flattening to maintain a larger binding surface area and reduce hydrodynamic drag, as well as tethers holding the rear of the rolling cell to the endothelium breaking and slinging to the front of the rolling cell to reattach to the endothelial wall. These hypotheses work well with Marshall's 2003 findings that selectin bonds go through a catch-slip transition in which initial increases in shear force strengthen the bond, but with enough applied force bond lifetimes begin to decay exponentially. Therefore, the weak binding of a sling at the leading edge of a rolling leukocyte would initially be strengthened as the cell rolls farther and the tension on the bond increases, preventing the cell from dissociating from the endothelial wall and floating freely in the bloodstream despite high shear forces. 
However, at the trailing edge of the cell, tension becomes high enough to transition the bond from catch to slip, and the bonds tethering the trailing edge eventually break, allowing the cell to roll further instead of remaining stationary. Various catch bonds of specific molecular interactions: Proposed mechanisms of action Allosteric model Though catch bonds are now widely recognized, their mechanism of action is still under dispute. Various catch bonds of specific molecular interactions: Two leading hypotheses dominate the discussion. The first hypothesis, the allosteric model, stems from evidence that x-ray crystallography of selectin proteins shows two conformational states: a bent conformation in the absence of ligand, and an extended conformation in the presence of the ligand. The main domains involved in these states are a lectin domain, which contains the ligand binding site, and an EGF domain, which can shift between bent and extended conformations. The allosteric model claims that tension on the EGF domain favors the extended conformation, and extension of this domain causes a conformational shift in the lectin domain, resulting in greater binding affinity for the ligand. As a result of this conformational change, the ligand is effectively locked in place despite tension exerted on the bond. Various catch bonds of specific molecular interactions: Sliding-rebinding model The sliding-rebinding model differs from the allosteric model in that the allosteric model posits that only one binding site exists and can be altered, whereas the sliding-rebinding model states that multiple binding sites exist and are not changed by EGF extension. Rather, in the bent conformation, which is favored at low applied forces, the applied force is perpendicular to the line of possible binding sites. Thus, when the association between the ligand and the lectin domain is interrupted, the bond quickly dissociates. At larger applied forces, however, the protein is extended and the line of possible binding sites is aligned with the applied force, allowing the ligand to quickly re-associate with a new binding site after the initial interaction is disrupted. With multiple binding sites, and even the ability to re-associate with the original binding site, the rate of ligand dissociation would be decreased, as is typical of catch bonds. Various catch bonds of specific molecular interactions: Mechanism of a single selectin binding A single PSGL-1 and selectin binding interaction is similar to conventional protein binding when the force is kept constant, with a dissociation constant. As the force exerted starts to increase, the dissociation constant decreases, causing binding to become stronger. As the force reaches a threshold level of 11 pN, the dissociation constant starts to increase again, weakening the bond and causing it to exhibit slip bond behavior. Various catch bonds of specific molecular interactions: FimH bond Background Catch bonds also play a significant role in bacterial adhesion, most notably in Escherichia coli. E. coli and other bacteria residing in the intestine must be able to adhere to intestinal walls or risk being eliminated from the body through defecation. This is possible due to the bacterial protein FimH, which mediates high adhesion in response to high flow. The lectin domain is what gives FimH binding its catch bond property when binding to mannose residues on other cells. 
Experiments have shown that when force is loaded rapidly, bonds are able to survive high forces, thus pointing to catch bond behavior. Catch bonds are responsible for the failure of E. coli in the urinary tract to be eliminated during urination, thus leading to a urinary tract infection. This knowledge is important not only for understanding bacteria, but also for learning how anti-adhesive technologies can be created. Various catch bonds of specific molecular interactions: Bacterial adhesion mediated by shear stress Similar to selectin binding, FimH binding also has a threshold, and binding to host cells only starts above this threshold. This shear stress threshold is about 1 dyne per square centimeter, slightly larger than that of selectin binding. Above this threshold, FimH also alternates between binding to, pausing on, and unbinding from the mannose residues. However, different from selectin binding, FimH binding to mannose-BSA can have either very long or very short pauses. This causes FimH binding to exhibit "stick-and-roll" adhesion, rather than the rolling adhesion seen with selectin binding. Unlike selectin binding, which requires integrins to help with firm adhesion, FimH binding can become stationary, and this process is reversible. All of this is mediated by the shear stress level: at shear stresses higher than 20 dynes per square centimeter, FimH binding is stationary. At shear stresses higher than 100 dynes per square centimeter, slow rolling is observed.
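The two-pathway ("flex bond") picture described in the Variations section lends itself to a small numerical illustration. The sketch below is a toy model with made-up rate constants and barrier distances, not values fitted to any real receptor-ligand pair; it only shows how the sum of a force-suppressed pathway and a force-accelerated Bell-type pathway yields a lifetime that first rises and then falls with force, i.e., catch-slip behavior:

```python
import numpy as np

# Bell-type dissociation rates along two competing pathways
# (illustrative parameters only, not measurements):
#   a fast pathway that force suppresses (negative barrier distance)
#   a slow pathway that force accelerates (positive barrier distance)
kT = 4.1                       # thermal energy at room temperature, pN*nm
k_fast0, x_fast = 5.0, -0.8    # 1/s, nm
k_slow0, x_slow = 0.2,  0.4    # 1/s, nm

def lifetime(force_pN):
    """Mean bond lifetime (s): inverse of the total dissociation rate."""
    k_off = (k_fast0 * np.exp(force_pN * x_fast / kT)
             + k_slow0 * np.exp(force_pN * x_slow / kT))
    return 1.0 / k_off

for f in np.linspace(0, 40, 9):
    print(f"{f:5.1f} pN  ->  lifetime {lifetime(f):6.3f} s")
# The printed lifetime increases with force at low loads (catch regime)
# and then decreases at higher loads (slip regime).
```

With these placeholder numbers the lifetime peaks at intermediate force, which is the qualitative signature reported for P-selectin:PSGL-1 and the other catch-slip bonds discussed above.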
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gallium maltolate** Gallium maltolate: Gallium maltolate is a coordination complex consisting of a trivalent gallium cation coordinated to three maltolate ligands. The compound is a potential therapeutic agent for cancer, infectious disease, and inflammatory disease. A cosmetic skin cream containing gallium maltolate is marketed under the name Gallixa. It is a colorless solid with significant solubility in both water and lipids (octanol-water partition coefficient = 0.41). Mechanism of action: Gallium maltolate delivers gallium with higher oral bioavailability than that of gallium salts such as gallium nitrate and gallium trichloride. In vitro studies have found gallium to be antiproliferative due primarily to its ability to mimic ferric iron (Fe3+). Ferric iron is essential for DNA synthesis, as it is present in the active site of the enzyme ribonucleotide reductase, which catalyzes the conversion of ribonucleotides to the deoxyribonucleotides required for DNA. Gallium is taken up by rapidly proliferating cells, but it is not functional for DNA synthesis, so the cells cannot reproduce and they ultimately die by apoptosis. Normally reproducing cells take up little gallium (as is known from gallium scans), and gallium is not incorporated into hemoglobin, accounting for the relatively low toxicity of gallium. Research: Gallium (III) ion shows anti-inflammatory activity in animal models of inflammatory disease. Orally administered gallium maltolate has demonstrated efficacy against two types of induced inflammatory arthritis in rats. Experimental evidence suggests that the anti-inflammatory activity of gallium may be due, at least in part, to down-regulation of pro-inflammatory T-cells and inhibition of inflammatory cytokine secretion by macrophages. Because many iron compounds are pro-inflammatory, the ability of gallium to act as a non-functional iron mimic may contribute to its anti-inflammatory activity. Gallium maltolate has also been proposed as a treatment for primary liver cancer (hepatocellular carcinoma; HCC). In vitro experiments demonstrated efficacy against HCC cell lines, and encouraging clinical results have been reported. Gallium compounds are active against infection-related biofilms, particularly those caused by Pseudomonas aeruginosa. In related research, locally administered gallium maltolate has shown efficacy against P. aeruginosa in a mouse burn/infection model. The potential of this approach may be somewhat limited by the relatively rapid appearance of gallium-resistant isolates. Oral gallium maltolate has been investigated as a treatment for Rhodococcus equi foal pneumonia, a common and often fatal disease of newborn horses. R. equi can also infect humans who have AIDS or who are otherwise immunocompromised. Topically applied gallium maltolate has been studied for use in neuropathic pain (severe postherpetic neuralgia and trigeminal neuralgia). It has been hypothesized that any effect on pain may be related to gallium's anti-inflammatory mechanisms, and possibly to its interactions with certain matrix metalloproteinases and substance P, whose activities are zinc-mediated and which have been implicated in the etiology of pain.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Network booting** Network booting: Network booting, shortened to netboot, is the process of booting a computer from a network rather than a local drive. This method of booting can be used by routers, diskless workstations and centrally managed computers (thin clients) such as public computers at libraries and schools. Network booting can be used to centralize management of disk storage, which supporters claim can result in reduced capital and maintenance costs. It can also be used in cluster computing, in which nodes may not have local disks. In the late 1980s/early 1990s, network boot was used to save the expense of a disk drive, because a decently sized hard disk would still cost thousands of dollars, often equaling the price of the CPU. Hardware support: Contemporary desktop personal computers generally provide an option to boot from the network in their BIOS/UEFI via the Preboot Execution Environment (PXE). Post-1998 PowerPC (G3 – G5) Mac systems can also boot from their New World ROM firmware to a network disk via NetBoot. Old personal computers without network boot firmware support can utilize a floppy disk or flash drive containing software to boot from the network. Process: The initial software to be run is loaded from a server on the network; for IP networks this is usually done using the Trivial File Transfer Protocol (TFTP). The server from which to load the initial software is usually found by broadcasting a Bootstrap Protocol or Dynamic Host Configuration Protocol (DHCP) request. Typically, this initial software is not a full image of the operating system to be loaded, but a small network boot manager program such as PXELINUX, which can deploy a boot option menu and then load the full image by invoking the corresponding second-stage bootloader. Installations: Netbooting is also used for unattended operating system installations. In this case, a network-booted helper operating system is used as a platform to execute the script-driven, unattended installation of the intended operating system on the target machine. Implementations of this for Mac OS X and Windows exist as NetInstall and Windows Deployment Services, respectively. Legacy: Before IP became the primary Layer 3 protocol, Novell's NetWare Core Protocol (NCP) and IBM's Remote Initial Program Load (RIPL) were widely used for network booting. Their client implementations also fit into smaller ROM than PXE. Technically, network booting can be implemented over any file transfer or resource-sharing protocol; for example, NFS is preferred by BSD variants.
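To make the TFTP step in the Process section concrete, here is a minimal Python sketch that builds a TFTP read request (RRQ) as defined in RFC 1350 and sends it to a boot server. The server address and the requested file name ("pxelinux.0") are placeholder assumptions; a real PXE client learns both from the DHCP/BOOTP reply:

```python
import socket

def tftp_read_request(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP RRQ packet (RFC 1350): opcode 1, filename, 0, mode, 0."""
    return (b"\x00\x01"
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# Placeholder address; in a real boot the DHCP reply supplies the TFTP server.
SERVER = ("192.0.2.10", 69)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(tftp_read_request("pxelinux.0"), SERVER)

# A real client would then receive DATA packets (opcode 3) of up to 512 bytes
# from an ephemeral server port, ACK each one, and reassemble the boot program.
try:
    data, addr = sock.recvfrom(1024)
    print("received", len(data), "bytes from", addr)
except socket.timeout:
    print("no reply (placeholder server address)")
```

This is only the first exchange of the protocol; the small size of TFTP is precisely why it fits into boot ROMs and why PXE uses it to fetch the network boot manager before a second-stage bootloader takes over.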
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Adobe Pixel Bender** Adobe Pixel Bender: Adobe Pixel Bender, previously codenamed Hydra, is a programming language created by Adobe Systems for the description of image processing algorithms. The syntax is based on GLSL; a Pixel Bender program is analogous to an OpenGL fragment shader and is intended to be a loosely typed version of C++. Adobe Systems' Adobe Pixel Bender Toolkit is the IDE for scripting with Pixel Bender. Pixel Bender programs are intended to be used in a number of Adobe products, and were supported by After Effects (through CS5) and Flash Player. The Pixel Bender Toolkit was bundled with Adobe's Creative Suite, and allowed programs to be created and tested. It is available as a free standalone from Adobe's website. In addition to its primary purpose of image processing, Pixel Bender can also be used for general mathematical operations which would benefit from the hardware acceleration that it provides. An example of this is audio processing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Homework First** Homework First: The Homework First is a combination lock parental control device for the Nintendo Entertainment System made by SafeCare Products, Inc. of Dundee, Illinois and Master Lock. The lock features a "Self-Setting" combination and attaches to the open bay of a front-loading NES-001 system via a screw hole below the cartridge slot, which enables the lock to grab the console like a vise to prevent both the insertion of cartridges and the removal of the device. Around 25,000 units were claimed to have been sold. Reception: ACE magazine panned the device on a conceptual level during their 1989 CES coverage. Jeuxvideo.com cited the device as one of the first video game parental controls.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NUBP2** NUBP2: Nucleotide-binding protein 2 (NBP 2), also known as cytosolic Fe-S cluster assembly factor NUBP2, is a protein that in humans is encoded by the NUBP2 gene. NUBP2 is a member of the NUBP/MRP subfamily of ATP-binding proteins. Eukaryotes have two such proteins, NUBP1 and NUBP2. These nucleotide-binding proteins (NUBP/MRP; MRP, multidrug resistance-associated protein) are required in mammalian cells for the maturation of cytosolic iron-sulfur (Fe/S) proteins, and NUBP1 is involved in the formation of extramitochondrial Fe/S proteins. They are soluble cytosolic P-loop NTPases closely similar to the yeast proteins Cfd1p and the NTPase Nbp35p, which were identified as components of the cytosolic and nuclear Fe/S protein biogenesis machinery in yeast, in which Nar1 is also required for assembly. The family is homologous to the cell division inhibitor MinD of E. coli, a relative of the ParA family. Morphology: Related MinD/ParA-type P-loop NTPases occur in contexts as diverse as bacterial plasmids such as F (the classical Escherichia coli sex factor) and the vegetative and gametic flagella of the unicellular green alga C. reinhardtii, where homologues of NUBP1 are found and are required for cytosolic iron-sulfur protein assembly; other relatives include the MRP (Multiple Resistance and pH adaptation) and MRP/NBP35-like P-loop NTPases and the archaeal cell division ATPase MinD (minD_arch). The NBP35 gene is conserved in Archaea, Bacteria, Metazoa, Fungi and other eukaryotes, with considerable divergence from the yeast Cfd1-Nbp35 Fe-S proteins to man. Cfd1-Nbp35 acts as a scaffold complex that forms large molecular assemblies binding Fe(III) and 4Fe-4S clusters and is required for the maturation of physiologically relevant Fe/S proteins, including iron regulatory protein 1 (IRP1), which is regulated through this pathway; defects in the machinery are associated with deficiencies and increased mutation rates. A plant P-loop NTPase with sequence similarity to Nbp35 has also been characterized among the homologues of NUBP1. Interactions: NUBP2 has been shown to interact with: ACO1, iron-responsive element-binding protein 1 (IRE-BP 1) (iron regulatory protein 1, IRP1); MAPK8IP3, C-jun-amino-terminal kinase-interacting protein 3 (JNK-interacting protein 3, JIP-3); IGFALS, insulin-like growth factor-binding protein complex acid labile chain precursor (ALS); KIF11, kinesin-like protein KIF11 (kinesin-related motor protein Eg5); SEPP1, selenoprotein P precursor (SeP); and CA1, carbonic anhydrase 1 (EC 4.2.1.1) (carbonic anhydrase I, carbonate dehydratase I, CA-I).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CLIMAT** CLIMAT: CLIMAT is a code for reporting monthly climatological data assembled at land-based meteorological surface observation sites to data centres. CLIMAT-coded messages contain information on several meteorological variables that are important for monitoring the characteristics, changes, and variability of climate. Usually these messages are sent and exchanged via the Global Telecommunication System (GTS) of the World Meteorological Organisation (WMO). Modifications of the CLIMAT code are the CLIMAT SHIP and CLIMAT TEMP / CLIMAT TEMP SHIP codes, which serve to report monthly climatological data assembled at ocean-based meteorological surface observation sites and at land-/ocean-based meteorological upper-air observation sites, respectively. The monthly values included are usually obtained by averaging observational values of one or several daily observations over the respective month. Contents of CLIMAT (TEMP) (SHIP) messages: CLIMAT messages contain comprehensive information on a variety of climate-relevant meteorological parameters such as monthly mean temperature, mean daily maximum and minimum temperatures of the month, monthly mean pressure, monthly mean vapour pressure, total precipitation for the month and total sunshine for the month. Information on so-called normal values of these parameters, usually averaged over a period of 30 years for a specific month, can also be transmitted with CLIMAT messages. Data on extreme values of certain parameters and days of a month with certain parameters exceeding defined thresholds can also be included, as well as information on the number of days of a month where data are missing for a certain parameter. CLIMAT SHIP messages contain information on fewer variables (e.g., total sunshine for the month and extreme values are not included). CLIMAT TEMP (SHIP) messages contain information on monthly mean temperature, monthly mean geopotential, monthly mean dew-point depression and wind characteristics at specific pressure surfaces. Characteristics of the CLIMAT code: The CLIMAT code has a fixed but logical syntax that needs to be followed strictly so that a computing device processing the code can assign the contained information correctly. A CLIMAT-coded message can contain information from more than one synoptic station, and the CLIMAT-coded material for each station is called a “CLIMAT report”. A CLIMAT report is basically structured into five so-called sections (sections 0 to 4) which contain different types of information. If a CLIMAT message is transmitted via the Global Telecommunication System, the message is called a “CLIMAT bulletin”, as some extra coding may be added. Future developments: Due to the WMO-led development of the new digital BUFR and CREX coding formats and their implementation in meteorological reporting, CLIMAT coding will continually be transformed into these new formats or even be replaced in the future. Notwithstanding, CLIMAT-based reporting will still play an essential role in obtaining climate information in the upcoming decades, as many national meteorological services will not change to the BUFR format rapidly and climate-relevant variables should be monitored uninterruptedly to obtain useful records. The CLIMAT SHIP code is only very seldom used since different, more specific codes for reporting data from fixed ocean locations, such as buoys, exist. 
Future developments: The monthly climatological upper-air data included in CLIMAT TEMP (SHIP) can basically also be derived from daily reports, due to improvements in the collection and exchange of the daily TEMP messages and improved real-time quality control, and therefore discontinuing these messages is currently being discussed. Applications: The data of CLIMAT reports are broadly used in meteorological and climatological applications, such as the generation of time series, climate monitoring and climate modelling. Increasing the quality and quantity of CLIMAT reports sent from meteorological observation sites improves the generation of these products. Climate monitoring products generated from CLIMAT messages include, for example, deviations of monthly air temperature from the 1961-1990 reference period. Amelioration of messages: WMO and the Global Climate Observing System (GCOS) disseminate information on CLIMAT reporting via handbooks/guides and the Manual on Codes (WMO No. 306), e.g. via the internet. To simplify the forming of CLIMAT reports, the WMO World Climate Programme and GCOS have set up software called “CLIREP”, which provides a user interface where data can be inserted and are processed automatically to form a correct CLIMAT message. For the compilation of CLIMAT-coded messages a simple text editor is sufficient, as messages can be sent as “.txt” files. Therefore, it is also possible to send CLIMAT messages via email to the GCOS Surface Network (GSN) monitoring centres that monitor and supervise the worldwide CLIMAT reporting.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nitrous oxide engine** Nitrous oxide engine: A Nitrous Oxide Engine, or Nitrous Oxide System, commonly referred to as NOS, is an internal combustion engine in which oxygen for burning the fuel comes from the decomposition of nitrous oxide, N2O, as well as air. The system increases the engine's power output by allowing fuel to be burned at a higher-than-normal rate, because of the higher partial pressure of oxygen injected with the fuel mixture. Nitrous oxide is not flammable at room temperature; it only becomes flammable under extensive pressure. Nitrous injection systems may be "dry", where the nitrous oxide is injected separately from fuel, or "wet", in which additional fuel is carried into the engine along with the nitrous. NOS may not be permitted for street or highway use, depending on local regulations. N2O use is permitted in certain classes of auto racing. Reliable operation of an engine with nitrous injection requires careful attention to the strength of engine components and to the accuracy of the mixing systems; otherwise destructive detonations or exceeding engineered component maximums may occur. Nitrous oxide systems were applied as early as World War II for certain aircraft engines. Terminology: In the context of racing, nitrous oxide is often termed nitrous or NOS. The term NOS is derived from the initials of the company name Nitrous Oxide Systems, Inc. (now a brand of Holley Performance Products), one of the pioneering companies in the development of nitrous oxide injection systems for automotive performance use, and has become a genericized trademark. Nitro is also sometimes used, though incorrectly, as it refers more to nitromethane engines. Mechanism: When a mole of nitrous oxide decomposes, it releases half a mole of O2 molecules (oxygen gas), and one mole of N2 molecules (nitrogen gas). This decomposition allows an oxygen concentration of 36.36% to be reached. Nitrogen gas is non-combustible and does not support combustion. Air, which contains only 21% oxygen (the rest being nitrogen and other equally non-combustible and non-combustion-supporting gases), permits a 12-percent-lower maximum-oxygen level than that of nitrous oxide. This oxygen supports combustion; it combines with fuels such as gasoline, alcohol, diesel fuel, propane, or compressed natural gas (CNG) to produce carbon dioxide and water vapor, along with heat, which causes the former two products of combustion to expand and exert pressure on pistons, driving the engine. Mechanism: Nitrous oxide is stored as a liquid in tanks, but is a gas under atmospheric conditions. When injected as a liquid into an inlet manifold, the vaporization and expansion cause a reduction in air/fuel charge temperature with an associated increase in density, thereby increasing the cylinder's volumetric efficiency. Mechanism: As the decomposition of N2O into oxygen and nitrogen gas is exothermic and thus contributes to a higher temperature in the combustion engine, the decomposition increases engine efficiency and performance, which is directly related to the difference in temperature between the unburned fuel mixture and the hot combustion gases produced in the cylinders. All systems are based on a single-stage kit, but these kits can be used in multiples (called two-, three-, or even four-stage kits). The most advanced systems are controlled by an electronic progressive delivery unit that allows a single kit to perform better than multiple kits can. 
Most Pro Mod and some Pro Street drag race cars use three stages for additional power, but more and more are switching to pulsed progressive technology. Progressive systems have the advantage of utilizing a larger amount of nitrous (and fuel) to produce even greater power increases as the additional power and torque are gradually introduced (as opposed to being applied to the engine and transmission immediately), reducing the risk of mechanical shock and, consequently, damage. Identification: Cars with nitrous-equipped engines may be identified by the "purge" of the delivery system that most drivers perform prior to reaching the starting line. A separate electrically operated valve is used to release air and gaseous nitrous oxide trapped in the delivery system. This brings liquid nitrous oxide all the way up through the plumbing from the storage tank to the solenoid valve or valves that will release it into the engine's intake tract. When the purge system is activated, one or more plumes of nitrous oxide will be visible for a moment as the liquid flashes to vapor as it is released. The purpose of a nitrous purge is to ensure that the correct amount of nitrous oxide is delivered the moment the system is activated as nitrous and fuel jets are sized to produce correct air / fuel ratios, and as liquid nitrous is denser than gaseous nitrous, any nitrous vapor in the lines will cause the car to "bog" for an instant (as the ratio of nitrous / fuel will be too rich reducing engine power) until liquid nitrous oxide reaches the injection nozzle. Types of nitrous systems: There are two categories of nitrous systems: dry & wet with four main delivery methods of nitrous systems: single nozzle, direct port, plate, and bar used to discharge nitrous into the plenums of the intake manifold. Nearly all nitrous systems use specific orifice inserts, called jets, along with pressure calculations to meter the nitrous, or nitrous and fuel in wet applications, delivered to create a proper air-fuel ratio (AFR) for the additional horsepower desired. Types of nitrous systems: Dry In a dry nitrous system the nitrous delivery method provides nitrous only. The extra fuel required is introduced through the fuel injectors, keeping the manifold dry of fuel. This property is what gives the dry system its name. Fuel flow can be increased either by increasing the pressure or by increasing the time the fuel injectors remain open. Dry nitrous systems typically rely on a single nozzle delivery method, but all of the four main delivery methods can be used in dry applications. Dry systems are not typically used in carbureted applications due to the nature of a carburetor's function and inability to provide large amounts of on-demand fuel. Dry nitrous systems on fuel injected engines will use increased fuel pressure or injector pulsewidth upon system activation as a means of providing the correct ratio of fuel for the nitrous. Types of nitrous systems: Wet In a wet nitrous system the nitrous delivery method provides nitrous and fuel together resulting in the intake manifold being "wet" with fuel, giving the category its name. Wet nitrous systems can be used in all four main delivery methods. Types of nitrous systems: In wet systems on fuel/direct injected engines care must be taken to avoid backfires caused by fuel pooling in the intake tract or manifold and/or uneven distribution of the nitrous/fuel mixture. Port and direct fuel injection engines have intake systems engineered for the delivery of air only, not air and fuel. 
Since most fuels are heavier than air and are not in a gaseous state when used with nitrous systems, the fuel does not behave in the same way as air alone; this raises the possibility of the fuel being unevenly distributed to the combustion chambers of the engine, causing lean conditions/detonation, and/or pooling in parts of the intake tract/manifold, presenting a dangerous situation in which the fuel may be ignited uncontrollably, causing catastrophic failure of components. Carbureted and single point/throttle body injected engines use a wet manifold design that is engineered to evenly distribute fuel and air mixtures to all combustion chambers, making this mostly a non-issue for these applications. Types of nitrous systems: Single nozzle A single nozzle nitrous system introduces the nitrous or fuel/nitrous mixture via a single injection point. The nozzle is typically placed in the intake pipe/tract after the air filter, prior to the intake manifold and/or throttle body in fuel injected applications, and after the throttle body in carbureted applications. In wet systems, the high pressure of the injected nitrous causes aerosolization of the fuel injected in tandem via the nozzle, allowing for more thorough and even distribution of the nitrous/fuel mixture. Types of nitrous systems: Direct port A direct port nitrous system introduces the nitrous or fuel/nitrous mixture as close to the intake ports of the engine as is feasible via individual nozzles directly in each intake runner. Direct port nitrous systems will use the same or similar nozzles as those in single nozzle systems, just in numbers equal to or in multiples of the number of intake ports of the engine. Because direct port systems do not have to rely on intake tract/manifold design to evenly distribute the nitrous or fuel/nitrous mixture, they are inherently more precise than other delivery methods. The greater number of nozzles also allows a greater total amount of nitrous to be delivered than other systems. Multiple "stages" of nitrous can be accomplished by using multiple sets of nozzles at each intake port to further increase the power potential. Direct port nitrous systems are the most common delivery method in racing applications. Types of nitrous systems: Plate A plate nitrous system uses a spacer placed somewhere between the throttle body and intake ports with holes drilled along its interior surfaces, or in a tube that is suspended from the plate, for the nitrous or fuel/nitrous mixture to be distributed through. Plate systems provide a drill-less solution compared to other delivery methods, as the plates are generally application-specific and fit between existing components such as the throttle body-to-intake-manifold or upper-intake-manifold-to-lower-intake-manifold junctions. Requiring little more than longer fasteners, plate systems are the most easily reversed systems as they need little to no permanent changes to the intake tract. Depending on the application, plate systems can provide precise nitrous or fuel/nitrous mixture distribution similar to that of direct port systems. Types of nitrous systems: Bar A bar nitrous system utilizes a hollow tube, with a number of holes drilled along its length, placed inside the intake plenum to deliver nitrous. Bar nitrous delivery methods are almost exclusively dry nitrous systems due to the non-optimal fuel distribution possibilities of the bar. 
Bar nitrous systems are popular with racers who prefer their nitrous use to be hidden, as the nitrous distribution method is not immediately apparent and most associated components of the nitrous system can be obscured from view. Types of nitrous systems: Propane or CNG Nitrous systems can be used with a gaseous fuel such as propane or compressed natural gas. This has the advantage of being technically a dry system, as the fuel is not in a liquid state when introduced to the intake tract. Reliability concerns: The use of nitrous oxide carries with it the concerns about engine reliability and longevity that are present with all power adders. Due to the greatly increased cylinder pressures, the engine as a whole is placed under greater stress, primarily those components associated with the engine's rotating assembly. An engine with components unable to cope with the increased stress imposed by the use of nitrous systems can experience major engine damage, such as cracked or destroyed pistons, connecting rods, crankshafts, and/or blocks. Proper strengthening of engine components in addition to accurate and adequate fuel delivery are key to nitrous system use without catastrophic failure. Reliability concerns: In addition, nitrous oxide should not be used in vehicles with an automatic transmission, as the greatly increased engine power and torque may cause stress damage to the torque converter and the transmission itself. Street legality: Nitrous oxide injection systems for automobiles are illegal for road use in some countries. For example, in New South Wales, Australia, the Roads & Traffic Authority Code of Practice for Light Vehicle Modifications (in use since 1994) states in clause 3.1.5.7.3 that "The use or fitment of nitrous oxide injection systems is not permitted." In Great Britain, there are no restrictions on the use of N2O, but the modification must be declared to the insurance company, which is likely to result in a higher premium for motor vehicle insurance or refusal to insure. Street legality: In Germany, despite its strict TÜV rules, a nitrous system can be installed and used legally in a street-driven car. The requirements for the technical standard of the system are similar to those of aftermarket natural gas conversions. Racing rules: Several sanctioning bodies in drag racing allow or disallow the use of nitrous oxide in certain classes or have nitrous oxide-specific classes. Nitrous is allowed in Formula Drift competition. History: A similar basic technique was used during World War II by Luftwaffe aircraft with the GM-1 system to maintain the power output of aircraft engines when at high altitude, where the air density is lower. Accordingly, it was only used by specialized planes like high-altitude reconnaissance aircraft, high-speed bombers and high-altitude interceptors. It was sometimes used with the Luftwaffe's form of methanol-water injection, designated MW 50 (both meant as Notleistung short-term power boosting measures), to produce substantial increases in performance for fighter aircraft over short periods of time, as with their combined use on the Focke-Wulf Ta 152H fighter prototypes. British World War II usage of nitrous oxide injector systems consisted of modifications of Merlin engines carried out by the Heston Aircraft Company for use in certain night fighter variants of the de Havilland Mosquito and PR versions of the Supermarine Spitfire.
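The oxygen figure quoted in the Mechanism section can be checked with a one-line calculation. The sketch below assumes that the quoted 36.36% refers to the mass fraction of oxygen released when nitrous oxide decomposes (16 g of O2 per 44 g of N2O), while the 21% figure for air is a volume (mole) fraction:

```python
# Molar masses in g/mol
M_N2O = 2 * 14.007 + 15.999        # ~44.01 g per mole of nitrous oxide
M_O2_released = 0.5 * 2 * 15.999   # half a mole of O2 per mole of N2O -> ~16.00 g

oxygen_mass_fraction = M_O2_released / M_N2O
print(f"O2 mass fraction from N2O decomposition: {oxygen_mass_fraction:.2%}")  # ~36.4%
```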
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kunsthalle** Kunsthalle: A kunsthalle is a facility that mounts temporary art exhibitions, similar to an art gallery. It is distinct from an art museum by not having a permanent collection. In the German-speaking regions of Europe, Kunsthallen are often operated by a non-profit Kunstverein ("art association" or "art society"), and have associated artists, symposia, studios and workshops. They are sometimes called a Kunsthaus. Origin, spelling and variants: The term kunsthalle is a loanword from the German Kunsthalle, a compound noun formed by combining the two nouns Kunst (art) and Halle (hall). Origin, spelling and variants: Like all nouns in German, the word is written with an initial capital letter. In English, it should be written with a lower-case letter (kunsthalle) unless it is the first word of a sentence or part of a title. The plural form Kunsthallen is usually rendered as kunsthalles.The term is translated as kunsthal in Danish, kunsthal in Dutch, kunstihoone in Estonian, taidehalli in Finnish, kunsthall in Norwegian and konsthall in Swedish. List of kunsthalles: This list contains the exhibition venues, museums, and art societies that can be considered as kunsthalles. List of kunsthalles: Austria Kunsthaus Graz, Graz Kunsthalle Krems (foundation) Kunsthalle Wien; see also Museumsquartier, Vienna (municipal) KunstHausWien, Vienna Belgium Kunsthal Gent, Ghent Kunsthalle Lophem, Loppem Kunsthal Extra City, Antwerp Czech Republic Kunsthalle Praha, Prague Denmark Kunsthal Aarhus, Aarhus Kunsthal Charlottenborg, Copenhagen Nikolaj Kunsthal, (previously known as Kunsthallen Nikolaj), Copenhagen Estonia Tallinn Art Hall, Tallinn (Tallinna Kunstihoone) Finland Kunsthalle Helsinki, Helsinki (Helsingin Taidehalli) Kunsthalle Kohta, Helsinki (Kohta Taidehalli) Kunsthalle Turku, Turku (Turun Taidehalli) France La Kunsthalle Mulhouse, Alsace Château de Montsoreau-Museum of Contemporary Art, Montsoreau Georgia Kunsthalle Tbilisi, Tbilisi Germany Kunsthalle Baden-Baden (state-run) Kunsthalle Bielefeld — with permanent collection (municipal) Kunsthalle Bonn (German federal) Kunsthalle Bremen — with a permanent collection (Kunstverein in Bremen) Kunsthalle Bremerhaven (Kunstverein Bremerhaven) Kunsthalle Darmstadt (Kunstverein Darmstadt) Kunsthalle Düsseldorf (municipal) Kunsthalle in Emden — with permanent collection (foundation) Kunsthalle Erfurt (municipal/Erfurter Kunstverein) Schirn Kunsthalle Frankfurt, Frankfurt (municipal) Kunsthalle Hamburg — with permanent collection, see Hamburger Kunsthalle (state-run) Kunsthalle Göppingen (municipal/Kunstverein Göppingen) Kunsthalle Karlsruhe — with permanent collection (state-run) Kunsthalle Fridericianum Kassel, Fridericianum (municipal) Kunsthalle zu Kiel — with permanent collection (state-run) Kunsthalle Königsberg, now a market in Kaliningrad Kunsthalle der Sparkasse Leipzig (foundation) Kunsthalle Kunstverein Lingen (Kunstverein Lingen) Kunsthalle Mainz, Mainz Kunsthalle Mannheim — with permanent collection (municipal) Kunsthalle Münster, Münster Kunsthalle Nürnberg (municipal) Kunsthalle Rostock, Rostock Kunsthaus Tacheles, Berlin Kunsthalle Tübingen — with permanent collection (municipal/foundation) Kunsthalle Wilhelmshaven (municipal/Verein der Kunstfreunde Wilhelmshaven) Italy AnonimaKunsthalle, Varese Kunsthalle Bozen, Bolzano Kunsthalle Meran, Merano Netherlands Kunsthal KAdE, Amersfoort Kunsthal Rotterdam, Rotterdam Norway Bergen Kunsthall, Bergen Kunsthall Oslo, Oslo Kunsthall Stavanger, Stavanger Kunsthall Trondheim, Trondheim 
Poland Kunsthalle Breslau/Wrocław Kunsthalle Danzig/Gdańsk Portugal Kunsthalle Lissabon, Lisbon, Portugal Romania Kunsthalle Bega/Timișoara Sweden Switzerland Kunsthalle Arbon Kunsthalle Basel (Basler Kunstverein) Kunsthalle Bern (Verein der Kunsthalle Bern) Fri Art Kunsthalle, Fribourg Neue Kunst Halle St. Gallen (foundation) Kunsthalle Zürich (municipal/Verein Kunsthalle Zürich) Kunsthaus Zürich United States New Museum, New York City, New York Aspen Art Museum, Aspen, Colorado Institute of Contemporary Art, San Jose, California MassArt Art Museum, Boston, Massachusetts Kunsthalle Detroit, Michigan Contemporary Arts Museum Houston, Texas Portsmouth Museum of Art Dallas Contemporary Texas MOCA Ohio Institute of Contemporary Art, Philadelphia, Pennsylvania The Renaissance Society at the University of Chicago Contemporary Art Museum St. Louis Center for Maine Contemporary Art, Rockland, Maine Blaffer Art Museum University of Houston, Texas Moss Arts Center Virginia Tech, Blacksburg, Virginia Sarasota Art Museum Ringling College of Art and Design, Sarasota, FL Other countries Kunsthalle Budapest, Budapest, Hungary Kunsthalle Praha, Prague, Czechia Kunsthalle Košice, Košice (German: Kaschau), Slovakia Kunsthalle Bratislava, Bratislava, Slovakia
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pyraminx Crystal** Pyraminx Crystal: The Pyraminx Crystal (also called a Chrysanthemum puzzle) is a dodecahedral puzzle similar to the Rubik's Cube and the Megaminx. It is manufactured by Uwe Mèffert and has been sold in his puzzle shop since 2008. The puzzle was originally called the Brilic, and was first made in 2006 by Aleh Hladzilin, a member of the Twisty Puzzles Forum. It is not to be confused with the Pyraminx, which was also invented and sold by Mèffert. History: The Pyraminx Crystal was patented in Europe on July 16, 1987. The patent number is DE8707783U. In late 2007, due to requests by puzzle fans worldwide, Uwe Mèffert began manufacturing the puzzle. The puzzles were first shipped in February 2008. There are two 12-color versions, one with the black body commonly used for the Rubik's Cube and its variations, and one with a white body. The puzzle company QJ started manufacturing this puzzle in 2010, leading Meffert's Puzzles to file a lawsuit against QJ. The Pyraminx Crystal ran out of stock fairly quickly, and became a collector's puzzle. In October 2011, a new set was created with some slight improvements to the quality. Description: The puzzle consists of a dodecahedron sliced in such a way that each slice cuts through the centers of five different pentagonal faces. This cuts the puzzle into 20 corner pieces and 30 edge pieces, with 50 pieces in total. Each face consists of five corners and five edges. When a face is turned, these pieces and five additional edges move with it. Each corner is shared by 3 faces, and each edge is shared by 2 faces. By alternately rotating adjacent faces, the pieces may be permuted. The goal of the puzzle is to scramble the colors, and then return it to its original state. Solutions: The puzzle is essentially a deeper-cut version of the Megaminx, and the same algorithms used for solving the Megaminx's corners may be used to solve the corners on the Pyraminx Crystal. The edge pieces can then be permuted by a simple 4-twist algorithm, R L' R' L, that leaves the corners untouched, in a manner similar to the Pyraminx. This can be applied repeatedly until the edges are solved. Number of combinations: There are 30 edge pieces with 2 orientations each, and 20 corner pieces with 3 orientations each, giving a maximum of 30!·2^30·20!·3^20 possible combinations. However, this limit is not reached because: Only even permutations of edges are possible, reducing the possible edge arrangements to 30!/2. The orientation of the last edge is determined by the orientation of the other edges, reducing the number of edge orientations to 2^29. Only even permutations of corners are possible, reducing the possible corner arrangements to 20!/2. The orientation of the last corner is determined by the orientation of the other corners, reducing the number of corner combinations to 3^19. Number of combinations: The orientation of the puzzle does not matter (since there are no fixed face centers to serve as reference points), dividing the final total by 60. There are 60 possible positions and orientations of the first corner, but all of them are equivalent because of the lack of face centers. This gives a total of 30!·2^27·20!·3^19/60 ≈ 1.68×10^66 possible combinations. Number of combinations: The full figure is 1 677 826 942 558 722 452 041 933 871 894 091 752 811 468 606 850 329 477 120 000 000 000 (roughly 1.68 unvigintillion on the short scale or 1.68 undecillion on the long scale).
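The count can be checked with a few lines of Python. This is only a sketch of the arithmetic described above (even permutations of edges and corners, one dependent orientation in each group, and division by 60 for whole-puzzle rotations); the variable names are illustrative and not part of the original article.

```python
from math import factorial

# Only even permutations of the 30 edges and 20 corners are reachable.
edge_permutations = factorial(30) // 2
corner_permutations = factorial(20) // 2

# The last edge (2 states) and last corner (3 states) are fixed by the others,
# leaving 2^29 and 3^19 free orientation choices.
edge_orientations = 2 ** 29
corner_orientations = 3 ** 19

# With no fixed face centers, 60 whole-puzzle rotations are equivalent,
# so the raw count is divided by 60.
total = (edge_permutations * corner_permutations *
         edge_orientations * corner_orientations) // 60

print(total)              # should reproduce the full figure quoted above
print(f"{total:.3e}")     # roughly 1.68e+66
```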
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pony ride** Pony ride: A pony ride is an opportunity for children to ride real ponies for a short time, usually seen at fairs, guest ranches, zoos, summer camps, private children's parties and similar places. Children on pony rides do not handle the pony themselves, but they need to be old enough to sit up straight and hold their head up without support. Pony rides may be given on individually hand-led ponies, or in a group of ponies, usually four to six, placed on a "pony wheel," a small type of hot walker that leads all ponies in a walk on a small circle so that fewer handlers are needed. Safety is a paramount concern and insurance companies consider pony rides to be a high-risk activity. There are concerns about the welfare of some ponies used for pony rides. Types of ponies: Ponies for younger children generally are under 14 hands (56 inches, 142 cm), and often much smaller. A rule of thumb is that the legs of the child should reach at least halfway down the sides of the pony. The Shetland pony is a breed often used for pony rides. Best practices advise that ponies be at least 4+1⁄2 years old. Stallions are not appropriate for pony rides, and when mares are used, they should not work while they are in heat. Children ages and sizes: Recommendations vary with the size of the pony, but children who participate in pony rides need to be able to sit up and hold their head up without support, thus children under the age of one are too small to safely ride ponies. Best practices are that children be at least three years old, but some reputable programs accept children age two and up. Maximum size of riders usually correlates to the size of the pony, but standards range from under 80 pounds (36 kg) to about 100 pounds (45 kg). Weight, not age, usually limits the biggest riders, but some programs require participants to be no older than 12. Safety: Pony rides are considered a high risk equine activity. Pony ride operators are generally advised to carry liability insurance and to hire staff who are experienced with horses. Equestrian helmets are mandated by law for children in some places, and their use for all children is considered a best practice. Staff should have first aid certification and be covered by workers' compensation insurance.The safest method for in-hand pony rides is to have two people with each child, one on either side of the pony, similar to the methods used for therapeutic horseback riding. Trained staff should handle the pony, help the child get on and off the pony, and be sure equipment is properly adjusted. Most parents should not be asked to handle the pony, because parents usually lack horse experience and knowledge. That said, where a second person is used as a "spotter" to help balance the child, a parent can fill that role, so long as they are healthy enough to keep up with the pony and able to remain calm around the animal.A pony wheel eliminates the need for a separate person to lead each pony, but a parent or other spotter can still walk beside the animal to help steady the child. Other methods of controlling the pony with a child on board, such as ponying from another horse or riding double are generally considered unsafe.Modern standards state that children are never to be belted or strapped onto ponies. It was once common for children to be belted to the saddle by velcro or leather straps on pony wheel rides, though this was never considered a safe practice for in-hand pony rides. 
Safety studies conducted in 1999 led to recommendations that children not be belted onto ponies in any setting. Stirrups, when used, need to be adjusted to fit each child. Safety: Enclosures Pony rides need to be conducted in an enclosed area to help contain a pony that might escape and to give the pony a visual boundary. Low or flimsy fencing is not a best practice. Welded pipe panels are considered safe for portable fencing, such as at fairs. Settings with permanent pony rides that put up wooden fences need rails or planks to be placed on the inside of the fenceposts so that children do not hit their legs and feet on the posts. Training and equipment: Ponies used for pony rides need to be quiet, well-trained, and desensitized to children, noise, and crowds. At fairs in particular, ponies are exposed to loud noises and traffic. Ponies are usually given western saddles for children's rides because they are less likely to slip and children can hang onto the saddle horn. Straps or loops should not be added to the saddle because children's hands can be caught in them. Saddles need to be properly fitted to the pony for its welfare and comfort. Stirrups, when used, should be wider than for regular riding to help prevent children's feet from getting caught, particularly because many children who take pony rides are wearing sneakers instead of boots. Tapaderos over the stirrups can help prevent a foot from going all the way through the stirrup and getting trapped, but only if properly designed so a child's foot does not get wedged between the tapadero and the front of the stirrup. To protect the pony's mouth, and because ponies are led rather than having the child control the pony directly, a halter or caveson is used, rather than a bit and bridle. Side reins are not advised for pony rides. On hand-led rides, leading with a dog obedience chain added around the nose as a lead shank for safety and extra control is recommended. Pony wheel rides are also sometimes called "carousel" rides. Pony wheels are often custom-manufactured. The largest pony wheels can accommodate up to 11 or 12 ponies, but most accommodate four to six. Training and equipment: Hand-led rides can be held in an area about 40 by 80 feet (12 m × 24 m), which is large enough to move around, but confines the pony in case of problems. For hand-led rides, a mounting block or ramp can be used to help children get on and off the pony. Pony welfare and law: The United States Department of Agriculture mandates that carnivals that exhibit animals, roadside zoos and many similar programs be licensed or registered to operate under the Animal Welfare Act of 1966, 7 U.S.C. § 2131 et seq., which protects animals not raised for food or fiber. The Act requires that animals have "adequate housing, sanitation, nutrition, water and veterinary care, and ... [protection] from extreme weather and temperatures." There also has to be an adequate number of handlers. While horse and pony rides can sometimes be exempt, because equines are "farm animals" under 9 CFR §1.1, and exhibitors at fairs and horse shows do not fall within the regulatory definitions, if they are part of a petting zoo or carnival, they fall under the statute. Care for working ponies includes using fly spray in the summer and providing regular access to water. There should be good footing for the ponies, such as sand or shavings brought in to put on top of pavement, but a clay lot or grassy area can also be used. Children need basic instructions to not scream or poke at the animals. 
Providing instruction to children on sitting up straight and how to hold their legs is a best practice. Some animal rights advocates oppose pony rides, suggesting that a merry-go-round is an acceptable substitute. The official position of the American Society for the Prevention of Cruelty to Animals is "The ASPCA is opposed to the cruelty that is inherent in ... attractions such as elephant rides, camel rides, and llama and pony rides that either stand alone or are attached to [petting zoos]." The concerns of animal rights and animal welfare advocates generally relate to ponies being subjected to harassment from the public, not getting enough water, and lacking rest. Sometimes there is also criticism that ponies are overfed and obese.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**UBAP2** UBAP2: Ubiquitin-associated protein 2 is a protein that in humans is encoded by the UBAP2 gene. Function: This gene was isolated based on its expression in the human adrenal gland. The full-length protein encoded by this gene contains a UBA domain (ubiquitin-associated domain), a motif found in several proteins having connections to ubiquitin and the ubiquitination pathway. In addition, the protein contains a region similar to a domain found in members of the atrophin-1 family. The function of this protein has not been determined. Additional alternate splice variants may exist, but their full-length nature has not been determined.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Just-in-time learning** Just-in-time learning: Just-in-time learning is an approach to individual or organizational learning and development that promotes making need-related training readily available exactly when and how it is needed by the learner. Methodology: Just-in-time learning is different from structured training or scheduled professional development, both of which are generally available at set dates and times. What makes just-in-time learning unique is a strategy focused on meeting the learner's need when it arises, rather than pre-scheduled education sessions that occur regardless of the immediacy or scope of need. Therefore, planning for just-in-time learning requires anticipating what is needed by the various learners, when and where they may be when they experience the need, and the creation of content oriented toward meeting those needs in ways that are focused and accessible. The learning that is provided in a just-in-time format is often delivered by short online videos, targeted e-learning, printed and accessible job aids, or related real-world information. It is timed and packaged to meet one explicit need and nothing else, so as not to overwhelm the learner with anything that does not meet the immediate need. Information can be provided through traditional paper, online, or through mobile devices depending upon need and availability. It is essential that the information is findable and understandable by the person who needs it; otherwise the person will become distracted or lose focus, defeating the benefits of just-in-time learning. Meeting only the immediate need helps with knowledge retention and promotes feelings of empowerment. Therefore, one of the criteria used to assess learning is the speed of connecting the person who needs something with the learning that helps get it done. Success criteria: Because just-in-time learning is often conflated with reusable learning objects, similar success criteria may be applied to them. Evidence of successful use of just-in-time learning includes higher learner satisfaction, decreased costs, and even increased patient-centered outcomes when implemented within health settings.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interpenetrating polymer network** Interpenetrating polymer network: An Interpenetrating polymer network (IPN) is a polymer comprising two or more networks which are at least partially interlaced on a polymer scale but not covalently bonded to each other. The network cannot be separated unless chemical bonds are broken. The two or more networks can be envisioned to be entangled in such a way that they are concatenated and cannot be pulled apart, but not bonded to each other by any chemical bond. Simply mixing two or more polymers does not create an interpenetrating polymer network (polymer blend), nor does creating a polymer network out of more than one kind of monomers which are bonded to each other to form one network (heteropolymer or copolymer). There are semi-interpenetrating polymer networks (SIPN) and pseudo-interpenetrating polymer networks.To prepare IPNs and SIPNs, the different components are formed simultaneously or sequentially. History: The first known IPN was a combination of phenol-formaldehyde resin with vulcanized natural rubber made by Jonas Aylsworth in 1914. However, this was before Staudinger's hypothesis on macromolecules and thus the terms "polymer" or "IPN" were not yet used. The first usage of the term "interpenetrating polymer networks" was first introduced by J.R. Millar in 1960 while discussing networks of sulfonated and unsulfonated styrene–divinylbenzene copolymers. Mechanical Properties: Molecular intermixing tends to broaden the glass transition regions of some IPN materials compared to their component polymers. This unique characteristic provides excellent mechanical damping properties over a wide range of temperatures and frequencies due to a relatively constant and high phase angle. In IPNs composed of both rubbery and glassy polymers, considerable toughening is observed compared to the constituent polymers. When the glassy component forms a discrete, discontinuous phase, the elastomeric nature of the continuous rubbery phase can be preserved while increasing the overall toughness of the material and its elongation at break. On the other hand, when the glassy polymer forms a bicontinuous phase within the rubbery network, the IPN material can behave like an impact-resistant plastic. Morphology: Most IPNs do not interpenetrate completely on a molecular scale, but rather form small dispersed or bicontinuous phase morphologies with characteristic length scales on the order of tens of nanometers. However, since these length scales are relatively small, they are often considered homogeneous on a macroscopic scale. The characteristic lengths associated with these domains often scale with the length of chains between crosslinks, and thus the morphology of the phases is often dictated by the crosslinking density of the constituent networks. The kinetics of phase separation in IPNs can arise from both nucleation and growth and spinodal decomposition mechanisms, with the former producing discrete phases akin to dispersed spheres and the latter forming bicontinuous phases akin to interconnected cylinders. Contrary to many typical phase separation processes, coarsening, where the length scale of the phases tends to increase over time, can be impeded by the formation of crosslinks in either network. Furthermore, IPNs are often able to maintain these complex morphologies over long periods of time compared to what could be achieved by simple polymer blends. 
Applications: IPNs have been used in automotive parts (including modern automotive paint), damping materials, medical devices, molding compounds, and in engineering plastics. While many benefits come from the enhanced mechanical properties of the IPN materials, other characteristics such as resistance to solvent swelling can also make IPNs a material of commercial interest. More recent applications and areas of research for IPNs include uses in drug delivery systems, energy storage materials, and tissue engineering.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Service Availability Forum** Service Availability Forum: The Service Availability Forum (SAF or SA Forum) is a consortium that develops, publishes, educates on and promotes open specifications for carrier-grade and mission-critical systems. Formed in 2001, it promotes development and deployment of commercial off-the-shelf (COTS) technology. Description: Service availability is an extension of high availability, referring to services that are available regardless of hardware, software or user fault and importance. Description: Key principles of service availability: Redundancy – "backup" capability in case of need to failover due to a fault Stateful and seamless recovery from failures Minimization of mean time to repair (MTTR) – time to restore service after an outage Fault prediction & avoidance – take action before something fails. The traditional definitions of high availability have their roots in hardware systems where redundancy of equipment was the primary mechanism for achieving uptime over a specific period. As software has come to dominate the landscape, the probability of failure is often much higher for applications than it is for hardware, and so these concepts have been extended to encompass an overall view of service availability where downtime, irrespective of its cause, is an exceptionally rare event. Services and applications should always be available, whether it is during abnormal system operation, scheduled maintenance, or software upgrade, for example. Description: The SA Forum supports commercial off-the-shelf (COTS) technology for uninterrupted service availability, application portability and seamless integration. Collaborating industry organizations include the following: CP-TA (Communications Platforms Trade Association): ensure interoperability on xTCA platforms. PICMG (PCI Industrial Computer Manufacturers Group): develop open specifications that adapt PCI technology for use in high-performance telecommunications and industrial computing applications. SCOPE Alliance: enable and promote the availability of open carrier grade base platforms based on COTS hardware / software and Free and open-source software (FOSS) building blocks, and to promote interoperability between such components. The Linux Foundation: promote, protect, and standardize Linux by providing unified resources and services needed for open source to successfully compete with closed platforms. Specifications Specifications for carrier-grade service availability include: Hardware Platform Interface (HPI) Application Interface Specification (AIS) Mapping Specifications Java Mapping Specifications HPI-to-AdvancedTCA Mapping Specifications Educational resources The SA Forum free educational materials enable self-guided training on the SA Forum specifications: Application Webcasts Tutorials Whitepapers
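To illustrate why minimizing MTTR is singled out as a key principle, the following Python sketch applies the standard steady-state availability relation, availability = MTBF / (MTBF + MTTR). This formula is general reliability-engineering background rather than something defined by the SA Forum specifications, and the numbers are hypothetical.

```python
def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time a service is up, given mean time between failures
    (MTBF) and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical example: a service failing about once every 10,000 hours that
# is restored in 6 minutes (0.1 h) reaches roughly "five nines" availability.
print(steady_state_availability(10_000, 0.1))  # ~0.99999
```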
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quinupristin** Quinupristin: Quinupristin is a streptogramin B antibiotic, used in combination with dalfopristin under the trade name Synercid. It has activity against Gram-positive and atypical bacteria but not Gram-negative bacteria. It inhibits bacterial protein synthesis. The combination of quinupristin and dalfopristin is not active against Enterococcus faecalis and needs to be given in combination with other antibacterials for mixed infections that involve Gram-negative organisms.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Volcanic plug** Volcanic plug: A volcanic plug, also called a volcanic neck or lava neck, is a volcanic object created when magma hardens within a vent on an active volcano. When present, a plug can cause an extreme build-up of high gas pressure if rising volatile-charged magma is trapped beneath it, and this can sometimes lead to an explosive eruption. In a plinian eruption the plug is destroyed and ash is ejected.Glacial erosion can lead to exposure of the plug on one side, while a long slope of material remains on the opposite side. Such landforms are called crag and tail. If a plug is preserved, erosion may remove the surrounding rock while the erosion-resistant plug remains, producing a distinctive upstanding landform. Examples of volcanic plugs: Africa Near the village of Rhumsiki in the Far North Province of Cameroon, Kapsiki Peak is an example of a volcanic plug and is one of the most photographed parts of the Mandara Mountains. Spectacular volcanic plugs are present in the center of La Gomera island in the Canary Islands archipelago, within the Garajonay National Park. Europe Borgarvirki is a volcanic plug located in north Iceland. A volcanic plug is situated in the town of Motta Sant'Anastasia in Italy. Examples of volcanic plugs: Saint Michel d'Aiguilhe chapel, whose construction started in 969, near Le Puy-en-Velay in France. The volcanic plug rises about 85 metres (279 ft) above the surroundings. Another building on a volcanic plug is the 14th century Trosky Castle in the Czech Republic. Strombolicchio, the northernmost of the Aeolian Islands, and Rockall, a small, uninhabited, remote islet in the North Atlantic Ocean, are also volcanic plugs. Examples of volcanic plugs: In the United Kingdom, two examples of a building on a volcanic plug are the Castle Rock in Edinburgh, Scotland, and Deganwy Castle, Wales. The Law, Dundee, Ailsa Craig, Bass Rock, North Berwick Law and Dumgoyne hill are other examples of volcanic plugs located in Scotland. There are over 30 volcanic plugs in Northern Ireland, including Slemish in Ballymena, Tievebulliagh, Scawt Hill, Carrickarede, Scrabo and Slieve Gallion. Examples of volcanic plugs: North America and the Caribbean There are several volcanic plugs in the United States, including Morro Rock in California, Devils Elbow located in the Heceta Head Lighthouse Scenic State Park on the Oregon coast, Thumb Butte in the Sierra Prieta of Arizona, and Shiprock in New Mexico. Devils Tower in Wyoming and Little Devils Postpile in Yosemite National Park, California, are also believed, by many geologists, to be volcanic plugs. In Canada, the Northern Cordilleran Volcanic Province gives rise to several confirmed and suspected plugs. Chief among these is Castle Rock, located in British Columbia, which last erupted during the Pleistocene. The southern coast of Saint Lucia is dominated by the iconic Pitons, a UNESCO World Heritage Site. The twin peaks, Gros Piton and Petit Piton, steeply rise more than 770 metres (2,530 ft) above the Caribbean. Examples of volcanic plugs: Oceania There are several volcanic plugs in the North Island of New Zealand, including: the Pinnacles in the Coromandel Peninsula Bream Head in Northland Paritutu and the adjacent Sugar Loaf Islands in Taranaki St. Paul's Rock at Whangaroa Harbour Piha's Lion Rock, which hosted a fortified Maori pa. 
Mount Pohaturoa near the village of Atiamuri, a distinctive sight for travelers along State Highway 1. In New Zealand's South Island, Onawe Peninsula on Banks Peninsula is a prominent volcanic plug, and erosion of Saddle Hill near Dunedin has also revealed a plug. Dunedin's Mount Cargill displays two plugs: its main summit and the subsidiary summit of Buttar's Peak. In Australia, The Nut in Tasmania is a further example, along with Mount Warning and several peaks in the Warrumbungles in New South Wales. The 11 peaks of the Glasshouse Mountains National Park in South East Queensland, including Mount Beerwah, Mount Tibrogargan, Mount Coonowrin, Mount Cooroora, Mount Ngungun, Mount Tibberoowuccum, Mount Tunbubudla, and Mount Beerburrum, are volcanic plugs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bayesian econometrics** Bayesian econometrics: Bayesian econometrics is a branch of econometrics which applies Bayesian principles to economic modelling. Bayesianism is based on a degree-of-belief interpretation of probability, as opposed to a relative-frequency interpretation. The Bayesian principle relies on Bayes' theorem, which states that the probability of B conditional on A is the joint probability of A and B divided by the probability of A. Bayesian econometricians assume that coefficients in the model have prior distributions. This approach was first propagated by Arnold Zellner. Basics: Subjective probabilities have to satisfy the standard axioms of probability theory if one wishes to avoid losing a bet regardless of the outcome. Before the data is observed, the parameter θ is regarded as an unknown quantity and thus a random variable, which is assigned a prior distribution π(θ) with 0 ≤ θ ≤ 1. Bayesian analysis concentrates on the inference of the posterior distribution π(θ|y), i.e. the distribution of the random variable θ conditional on the observation of the discrete data y. The posterior density function π(θ|y) can be computed based on Bayes' theorem: π(θ|y) = p(y|θ)π(θ)/p(y), where p(y) = ∫p(y|θ)π(θ)dθ, yielding a normalized probability function. For continuous data y, this corresponds to π(θ|y) = f(y|θ)π(θ)/f(y), where f(y) = ∫f(y|θ)π(θ)dθ, which is the centerpiece of Bayesian statistics and econometrics. It has the following components: π(θ|y): the posterior density function of θ|y; f(y|θ): the likelihood function, i.e. the density function for the observed data y when the parameter value is θ; π(θ): the prior distribution of θ; f(y): the probability density function of y. The posterior function is given by π(θ|y) ∝ f(y|θ)π(θ), i.e., the posterior function is proportional to the product of the likelihood function and the prior distribution, and can be understood as a method of updating information, with the difference between π(θ) and π(θ|y) being the information gain concerning θ after observing new data. The choice of the prior distribution is used to impose restrictions on θ, e.g. 0 ≤ θ ≤ 1, with the beta distribution as a common choice due to (i) being defined between 0 and 1, (ii) being able to produce a variety of shapes, and (iii) yielding a posterior distribution of the standard form if combined with the likelihood function θ^(Σyᵢ)(1−θ)^(n−Σyᵢ). Based on the properties of the beta distribution, an ever-larger sample size implies that the mean of the posterior distribution approximates the maximum likelihood estimator ȳ. Basics: The assumed form of the likelihood function is part of the prior information and has to be justified. Different distributional assumptions can be compared using posterior odds ratios if a priori grounds fail to provide a clear choice. Commonly assumed forms include the beta distribution, the gamma distribution, and the uniform distribution, among others. If the model contains multiple parameters, the parameter can be redefined as a vector. Applying probability theory to that vector of parameters yields the marginal and conditional distributions of individual parameters or parameter groups.
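The beta-binomial updating described above can be sketched in a few lines of Python. The prior parameters and data below are hypothetical and chosen only to illustrate the conjugate update and how the posterior mean moves toward the maximum likelihood estimator ȳ as the sample grows.

```python
# Minimal sketch of conjugate Bayesian updating with Bernoulli data y_i in {0, 1}
# and a Beta(a, b) prior on theta (illustrative values, not from the article).

def posterior_beta(a: float, b: float, y: list[int]) -> tuple[float, float]:
    """Beta prior + Bernoulli likelihood -> Beta posterior, by conjugacy."""
    successes = sum(y)
    failures = len(y) - successes
    return a + successes, b + failures

a, b = 2.0, 2.0                      # prior Beta(2, 2), prior mean 0.5
y = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # observed data: sum = 7, n = 10

a_post, b_post = posterior_beta(a, b, y)
posterior_mean = a_post / (a_post + b_post)   # (2 + 7) / (2 + 2 + 10) ≈ 0.643
mle = sum(y) / len(y)                         # ybar = 0.7

print(posterior_mean, mle)  # with more data, the posterior mean approaches ybar
```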
If data generation is sequential, Bayesian principles imply that the posterior distribution for the parameter based on new evidence will be proportional to the product of the likelihood for the new data, given previous data and the parameter, and the posterior distribution for the parameter, given the old data, which provides an intuitive way of allowing new information to influence beliefs about a parameter through Bayesian updating. If the sample size is large, (i) the prior distribution plays a relatively small role in determining the posterior distribution, (ii) the posterior distribution converges to a degenerate distribution at the true value of the parameter, and (iii) the posterior distribution is approximately normally distributed with mean θ^ History: The ideas underlying Bayesian statistics were developed by Rev. Thomas Bayes during the 18th century and later expanded by Pierre-Simon Laplace. As early as 1950, the potential of the Bayesian inference in econometrics was recognized by Jacob Marschak. The Bayesian approach was first applied to econometrics in the early 1960s by W. D. Fisher, Jacques Drèze, Clifford Hildreth, Thomas J. Rothenberg, George Tiao, and Arnold Zellner. The central motivation behind these early endeavors in Bayesian econometrics was the combination of the parameter estimators with available uncertain information on the model parameters that was not included in a given model formulation. From the mid-1960s to the mid-1970s, the reformulation of econometric techniques along Bayesian principles under the traditional structural approach dominated the research agenda, with Zellner's An Introduction to Bayesian Inference in Econometrics in 1971 as one of its highlights, and thus closely followed the work of frequentist econometrics. Therein, the main technical issues were the difficulty of specifying prior densities without losing either economic interpretation or mathematical tractability and the difficulty of integral calculation in the context of density functions. The result of the Bayesian reformulation program was to highlight the fragility of structural models to uncertain specification. This fragility came to motivate the work of Edward Leamer, who emphatically criticized modelers' tendency to indulge in "post-data model construction" and consequently developed a method of economic modelling based on the selection of regression models according to the types of prior density specification in order to identify the prior structures underlying modelers' working rules in model selection explicitly. Bayesian econometrics also became attractive to Christopher Sims' attempt to move from structural modeling to VAR modeling due to its explicit probability specification of parameter restrictions. Driven by the rapid growth of computing capacities from the mid-1980s on, the application of Markov chain Monte Carlo simulation to statistical and econometric models, first performed in the early 1990s, enabled Bayesian analysis to drastically increase its influence in economics and econometrics. Current research topics: Since the beginning of the 21st century, research in Bayesian econometrics has concentrated on: sampling methods suitable for parallelization and GPU calculations; complex economic models accounting for nonlinear effects and complete predictive densities; analysis of implied model features and decision analysis; incorporation of model incompleteness in econometric analysis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Point-to-Point Tunneling Protocol** Point-to-Point Tunneling Protocol: The Point-to-Point Tunneling Protocol (PPTP) is an obsolete method for implementing virtual private networks. PPTP has many well known security issues. Point-to-Point Tunneling Protocol: PPTP uses a TCP control channel and a Generic Routing Encapsulation tunnel to encapsulate PPP packets. Many modern VPNs use various forms of UDP for this same functionality. The PPTP specification does not describe encryption or authentication features and relies on the Point-to-Point Protocol being tunneled to implement any and all security functionalities. The PPTP implementation that ships with the Microsoft Windows product families implements various levels of authentication and encryption natively as standard features of the Windows PPTP stack. The intended use of this protocol is to provide security levels and remote access levels comparable with typical VPN products. History: A specification for PPTP was published in July 1999 as RFC 2637 and was developed by a vendor consortium formed by Microsoft, Ascend Communications (today part of Nokia), 3Com, and others. PPTP has not been proposed nor ratified as a standard by the Internet Engineering Task Force. Description: A PPTP tunnel is instantiated by communication to the peer on TCP port 1723. This TCP connection is then used to initiate and manage a GRE tunnel to the same peer. The PPTP GRE packet format is non standard, including a new acknowledgement number field replacing the typical routing field in the GRE header. However, as in a normal GRE connection, those modified GRE packets are directly encapsulated into IP packets, and seen as IP protocol number 47. The GRE tunnel is used to carry encapsulated PPP packets, allowing the tunnelling of any protocols that can be carried within PPP, including IP, NetBEUI and IPX. Description: In the Microsoft implementation, the tunneled PPP traffic can be authenticated with PAP, CHAP, MS-CHAP v1/v2 . Security: PPTP has been the subject of many security analyses and serious security vulnerabilities have been found in the protocol. The known vulnerabilities relate to the underlying PPP authentication protocols used, the design of the MPPE protocol as well as the integration between MPPE and PPP authentication for session key establishment.A summary of these vulnerabilities is below: MS-CHAP-v1 is fundamentally insecure. Tools exist to trivially extract the NT Password hashes from a captured MSCHAP-v1 exchange. Security: When using MS-CHAP-v1, MPPE uses the same RC4 session key for encryption in both directions of the communication flow. This can be cryptanalysed with standard methods by XORing the streams from each direction together. MS-CHAP-v2 is vulnerable to dictionary attacks on the captured challenge response packets. Tools exist to perform this process rapidly. In 2012, it was demonstrated that the complexity of a brute-force attack on a MS-CHAP-v2 key is equivalent to a brute-force attack on a single DES key. An online service was also demonstrated which is capable of decrypting a MS-CHAP-v2 MD4 passphrase in 23 hours. Security: MPPE uses the RC4 stream cipher for encryption. There is no method for authentication of the ciphertext stream and therefore the ciphertext is vulnerable to a bit-flipping attack. An attacker could modify the stream in transit and adjust single bits to change the output stream without possibility of detection. 
These bit flips may be detected by the protocols themselves through checksums or other means. EAP-TLS is seen as the superior authentication choice for PPTP; however, it requires implementation of a public-key infrastructure for both client and server certificates. As such, it may not be a viable authentication option for some remote access installations. Most networks that use PPTP have to apply additional security measures or be deemed completely inappropriate for the modern internet environment. At the same time, doing so means negating the aforementioned benefits of the protocol to some extent.
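Since the control channel is just a TCP connection to port 1723, as described above, a minimal reachability probe can be written with the Python standard library. This sketch only checks whether the port accepts a connection; it does not implement the PPTP control protocol itself, and the host name is hypothetical.

```python
import socket

def pptp_port_open(host: str, timeout: float = 3.0) -> bool:
    """Return True if the host accepts a TCP connection on port 1723,
    the PPTP control channel port."""
    try:
        with socket.create_connection((host, 1723), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical host name, for illustration only.
print(pptp_port_open("vpn.example.com"))
```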
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**USB flash drive security** USB flash drive security: Secure USB flash drives protect the data stored on them from access by unauthorized users. USB flash drive products have been on the market since 2000, and their use is increasing exponentially. As both consumers and businesses have increased demand for these drives, manufacturers are producing faster devices with greater data storage capacities. An increasing number of portable devices are used in business, such as laptops, notebooks, personal digital assistants (PDA), smartphones, USB flash drives and other mobile devices. USB flash drive security: Companies in particular are at risk when sensitive data are stored on unsecured USB flash drives by employees who use the devices to transport data outside the office. The consequences of losing drives loaded with such information can be significant, including the loss of customer data, financial information, business plans and other confidential information, with the associated risk of reputation damage. Major dangers of USB drives: USB flash drives pose two major challenges to information system security: data leakage owing to their small size and ubiquity and system compromise through infections from computer viruses, malware and spyware. Major dangers of USB drives: Data leakage The large storage capacity of USB flash drives relative to their small size and low cost means that using them for data storage without adequate operational and logical controls may pose a serious threat to information availability, confidentiality and integrity. The following factors should be taken into consideration for securing important assets: Storage: USB flash drives are hard to track physically, being stored in bags, backpacks, laptop cases, jackets, trouser pockets or left at unattended workstations. Major dangers of USB drives: Usage: tracking corporate data stored on personal flash drives is a significant challenge; the drives are small, common and constantly moving. While many enterprises have strict management policies toward USB drives and some companies ban them outright to minimize risk, others seem unaware of the risks these devices pose to system security.The average cost of a data breach from any source (not necessarily a flash drive) ranges from less than $100,000 to about $2.5 million.A SanDisk survey characterized the data corporate end users most frequently copy: Customer data (25%) Financial information (17%) Business plans (15%) Employee data (13%) Marketing plans (13%) Intellectual property (6%) Source code (6%)Examples of security breaches resulting from USB drives include: In the UK: HM Revenue & Customs lost personal details of 6,500 private pension holders In the United States: a USB drive was stolen with names, grades, and social security numbers of 6,500 former students USB flash drives with US Army classified military information were up for sale at a bazaar outside Bagram, Afghanistan. Major dangers of USB drives: Malware infections In the early days of computer viruses, malware, and spyware, the primary means of transmission and infection was the floppy disk. Today, USB flash drives perform the same data and software storage and transfer role as the floppy disk, often used to transfer files between computers which may be on different networks, in different offices, or owned by different people. This has made USB flash drives a leading form of information system infection. 
When a piece of malware gets onto a USB flash drive, it may infect the devices into which that drive is subsequently plugged. Major dangers of USB drives: The prevalence of malware infection by means of USB flash drive was documented in a 2011 Microsoft study analyzing data from more than 600 million systems worldwide in the first half of 2011. The study found that 26 percent of all malware infections of Windows system were due to USB flash drives exploiting the AutoRun feature in Microsoft Windows. That finding was in line with other statistics, such as the monthly reporting of most commonly detected malware by antivirus company ESET, which lists abuse of autorun.inf as first among the top ten threats in 2011.The Windows autorun.inf file contains information on programs meant to run automatically when removable media (often USB flash drives and similar devices) are accessed by a Windows PC user. The default Autorun setting in Windows versions prior to Windows 7 will automatically run a program listed in the autorun.inf file when you access many kinds of removable media. Many types of malware copy themselves to removable storage devices: while this is not always the program's primary distribution mechanism, malware authors often build in additional infection techniques. Major dangers of USB drives: Examples of malware spread by USB flash drives include: The Duqu collection of computer malware. The Flame modular computer malware. The Stuxnet malicious computer worm. Solutions: Since the security of the physical drive cannot be guaranteed without compromising the benefits of portability, security measures are primarily devoted to making the data on a compromised drive inaccessible to unauthorized users and unauthorized processes, such as may be executed by malware. One common approach is to encrypt the data for storage and routinely scan USB flash drives for computer viruses, malware and spyware with an antivirus program, although other methods are possible. Solutions: Software encryption Software solutions such as BitLocker, DiskCryptor and the popular VeraCrypt allow the contents of a USB drive to be encrypted automatically and transparently. Also, Windows 7 Enterprise, Windows 7 Ultimate and Windows Server 2008 R2 provide USB drive encryption using BitLocker to Go. The Apple Computer Mac OS X operating system has provided software for disc data encryption since Mac OS X Panther was issued in 2003 (see also: Disk Utility).Additional software can be installed on an external USB drive to prevent access to files in case the drive becomes lost or stolen. Installing software on company computers may help track and minimize risk by recording the interactions between any USB drive and the computer and storing them in a centralized database. Solutions: Hardware encryption Some USB drives utilize hardware encryption in which microchips within the USB drive provide automatic and transparent encryption. Some manufacturers offer drives that require a pin code to be entered into a physical keypad on the device before allowing access to the drive. The cost of these USB drives can be significant but is starting to fall due to this type of USB drive gaining popularity. Solutions: Hardware systems may offer additional features, such as the ability to automatically overwrite the contents of the drive if the wrong password is entered more than a certain number of times. This type of functionality cannot be provided by a software system since the encrypted data can simply be copied from the drive. 
However, this form of hardware security can result in data loss if activated accidentally by legitimate users, and strong encryption algorithms essentially make such functionality redundant. Solutions: As the encryption keys used in hardware encryption are typically never stored in the computer's memory, technically hardware solutions are less subject to "cold boot" attacks than software-based systems. In reality, however, "cold boot" attacks pose little (if any) threat, assuming basic, rudimentary security precautions are taken with software-based systems. Solutions: Compromised systems The security of encrypted flash drives is constantly tested by individual hackers as well as professional security firms. At times (as in January 2010) flash drives that have been positioned as secure were found to have been poorly designed such that they provide little or no actual security, giving access to data without knowledge of the correct password. Flash drives that have been compromised (and claimed to now be fixed) include: SanDisk Cruzer Enterprise Kingston DataTraveler BlackBox Verbatim Corporate Secure USB Flash Drive Trek Technology ThumbDrive CRYPTO. All of the above companies reacted immediately. Kingston offered replacement drives with a different security architecture. SanDisk, Verbatim, and Trek released patches. Remote management: In commercial environments, where most secure USB drives are used, a central/remote management system may provide organizations with an additional level of IT asset control, significantly reducing the risks of a harmful data breach. This can include initial user deployment and ongoing management, password recovery, data backup, remote tracking of sensitive data and termination of any issued secure USB drives. Such management systems are available as software as a service (SaaS), where Internet connectivity is allowed, or as behind-the-firewall solutions. SecureData, Inc. offers a software-free Remote Management Console that runs from a browser. By using an app on a smartphone, administrators can manage who, when and where USB devices were last accessed, with a complete audit trail. It is used by hospitals, large enterprises, universities and the federal government to track access and protect data in transit and at rest.
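As a rough illustration of the software-encryption approach discussed above (encrypting data before it is written to a removable drive), the following Python sketch uses the third-party cryptography package. It is a generic example, not how BitLocker or VeraCrypt work internally; the file paths are hypothetical, and safe key storage (never on the same drive as the ciphertext) is deliberately left out of scope.

```python
from cryptography.fernet import Fernet  # AES-based authenticated encryption

key = Fernet.generate_key()   # keep this key somewhere other than the USB drive
fernet = Fernet(key)

# Encrypt a local file before copying it to the removable drive.
with open("report.docx", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# Write only the ciphertext to the flash drive (illustrative mount path).
with open("/media/usb/report.docx.enc", "wb") as f:
    f.write(ciphertext)

# Later, on a trusted machine that holds the key, recover the plaintext.
with open("/media/usb/report.docx.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```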
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Production equipment control** Production equipment control: Production equipment control involves production equipment that resides on the shop floor of a manufacturing company; its purpose is to produce goods of a desired quality when provided with production resources of a required quality. In modern production lines the production equipment is fully automated using industrial control methods and involves limited unskilled labour participation. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The most widely known architectures involve hierarchy, polyarchy, heterarchy and hybrid forms. The methods for achieving a technical effect are described by control algorithms, which may or may not utilize formal methods in their design.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PIN proteins** PIN proteins: PIN proteins are integral membrane proteins in plants that transport the anionic form of the hormone auxin across membranes. Most of the PIN proteins (e.g. PIN1/2/3/4/7 in the model plant Arabidopsis thaliana) localize at the plasma membrane (PM) where they serve as secondary active transporters involved in the efflux of auxin. The PM-localized PIN proteins show asymmetrical localisations on the membrane and are therefore responsible for polar auxin transport. Some other members of the PIN family (e.g. PIN5 and 8 in Arabidopsis) localize mostly at the ER-membrane or have a dual PM and ER localisation (e.g. PIN6 in Arabidopsis). These PIN proteins regulate the partitioning of auxin within the cell. PIN proteins: The PM-localized PIN proteins physically interact with a few members of the large PGP family of transporters that also work as auxin efflux carriers (PGP1 and PGP19 in Arabidopsis). These interactions result in a synergistic increase in auxin efflux. The activity and localization of the PM-localized PIN proteins is regulated by several phosphorylations on their large cytosolic hydrophilic loop carried out by kinases of the AGC family (e.g. PID, WAG1, WAG2, PID2 in Arabidopsis) and the D6PK kinase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tube (container)** Tube (container): A tube, squeeze tube, or collapsible tube is a collapsible package which can be used for viscous liquids such as toothpaste, artist's paint, adhesive, caulk, & ointments. Basically, a tube is a cylindrical, hollow piece with a round or oval profile, made of plastic, paperboard, aluminum, or other metal. In general, on one end of the tube body there is a round orifice, which can be closed by different caps and closures. The orifice can be shaped in many different ways: plastic nozzles in various styles and lengths are most typical. The other end is sealed either by welding or by folding. Tube (container): Typical tube sizes range from 3ml to 300ml. Most tubes are designed to be dispensed with hand pressure, but some are used with a tube key at the base to help roll them up. History: John Goffe Rand, an American portrait painter, invented the squeezable metal tube in 1841 for paint.Toothpaste in a tube was introduced by Johnson & Johnson in 1889. Not much later, a New London dentist, Washington Sheffield, started selling toothpaste in lead tubes in the 1890s. Materials: The earliest collapsible tubes were made of tin, zinc, or lead, sometimes coated with wax on the inside.Aluminum tube caps and closures are generally threaded, as is the nozzle. Aluminium tubes generally have the far end folded several times after the contents have been added. The tube is typically hermetically sealed and nearly germ-free due to the high temperatures during the production process. The inside of the tube can be coated to prevent the material from reacting with the contents. Aluminum tubes are produced by impact extrusion from a small round blank. Designs can be printed onto the tube, using the wet-in-wet offset printing method. Six tones are often used. The filled content can be squeezed out by finger pressure. The main characteristic of aluminium tubes is the total separation of the contents from the surrounding atmosphere; therefore, such tubes are especially suitable for the packaging of highly perishable contents. Aluminium tubes are used for cosmetics, pharmaceuticals, food, paint, and technical products. Materials: Tubes can also be produced in plastic, most commonly polyethylene. Plastic tubes are used for cosmetics such as hand creams, and also some foodstuffs. The plastic tube retains its shape after each squeeze unlike laminate tubes such as toothpaste tubes. Plastic tubes can be highly decorated or have a special additive such as soft touch to make the tube more appealing during use or at the point of sale. Plastic tubes are produced by extrusion. A sleeve is first produced on a specialised extrusion machine. It must be produced to a very high standard (for decoration purposes) and also to tight tolerances, compatible with automated processes after extrusion. Once the sleeve is produced, the tube head is fitted using an automated heading machine. Tube printing using specialised printing machines such as silk screen printing applies the desired decoration. The open tubes are typically filled and sealed at a separate facility. Multi-layer plastic tubes have become increasingly popular; they isolate the contents better from the air, allowing them to be used for a wider range of products, such as food. 
Applications: Many commodities are commonly sold in collapsible tubes: Toothpaste, formerly metal, now of plastic Viscous artists' paint, including oil paint, acrylic paint, and concentrated watercolor paints (which must be diluted with water); typically metal Pastes used in food, such as anchovy paste, tomato paste, mustard, mayonnaise Pharmaceutical ointments Adhesives, caulk, glue, and sealants (larger quantities may use rigid caulk cartridges) Other cosmetics and gels
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cab rank** Cab rank: A cab rank (in British English) or taxicab stand (in American English) is an area where taxicabs queue to await passengers. Cab rank: Cab rank may also refer to: Bank (in Cockney rhyming slang) A flying reserve of fighter-bomber aircraft that can be called in to provide close air support (the term cab rank was used by the RAF during World War II) Cab-rank rule, the obligation in English law of a barrister to accept any work in a field in which he professes himself competent, at a court at which he normally appears and at his usual rates
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Blood-oxygen-level-dependent imaging** Blood-oxygen-level-dependent imaging: Blood-oxygen-level-dependent imaging, or BOLD-contrast imaging, is a method used in functional magnetic resonance imaging (fMRI) to observe different areas of the brain or other organs, which are found to be active at any given time. Theory: Neurons do not have internal reserves of energy in the form of sugar and oxygen, so their firing causes a need for more energy to be brought in quickly. Through a process called the haemodynamic response, blood releases oxygen to active neurons at a greater rate than to inactive neurons. This causes a change in the relative levels of oxyhemoglobin and deoxyhemoglobin (oxygenated or deoxygenated blood) that can be detected on the basis of their differential magnetic susceptibility. Theory: In 1990, three papers published by Seiji Ogawa and colleagues showed that hemoglobin has different magnetic properties in its oxygenated and deoxygenated forms (deoxygenated hemoglobin is paramagnetic and oxygenated hemoglobin is diamagnetic), both of which could be detected using MRI. This leads to magnetic signal variation which can be detected using an MRI scanner. Given many repetitions of a thought, action or experience, statistical methods can be used to determine the areas of the brain which reliably have more of this difference as a result, and therefore which areas of the brain are most active during that thought, action or experience. Criticism and limitations: Although most fMRI research uses BOLD contrast imaging as a method to determine which parts of the brain are most active, because the signals are relative, and not individually quantitative, some question its rigor. Other methods which propose to measure neural activity directly have been attempted (for example, measurement of the Oxygen Extraction Fraction, or OEF, in regions of the brain, which measures how much of the oxyhemoglobin in the blood has been converted to deoxyhemoglobin), but because the electromagnetic fields created by an active or firing neuron are so weak, the signal-to-noise ratio is extremely low and statistical methods used to extract quantitative data have been largely unsuccessful so far. Criticism and limitations: The typical discarding of the low-frequency signals in BOLD-contrast imaging came into question in 1995, when it was observed that the "noise" in the area of the brain that controls right-hand movement fluctuated in unison with similar activity in the area on the opposite side of the brain associated with left-hand movement. BOLD-contrast imaging is only sensitive to differences between two brain states, so a new method, resting state fMRI, was needed to analyse these correlated fluctuations. History: The proof of concept of blood-oxygen-level-dependent contrast imaging was provided by Seiji Ogawa and colleagues in 1990, following an experiment which demonstrated that an in vivo change of blood oxygenation could be detected with MRI. In Ogawa's experiments, blood-oxygen-level-dependent contrast in images of rodent brains was examined while the composition of the breathing gas was varied. At high magnetic fields, water proton magnetic resonance images of the brains of live, anesthetized mice and rats were acquired with a gradient echo pulse sequence. The experiments showed that when the oxygen content of the breathing gas was changed gradually, the contrast of these images changed gradually as well. 
Ogawa proposed and demonstrated that the relative levels of oxyhemoglobin and deoxyhemoglobin were the major contributor to this difference. Other notable pioneers of BOLD fMRI include Kenneth Kwong and colleagues, who first used the technique in human participants in 1992.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Defeasibility (linguistics)** Defeasibility (linguistics): In the linguistic field of pragmatics, an inference is said to be defeasible or cancellable if it can be made to disappear by the addition of another statement, or an appropriate context. For example, sentence [i] would normally implicate [ii] by scalar implicature: i: Alice has three children. Defeasibility (linguistics): ii: Alice has exactly three children. But the implicature can be cancelled by the modification in [ib]: ib: Alice has three children, and possibly more. Whereas conversational implicatures and presuppositions may be cancelled, an entailment may not be. For example, [i] entails the proposition "Alice has at least three children", and this cannot be cancelled with a modification like: ic: Alice has three children, and possibly less. Explicit and contextual cancellation: Grice, the originator of the concept of implicature, draws a distinction between explicit and contextual cancellation. He calls an implicature p explicitly cancellable if it is possible to cancel it by adding a statement to the effect of "but not p" to the utterance which would otherwise implicate it. For example: There's beer in the fridge. But that's not to say I'm offering you any. An implicature is contextually cancellable if it can fail to manifest in a different context. For example, if Bob says "We have two spare bedrooms", this would normally implicate that his house has exactly two spare bedrooms. But this implicature disappears if Bob is speaking with Carole and Diane who are planning a visit to Bob's city and looking for a place to stay.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alarplasty** Alarplasty: Alarplasty (or, less commonly, alaplasty) is a plastic surgery procedure in which a wedge of the wing of the nose is removed in order to alter the shape of the nostrils. Alarplasty may be used to increase or decrease the width of the nostrils, for either cosmetic or functional reasons. In humans it may also make the nose perceptibly narrower. Temporary swelling is a common consequence of alarplasty.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded