**Zigzag code**
Zigzag code:
In coding theory, a zigzag code is a type of linear error-correcting code introduced by Ping, Huang & Phamdo (2001). Zigzag codes are defined by partitioning the input data into segments of fixed size and adding a sequence of check bits to the data, where each check bit is the exclusive or of the bits in a single segment and of the previous check bit in the sequence.
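The construction described above can be sketched in a few lines of code. The following Python encoder is a minimal illustration only; the function name and the choice of segment size are my own and are not notation from Ping, Huang & Phamdo.

```python
# Minimal illustrative sketch of the zigzag construction described above; the
# function name and segment size J are assumptions, not from the original paper.
def zigzag_encode(bits, J):
    """Append one check bit per J-bit segment of `bits`.

    Each check bit is the XOR of the bits in its segment and of the previous
    check bit in the sequence (taken as 0 before the first segment).
    """
    assert len(bits) % J == 0, "input must split evenly into segments of J bits"
    check = 0
    checks = []
    for i in range(0, len(bits), J):
        for b in bits[i:i + J]:
            check ^= b
        checks.append(check)
    return bits + checks  # J data bits per check bit, hence rate J/(J + 1)

# Example: two segments of J = 3 bits each
print(zigzag_encode([1, 0, 1, 1, 1, 0], 3))  # [1, 0, 1, 1, 1, 0, 0, 0]
```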
Zigzag code:
The code rate is high: J/(J + 1), where J is the number of bits per segment. Its ability to handle transmission errors is very limited in the worst case: it can only detect a single bit error and cannot correct any errors. However, it works better in the soft-decision model of decoding: its regular structure allows the task of finding a maximum-likelihood decoding or a posteriori probability decoding to be performed in constant time per input bit. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Solid stress**
Solid stress:
Solid stress, one of the physical hallmarks of cancer, is exerted by the solid components of a tissue and accumulates within solid structural components (i.e., cells, collagen, and hyaluronan) during growth and progression.
Solid stress:
Solid stress in tumors is a residual stress that is elevated because of abnormal tumor growth and resistance to growth from the surrounding normal tissues or from within the tumors. Solid stress, independent of the interstitial fluid pressure, induces hypoxia and impedes drug delivery by compressing blood vessels in tumors. Solid stress is heterogeneous in tumors with tensile stresses distributed more at the periphery of the tumor, and compressive stresses more at the tumor core. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gh0st RAT**
Gh0st RAT:
Gh0st RAT is a Trojan horse for the Windows platform that the operators of GhostNet used to hack into many sensitive computer networks. It is a cyber-spying computer program. The "RAT" part of the name refers to the software's ability to operate as a "Remote Administration Tool".
Gh0st RAT:
The GhostNet system disseminates malware to selected recipients via computer code attached to stolen emails and addresses, thereby expanding the network by allowing more computers to be infected. According to the Infowar Monitor (IWM), "GhostNet" infection causes computers to download a Trojan known as "Gh0st RAT" that allows attackers to gain complete, real-time control. Such a computer can be controlled or inspected by its hackers, and the software even has the ability to turn on the camera and audio-recording functions of an infected computer that has such capabilities, enabling monitors to see and hear what goes on in a room. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Weblate**
Weblate:
Weblate is an open-source, web-based translation tool with version control. It includes several hundred languages with basic definitions and allows more language definitions to be added. All definitions can be edited by the web community or by a defined set of people, and translation is further assisted by integrated machine translation services such as DeepL, Amazon Translate, or Google Translate.
Stated goals:
Weblate aims to facilitate web based translation with tight Git integration for a wide range of file formats, helping translators contribute without knowledge of Git workflow.
Translations closely follow development, as they are hosted within the same repository as the source code.
There is no plan for heavy conflict resolution, as it is argued these should primarily be handled on the Git side.
Project name:
The project's name is a portmanteau of words web and translate.
Notable uses:
These are some projects using Weblate: Godot Engine, FreePBX, OsmAnd, phpMyAdmin, Unknown Horizons, OpenPetra, Turris Omnia, Debian Handbook, LibreOffice and Collabora Online, Monero, openSUSE, Open Journal Systems, H5P, Kodi, CryptPad, and ParaView. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Applications of nanotechnology**
Applications of nanotechnology:
The applications of nanotechnology commonly incorporate industrial, medicinal, and energy uses. These include more durable construction materials, therapeutic drug delivery, and higher-density hydrogen fuel cells that are environmentally friendly. Because nanoparticles and nanodevices are highly versatile through modification of their physiochemical properties, they have found uses in nanoscale electronics, cancer treatments, vaccines, hydrogen fuel cells, and nanographene batteries.
Applications of nanotechnology:
Nanotechnology's use of smaller sized materials allows for adjustment of molecules and substances at the nanoscale level, which can further enhance the mechanical properties of materials or grant access to less physically accessible areas of the body.
Industrial applications:
Potential applications of carbon nanotubes Nanotubes can help with cancer treatment. They have been shown to be effective tumor killers in those with kidney or breast cancer. Multi-walled nanotubes are injected into a tumor and treated with a special type of laser that generates near-infrared radiation for around half a minute. These nanotubes vibrate in response to the laser, and heat is generated. When the tumor has been heated enough, the tumor cells begin to die. Processes like this one have been able to shrink kidney tumors by up to four-fifths. Ultrablack materials, made up of “forests” of carbon nanotubes, are important in space, where there is more light than is convenient to work with. Ultrablack material can be applied to camera and telescope systems to decrease the amount of light and allow for more detailed images to be captured. Nanotubes show promise in treating cardiovascular disease. They could play an important role in blood vessel cleanup. Theoretically, nanotubes with SHP1i molecules attached to them would signal macrophages to clean up plaque in blood vessels without destroying any healthy tissue. Researchers have tested this type of modified nanotube in mice with high amounts of plaque buildup; the mice that received the nanotube treatment showed statistically significant reductions in plaque buildup compared to the mice in the placebo group. Further research is needed for this treatment to be given to humans.
Industrial applications:
Nanotubes may be used in body armor for future soldiers. This type of armor would be very strong and highly effective at shielding soldiers’ bodies from projectiles and electromagnetic radiation. It is also possible that the nanotubes in the armor could play a role in monitoring soldiers’ condition.
Industrial applications:
Construction Nanotechnology's ability to observe and control the material world at a nanoscopic level can offer great potential for construction development. Nanotechnology can help improve the strength and durability of construction materials, including cement, steel, wood, and glass. By applying nanotechnology, materials can gain a range of new properties. The discovery of a highly ordered crystal nanostructure of amorphous C-S-H gel and the application of photocatalyst and coating technology result in a new generation of materials with properties like water resistance, self-cleaning, wear resistance, and corrosion protection. Among the new nanoengineered polymers, there are highly efficient superplasticizers for concrete and high-strength fibers with exceptional energy-absorbing capacity. Experts believe that nanotechnology remains in its exploration stage and has potential in improving conventional materials such as steel. Understanding the composite nanostructures of such materials and exploring nanomaterials' different applications may lead to the development of new materials with expanded properties, such as electrical conductivity as well as temperature-, moisture- and stress-sensing abilities. Due to the complexity of the equipment, nanomaterials have a high cost compared to conventional materials, meaning they are not likely to feature in high-volume building materials. In special cases, nanotechnology can help reduce costs for complicated problems, but in most cases the traditional method for construction remains more cost-efficient. With the improvement of manufacturing technologies, the costs of applying nanotechnology to construction have been decreasing over time and are expected to decrease further.
Industrial applications:
Nanoelectronics Nanoelectronics refers to the application of nanotechnology on electronic components. Nanoelectronics aims to improve the performance of electronic devices on displays and power consumption while shrinking them. Therefore, nanoelectronics can help reach the goal set up in Moore's law, which predicts the continued trend of scaling down in the size of integrated circuits.
Nanoelectronics is a multidisciplinary area composed of quantum physics, device analysis, system integration, and circuit analysis. Since the de Broglie wavelength in semiconductors may be on the order of 100 nm, quantum effects at this length scale become essential. The different device physics and novel quantum effects of electrons can lead to exciting applications.
Health applications:
Nanobiotechnology The terms nanobiotechnology and bionanotechnology refer to the combination of ideas, techniques, and sciences of biology and nanotechnology. More specifically, nanobiotechnology refers to the application of nanoscale objects for biotechnology, while bionanotechnology refers to the use of biological components in nanotechnology. The most prominent intersection of nanotechnology and biology is in the field of nanomedicine, where the use of nanoparticles and nanodevices has many clinical applications in delivering therapeutic drugs, monitoring health conditions, and diagnosing diseases. Because many of the biological processes in the human body occur at the cellular level, the small size of nanomaterials allows them to be used as tools that can easily circulate within the body and directly interact with intercellular and even intracellular environments. In addition, nanomaterials can have physiochemical properties that differ from their bulk form due to their size, allowing for varying chemical reactivities and diffusion effects that can be studied and changed for diversified applications.
Health applications:
A common application of nanomedicine is in therapeutic drug delivery, where nanoparticles containing drugs for therapeutic treatment of disease are introduced into the body and act as vessels that deliver the drugs to the targeted area. The nanoparticle vessels, which can be made of organic or synthetic components, can further be functionalized by adjusting their size, shape, surface charge, and surface attachments (proteins, coatings, polymers, etc.). The opportunity for functionalizing nanoparticles in such ways is especially beneficial when targeting areas of the body whose physiochemical properties prevent the intended drug from reaching the targeted area on its own; for example, some nanoparticles are able to bypass the blood–brain barrier to deliver therapeutic drugs to the brain. Nanoparticles have recently been used in cancer therapy treatments and vaccines. In vivo imaging is also a key part of nanomedicine, as nanoparticles can be used as contrast agents for common imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). The ability of nanoparticles to localize and circulate in specific cells, tissues, or organs through their design can provide high contrast that results in higher-sensitivity imaging, and thus can be applicable in studying pharmacokinetics or visual disease diagnosis.
Energy applications:
The energy applications of nanotechnology relate to using the small size of nanoparticles to store energy more efficiently. This promotes the use of renewable energy through green nanotechnology by generating, storing, and using energy without emitting harmful greenhouse gases such as carbon dioxide.
Energy applications:
Solar Cells Nanoparticles used in solar cells increase the amount of energy absorbed from sunlight. Solar cells are currently created from layers of silicon that absorb sunlight and convert it to usable electricity. By coating the silicon with noble metals such as gold, researchers have found that they are able to convert energy into electrical current more efficiently. Much of the energy lost during this conversion is due to heat; by using nanoparticles, less heat is emitted and more electricity is produced.
Energy applications:
Hydrogen Fuel Cells Nanotechnology is enabling the use of hydrogen energy at a much higher capacity. Hydrogen fuel cells, while they are not an energy source themselves, allow for storing energy from sunlight and other renewable sources in an environmentally-friendly fashion without any CO2 emissions. Some of the main drawbacks of traditional hydrogen fuel cells are that they are expensive and not durable enough for commercial uses. However, by using nanoparticles, both the durability and price over time improve significantly. Furthermore, conventional fuel cells are too large to be stored in volume, but researchers have discovered that nanoblades can store greater volumes of hydrogen that can then be saved inside carbon nanotubes for long-term storage.
Energy applications:
Nanographene Batteries Nanotechnology is giving rise to nanographene batteries that can store energy more efficiently and weigh less. Lithium-ion batteries have been the primary battery technology in electronics for the last decade, but the current limits of the technology make it difficult to increase energy density because of the potential dangers of heat and explosion. Graphene batteries being tested in experimental electric cars have promised capacities four times greater than current batteries at 77% lower cost. Additionally, graphene batteries provide stable life cycles of up to 250,000 cycles, which would give electric vehicles and other long-lived products a reliable energy source for decades. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Relative pitch**
Relative pitch:
Relative pitch is the ability of a person to identify or re-create a given musical note by comparing it to a reference note and identifying the interval between those two notes. For example, if the notes Do and Fa are played on a piano, a person with relative pitch would be able to identify the second note without looking, given that they know the first note is Do.
Detailed definition:
Relative pitch implies some or all of the following abilities: determine the distance of a musical note from a set point of reference, e.g. "three octaves above middle C"; identify the intervals between given tones, regardless of their relation to concert pitch (A = 440 Hz); correctly sing a melody by following musical notation, by pitching each note in the melody according to its distance from the previous note.
Detailed definition:
Hear a melody for the first time, then name the notes relative to a reference pitch. This last criterion, which applies not only to singers but also to instrumentalists who rely on their own skill to determine the precise pitch of the notes played (wind instruments, fretless string instruments like violin or viola, etc.), is an essential skill for musicians in order to play successfully with others. An example is the different concert pitches used by orchestras playing music from different styles (a baroque orchestra using period instruments might decide to use a higher-tuned pitch).
Detailed definition:
Compound intervals (intervals greater than an octave) can be more difficult to detect than simple intervals (intervals less than an octave). Interval recognition is used to identify chords, and can be applied to accurately tune an instrument with respect to a given reference tone, even when the tone is not in concert pitch.
Prevalence and training:
Unlike absolute pitch (sometimes called "perfect pitch"), relative pitch is quite common among musicians, especially musicians who are used to playing "by ear", and a precise relative pitch is a constant characteristic among good musicians.
Prevalence and training:
Unlike perfect pitch, relative pitch can be developed through ear training. Computer-aided ear training is becoming a popular tool for musicians and music students, and various software is available for improving relative pitch. Some music teachers teach their students relative pitch by having them associate each possible interval with the first two notes of a popular song. Another method of developing relative pitch is playing melodies by ear on a musical instrument, especially one that, unlike a piano or other keyboard or fretted instrument, requires a specific manual or blown adjustment for each particular tone.
Prevalence and training:
Indian musicians learn relative pitch by singing intervals over a drone, which Mathieu (1997) described in terms of occidental just intonation terminology. Many Western ear training classes use solfège to teach students relative pitch, while others use numerical sight-singing. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fortress (chess)**
Fortress (chess):
In chess, a fortress is an endgame drawing technique in which the side behind in material sets up a zone of protection that the opponent cannot penetrate. This might involve keeping the enemy king out of one's position, or a zone the enemy cannot force one out of (e.g. see the opposite-colored bishops example). An elementary fortress is a theoretically drawn (i.e. a book draw) position with reduced material in which a passive defense will maintain the draw. Fortresses commonly have the following characteristics: Useful pawn breakthroughs are not possible.
Fortress (chess):
If the stronger side has pawns, they are firmly blocked.
The stronger side's king cannot penetrate because it is either cut off or near the edge of the board.
Zugzwang positions cannot be forced because the defender has waiting moves available. Fortresses pose a problem for computer chess: computers fail to recognize fortress-type positions and continue to evaluate them as winning for the stronger side, even though they are unable to achieve the win.
Fortress in a corner:
Perhaps the most common type of fortress, often seen in endgames with only a few pieces on the board, is where the defending king is able to take refuge in a corner of the board and cannot be forced away or checkmated by the superior side. These two diagrams furnish two classic examples. In both cases, Black simply shuffles their king between a8 and the available square adjacent to a8 (a7, b7, or b8, depending on the position of the white king and pawn). White has no way to dislodge Black's king, and can do no better than a draw by stalemate or some other means.
Fortress in a corner:
Note that the bishop and wrong rook pawn ending (i.e. where the pawn is a rook pawn whose promotion square is the color opposite to that of the bishop) in the diagram is a draw even if the pawn is on the seventh rank or further back on the a-file. Heading for a bishop and wrong rook pawn ending is a fairly common drawing resource available to the inferior side. The knight and rook pawn position in the diagram, however, is a draw only if White's pawn is already on the seventh rank, making this drawing resource available to the defender much less frequently. White wins if the pawn is not yet on the seventh rank and is protected by the knight from behind. With the pawn on the seventh rank, Black has a stalemate defense with their king in the corner.
Fortress in a corner:
Example game: Serper vs. Nakamura, 2004 A fortress is often achieved by a sacrifice, such as of a piece for a pawn. In the game between Gregory Serper and Hikaru Nakamura, in the 2004 U.S. Chess Championship, White would lose after 1.Nd1 Kc4 or 1.Nh1 Be5 or 1.Ng4 Bg7. Instead he played 1. Nxe4! Kxe4 2. Kf1!, heading for h1. After another 10 moves the position in the following diagram was reached: Black has no way of forcing White's king away from the corner, so he played 12... Kf2, and after 13.h4 gxh4 the game was drawn by stalemate.
Fortress in a corner:
Back-rank defense The back-rank defense in some rook and pawn versus rook endgames is another type of fortress in a corner (see diagram). The defender perches their king on the pawn's queening square, and keeps their rook on the back rank (on the "long side" of the king, not, e.g., on h8 in the diagram position) to guard against horizontal checks. If 1.Rg7+ in the diagram position, Black heads into the corner with 1...Kh8! Note that this defense works only against rook pawns and knight pawns.
Fortress in a corner:
Rook vs. bishop In the ending of a rook versus a bishop, the defender can form a fortress in the "safe" corner—the corner that is not of the color on which the bishop resides (see diagram). White must release the potential stalemate, but they cannot improve their position.
1. Rc3 Ba2 2. Rc2 Bb3 3. Rc7 Bg8 Pawn and bishop In this position from de la Villa, White draws if their king does not leave the corner. It is also a draw if the bishop is on the other color, so it is not a case of the wrong bishop.
Rook and pawn versus queen:
In the diagram, Black draws by moving his rook back and forth between the d6- and f6-squares, or moves his king when checked, staying behind the rook and next to the pawn. This fortress works when all of these conditions are met: The pawn is still on its second rank.
The pawn is on files b through g.
The pawn is protecting its rook on the third rank.
The opposing king is beyond the defender's third rank.
The defending king protects its pawn. The white king is not able to cross the rank of the black rook, and the white queen is unable to do anything useful.
Rook and pawn versus queen:
1. Qd5+ Rd6 2. Qb5+ Kd8 3. Qb8+ Kd7 4. Qb5+ ½-½ Positions such as these (when the defending rook and king are near the pawn and the opposing king cannot attack from behind) are drawn when (see diagram): the pawn is on the c-, d-, e-, or f-file and on the second, sixth, or seventh rank; the pawn is anywhere on the b- or g-file; or the pawn is on the a- or h-file and on the third or seventh rank. Otherwise, the queen wins.
Rook and pawn versus queen:
Example from game In this position, with Black to move, Black can reach a drawing fortress. 1...b4 2. Kd6 Rc3 3. Kd7 and now 3...Ka3 and several other moves reach the fortress. In the actual game, Black made the weak move 3...Rd3? and lost.
Rook and pawn versus queen:
Similar example In this 1959 game between Whitaker and Ferriz, White sacrificed a rook for a knight in order to exchange a pair of pawns and reach this position, and announced that it was a draw because (1) the queen cannot mate alone, and (2) the black king and pawn cannot approach to help. However, endgame tablebase analysis shows Black to have a forced win in 19 moves starting with 50... Qc7+ (the only winning move), taking advantage of the fact that the rook is currently unprotected – again illustrating how tablebases are refining traditional endgame theory.
Rook and pawn versus queen:
Example with more pawns From the diagram, in Salov vs. Korchnoi, Wijk aan Zee 1997, White was able to hold a draw with a rook versus a queen, even with the sides having an equal number of pawns. He kept his rook on the fifth rank blocking in Black's king, and was careful not to lose his rook to a fork or allow a queen sacrifice for the rook in circumstances where that would win for Black. The players agreed to a draw after: 48. Kg2 Kg6 49. Rh5 Qe2+ 50. Kg3 Qf1 51. Kf4 Qe1 52. Rd5 Qc1+ 53. Kg3 Qc7+ 54. Kg2 Qf4 55. Rh5 Kf6 56. Rd5 Ke6 57. Rh5 Qd2+ 58. Kg3 f6 59. Rf5 Qc1 60. Rh5 Qg1+ 61. Kf4 Qe1 62. Rb5 Qc1+ 63. Kg3 Qg1+ 64. Kf4 Qh2+ 65. Ke3 Kf7 66. Rh5 Qg1+ 67. Kf4 Kg6 68. Rd5 Qh2+ 69. Ke3 Kf7 70. Rh5 Qg1+ 71. Kf4 Ke6 72. Rb5 Qh2+ 73. Ke3 Kd6 74. Rf5 Qb2 75. Rh5 Ke6 76. Kf4 Qc3 77. Kg3 Qc7+ 78. Kg2 Qf7 79. Rb5 Qe8 80. Rf5 Qg6 81. Rb5 ½–½
Opposite-colored bishops:
In endings with bishops of opposite colors (i.e. where one player has a bishop that moves on light squares, while the other player's bishop moves on dark squares), it is often possible to establish a fortress, and thus hold a draw, when one player is one, two, or occasionally even three pawns behind. A typical example is seen in the diagram. White, although three pawns behind, has established a drawing fortress, since Black has no way to contest White's stranglehold over the light squares. White simply keeps his bishop on the h3–c8 diagonal.
Opposite-colored bishops:
Example from game In an endgame with opposite-colored bishops, positional factors may be more important than material. In this position, Black sacrifices a pawn (leaving him three pawns down) to reach a fortress.
1... Kf5! 2. Kxf7 Bh5+ 3. Kg7 Bd1 4. Be7 ½-½ After 4...Be2 5.Kh6 Bd1 6.h5, Black just waits by playing 6...Be2.
Queen versus two minor pieces:
Here are drawing fortresses with two minor pieces versus a queen. Usually the defending side will not be able to get to one of these positions.
Queen versus two minor pieces:
Bishop and knight The bishop and knight fortress is another type of fortress in a corner. If necessary, the king can move to one of the squares adjacent to the corner, and the bishop can retreat to the corner. This gives the inferior side enough tempo moves to avoid zugzwang. For example: 1. Kb5 Ka7 2. Qd8 Ba8 3. Ka5 Bb7.
Queen versus two minor pieces:
Two bishops In the two bishops versus queen ending, the queen wins if the Lolli position is not reached, but some wins take up to seventy-one moves to either checkmate or win a bishop, so the fifty-move rule comes into play. From the diagram: 1. Qe7+ Kc8 2. Qe6+ Kb7 3. Qd6 Ba7 4. Qe7+ Kb6! 5. Qd8+ Kb7! 6. Ka5 Bc5! and White cannot prevent ...Bb6, which gets back to the Lolli position.
Queen versus two minor pieces:
Two knights In the two knights fortress, the knights are next to each other and their king should be between them and the attacking king. The defender must play accurately, though.
Queen versus two minor pieces:
There are several drawing positions with two knights against a queen. The best way is to have the knights adjacent to each other on a file or rank, with their king between them and the enemy king. This is not a true fortress since it is not static. The position of the knights may have to change depending on the opponent's moves. In this position (Lolli, 1763), 1. Qd1 Nd2+ 2. Ke2 Nb3 and Black has an ideal defensive position.
Queen versus two minor pieces:
If the knights cannot be adjacent to each other on a file or rank, the second best position is if they are next to each other diagonally (see diagram).
The third type of defensive formation is with the knights protecting each other, but this method is more risky.
Queen versus two minor pieces:
With pawns Sometimes the two minor pieces can achieve a fortress against a queen even where there are pawns on the board. In Ree-Hort, Wijk aan Zee 1986 (first diagram), Black had the material disadvantage of rook and bishop against a queen. Dvoretsky writes that Black would probably lose after the natural 1...Bf2+? 2.Kxf2 Rxh4 because of 3.Kg3 Rh7 4.Kf3, followed by a king march to c6, or 3.Qg7!? Rxf4+ 4.Kg3 Rg4+ 5.Kf3, threatening 6.Qf6 or 6.Qc7. Instead, Hort forced a draw with 1... Rxh4!! 2. Kxh4 Bd4! (imprisoning White's queen) 3. Kg3 Ke7 4.Kf3 Ba1 (second diagram), and the players agreed to a draw. White's queen has no moves, all of Black's pawns are protected, and his bishop will shuttle back and forth on the squares a1, b2, c3, and d4.
Knight versus a rook and pawn:
At the great New York City 1924 tournament, former world champion Emanuel Lasker was in trouble against his namesake Edward Lasker, but surprised everyone by discovering a new endgame fortress. Despite having only a knight for a rook and pawn, White draws by moving his knight back and forth between b2 and a4. Black's only real winning try is to get his king to c2. However, to do so Black has to move his king so far from the pawn that White can play Ka3–b2 and Nc5xb3, when the rook versus knight ending is an easy draw. The game concluded: 93. Nb2 Ke4 94. Na4 Kd4 95. Nb2 Rf3 96. Na4 Re3 97. Nb2 Ke4 98. Na4 Kf3 99. Ka3! Ke4 (if 99...Ke2, 100.Nc5 Kd2 101.Kb2! (101.Nxb3+?? Kc2 and Black wins) and 102.Nxb3 draws) 100. Kb4 Kd4 101. Nb2 Rh3 102. Na4 Kd3 103. Kxb3 Kd4+ ½–½
Bishop versus rook and bishop pawn on the sixth rank:
A bishop can make a fortress versus a rook and a bishop pawn on the sixth rank, if the bishop is on the color of the pawn's seventh-rank square and the defending king is in front of the pawn. In this position, White would win if he had gotten the king to the sixth rank ahead of the pawn. Black draws by keeping the bishop on the diagonal from a2 to e6, except when giving check. The bishop keeps the white king off e6 and checks him if he goes to g6, to drive him away. A possible continuation: 1... Ba2 2. Kf4 (2.f7 is an interesting attempt, but then Black plays 2...Kg7!, not 2...Bxf7?? when White wins by playing 3.Kf6, and then 3...Bxf7, with a draw; 2...Kg7 prevents 3.Kf6, which would win.)
Bishop versus rook and bishop pawn on the sixth rank:
2... Bc4 3. Kg5 Bd5!The only move to draw, since the bishop must be able to check the king if it goes to g6.
4. Rc7 Ba2! 5. Kg6 Bb1+! 6. Kh6 Ba2! 7. Ra7 (if 7.f7 Bxf7!: the pawn can be safely captured when the white king is on h6)
7... Bc4 Draw, because White cannot make progress.
Defense perimeter (pawn fortress):
A defense perimeter is a drawing technique in which the side behind in material or otherwise at a disadvantage sets up a perimeter, largely or wholly composed of a pawn chain, that the opponent cannot penetrate. Unlike other forms of fortress, a defense perimeter can often be set up in the middlegame with many pieces remaining on the board.
Defense perimeter (pawn fortress):
The position in the first diagram, a chess problem by W.E. Rudolph (La Strategie 1912), illustrates the defense perimeter. White already has a huge material disadvantage, but forces a draw by giving up his remaining pieces to establish an impenetrable defense perimeter with his pawns. White draws with 1. Ba4+! Kxa4 (1... Kc4?? 2. Bb3+! Kb5 3. c4+ Kc6 4. Ba4+!, forcing Rb5, wins for White) 2. b3+ Kb5 3. c4+ Kc6 4. d5+ Kd7 5. e6+! Kxd8 6. f5! (second diagram). Now Black is up two rooks and a bishop (normally an overwhelming material advantage) but has no hope of breaking through White's defense perimeter. The only winning attempts Black can make are to place his rooks on b5, c6, etc. and hope that White captures them. White draws by ignoring all such offers and simply shuffling his king about.
Defense perimeter (pawn fortress):
The above example may seem fanciful, but Black achieved a similar defense perimeter in Arshak Petrosian–Hazai, Schilde 1970 (first diagram) via a swindle. Black has a difficult endgame, since White can attack and win his a-pawn by force, and he has no counterplay. Black tried the extraordinary 45... Qb6!?, to which White replied with the obvious 46. Nxb6+? This is actually a critical mistake, enabling Black to establish an impenetrable fortress. White should have carried out his plan of winning Black's a-pawn, for example with 46.Qc1 (threatening 47.Nxb6+ cxb6 48.h4! gxh4 49.Qh1 and Qh3, winning) Qa7 47.Qd2 followed by Kb3, Nc3, Ka4, and Na2–c1–b3. 46... cxb6 Now Black threatens 47...h4, locking down the entire board with his pawns, so White tries to break the position open. 47. h4 gxh4 48. Qd2 h3! 49. gxh3 Otherwise 49...h2 draws. 49... h4! (second diagram) Black has established his fortress, and now can draw by simply moving his king around. The only way White could attempt to breach the fortress would be a queen sacrifice at some point (for example Qxa5 or Qxe5), but none of these give White winning chances as long as Black keeps his king near the center. The players shuffled their kings, and White's queen, around for six more moves before agreeing to a draw.
Defense perimeter (pawn fortress):
In Smirin-HIARCS, Smirin-Computers match 2002, the super-grandmaster looked to be in trouble against the computer, which has the bishop pair, can tie White's king down with ...g3, and threatens to invade with its king on the light squares. Smirin, however, saw that he could set up a fortress with his pawns. The game continued 46... g3 47. h3! A surprising move, giving Black a formidable protected passed pawn on the sixth rank, but it begins to build White's fortress, keeping Black's king out of g4. 47... Bc5 48. Bb4! Now Smirin gives HIARCS the choice between an opposite-colored bishops endgame (in which, moreover, White will play Be7 and win the h-pawn if Black's king comes to the center) and a bishop versus knight ending in which Smirin envisions a fortress. 48... Bxb4 49. axb4 Kf7 Black could try to prevent White's coming maneuver with 49...Bd3, but then White could play 50.Nf3 Kh5 (forced) 51.Nd4. 50. Nb5! Ke6 51. Nc3! Completing the fortress. Now Black's king has no way in, and his bishop can do nothing, since White's king can prevent ...Bf1, attacking White's only pawn on a light square. The game concluded: 51... Bc2 52. Kg2 Kd6 53. Kg1 Kc6 54. Kg2 b5 55. Kg1 Bd3 56. Kg2 Be4+ 57. Kg1 Bc2 58. Kg2 Bd3 59. Kg1 Be4 60. Kf1 ½–½
Other examples:
Here are some other drawing fortresses.
Fortresses against a bishop Fortresses against a knight Fortresses against a rook
Semi-fortress in two bishops vs. knight:
The endgame of two bishops versus a knight was thought to be a draw for more than one hundred years. It was known that the temporary defensive fortress in this position could be broken down after a number of moves, but it was assumed that the fortress could be reformed in another corner. Computer endgame tablebases show that the bishops generally win, but it takes up to 66 moves. It takes several moves to force Black out of the temporary fortress in the corner; then precise play with the bishops prevents Black from forming the fortress in another corner. The position in the diagram was thought to be a draw by Kling and Horwitz but computer analysis shows that White wins in 45 moves (either by checkmate or by winning the knight). All of the long wins in this endgame go through this type of semi-fortress position.
Semi-fortress in two bishops vs. knight:
This game between József Pintér and David Bronstein demonstrates the human play of the endgame. The defender has two ideas: (1) keep the king off the edge of the board and (2) keep the knight close to the king. White reaches the semi-fortress after 71. Nb2!, which falls after 75... Kb5!. White gets to a semi-fortress again in another corner after 90. Ng2+. After 100. Ke3 White cannot hold that semi-fortress any longer, but forms one in another corner after 112. Nb7!. On move 117 White claimed a draw by the fifty move rule.
Positional draw:
A "positional draw" is a concept most commonly used in endgame studies and describes an impasse other than stalemate. It usually involves the repetition of moves in which neither side can make progress or safely deviate. Typically a material advantage is balanced by a positional advantage. Fortresses and perpetual check are examples of positional draws. Sometimes they salvage a draw from a position that seems hopeless because of a material deficit. Grandmaster John Nunn describes a positional draw as a position in which one side has enough material to normally win and he is not under direct attack, but some special feature of the position (often a blockade) prevents him from winning.A simple example is shown in the game between Lajos Portisch and Lubomir Kavalek. White could have won easily with 1.Be1 Kc6 2.b4. However, play continued 1. b4? Nb8 2. b5 Nc6+! The only way to avoid the threatened 3...Nxa5 is 3.bxc6 Kxc6, but the resultant position is a draw because the bishop is on the wrong color to be able to force the rook pawn's promotion (see above, wrong bishop, and wrong rook pawn).
Positional draw:
Luděk Pachman cites the endgame position in the diagram as a simple example of a positional draw. White on move simply plays waiting moves with the bishop (Bb1–c2–d3). As for Black, "If he is unwilling to allow the transition to the drawn ending of Rook versus Bishop, nothing else remains for him but to move his Rook at [e5] continuously up and down the [e-file]." Pachman explains, "The indecisive result here contradicts the principles concerning the value of the pieces and is caused by the bad position of the black pieces (pinned rook at [e4]).".
Positional draw:
This position from a game between Mikhail Botvinnik and Paul Keres in the 1951 USSR Championship is drawn because the black king cannot get free and the rook must stay on the c-file. The players agreed to a draw four moves later.
The first diagram shows a position from a game between former World Champion Mikhail Tal and future World Champion Bobby Fischer from the 1962 Candidates Tournament in Curaçao. After 41 moves Tal had the advantage but Fischer sacrificed the exchange (a rook for a knight). The game was drawn on the 58th move.
In this position from a game between Pal Benko and International Master Jay Bonin, White realized that the blockade cannot be broken and the game is a draw despite the extra material.
Positional draw:
The position looks lost for White, as he cannot stop the h-pawn from queening, but he does have a defence which seems to defy the rules of logic. White will calmly construct a "fortress" which will hide his pieces from attack. The only weakness in White's "fortress" is the g-pawn. This pawn has to be defended by the bishop and the only square where this can be done safely is from h6.
Positional draw:
1. Bf6! White threatens to stop the advance of the h-pawn with ...Be5+; building the fortress immediately does not work: 1.f6? h2 2.Kf8 h1=Q 3.Kg7 (3.Kg8 Qg2 4.Bf8 Qa8 5.Kg7 Kd7 6.Kg8 Ke6 7.Kg7 Kf5 8.Kg8 Bb3 9.Kg7 Qh1−+) 3...Kd7 4.Bb4 Ke6 5.Bd2 Kf5 6.Be3 Qf3 7.Bd2 Qe2 8.Bc1 Qd1 9.Be3 Qd3 10.Bc1 Qc3−+. 1... Kd6 2. Be7+ (2.fxg6? destroys the fortress: 2...fxg6 3.Be7+ Kc6−+) 2... Kc6 (after 2...Ke5, White draws without a fortress: 3.fxg6 fxg6 4.Bd8 Kd6 5.Nf6! h2 6.Ne4+ Ke6 7.Nf2 Bd5 8.Bf6 h1=Q 9.Nxh1 Bxh1=) 3. f6! Chess computer programs have difficulty assessing "fortress" positions because the normal values of the pieces do not apply. White has achieved the closing of the long diagonal a8–h1; the only way for Black to avoid this would be to repeat moves. Now White can build his "fortress" without worrying about the queen getting to the back rank via the long diagonal.
Positional draw:
3...h2 4. Bf8! h1=Q 5. Bh6! with the idea of 6.Kf8 and 7.Kg7. White will be safe behind the barrier of pawns. It is a positional draw. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SelTrac**
SelTrac:
SelTrac is a digital railway signalling technology used to automatically control the movements of rail vehicles. It was the first fully automatic moving-block signalling system to be commercially implemented.
SelTrac:
SelTrac was originally developed in the 1970s by Standard Elektrik Lorenz (the "SEL" in the name) of Germany for the Krauss-Maffei Transurban, an automated guideway transit system proposed for the GO-Urban network in the Greater Toronto Area in Canada. Although the GO-Urban project failed, the Transurban efforts were taken over by an Ontario consortium led by the Urban Transportation Development Corporation (UTDC), and adapted to become its Intermediate Capacity Transit System (ICTS). This technology was first used on the SkyTrain network in Vancouver, British Columbia and the Scarborough RT in Toronto, Ontario.
SelTrac:
SelTrac was primarily sold and developed by Alcatel through a subsidiary. SelTrac is now sold by Thales from its Canadian unit, after Thales purchased many of Alcatel's non-telephony assets.
Description:
SelTrac uses the twisted-loop concept developed by Siemens in the 1950s. Data communication is provided by either a low-frequency inductive loop or a high-bandwidth, open-standards wireless system incorporating spread-spectrum radio technology.
Installations:
SelTrac is installed in many railways around the world, including the following: Ankara Metro Line M1 1997 SelTrac CBTC DTO Beijing Subway Line 4 2009 SelTrac CBTC/R (radio) ATO with Attendant Busan–Gimhae Light Rail Transit 2011 SelTrac CBTC/R UTO Detroit People Mover 1987 SelTrac CBTC UTO Dubai Metro – Red and Green Lines 2009/2011 SelTrac CBTC UTO Guangzhou Metro Line 3 2009/10 SelTrac CBTC DTO Line 9 2017 SelTrac CBTC DTO Line 14 2017 SelTrac CBTC DTO Line 21 2017 SelTrac CBTC DTO Hong Kong International Airport APM 2014/15 SelTrac CBTC/R UTO Hong Kong MTR Tuen Ma line 2003 SelTrac CBTC DTO West Rail line 2003 Ma On Shan line 2004 Kowloon Southern Link 2009 Tuen Ma line (Phase 1) 2020 Tuen Ma line (full line) 2021 Disneyland Resort line 2005 SelTrac CBTC/R UTO Hyderabad Metro Rail (Lines 1, 2, 3) 2017 SelTrac CBTC/R STO Incheon Subway Line 2 2014 SelTrac CBTC/R UTO Istanbul Metro M4 Kadikoy-Kartal Line 2012 SelTrac CBTC STO Jacksonville Skyway ASE 1998 SelTrac CBTC UTO John F. Kennedy International Airport AirTrain JFK APM 2003 SelTrac CBTC UTO Kuala Lumpur Rapid Rail Kelana Jaya Line 1998/2014 SelTrac CBTC UTO Ampang Line 2015 SelTrac CBTC/R DTO/UTO Las Vegas Monorail 2004 SelTrac CBTC/R UTO London Docklands Light Railway 1995 SelTrac CBTC DTO Lewisham Extension 1999 London City Airport Extension 2005 Woolwich Arsenal Extension 2009 Stratford Extension 2011 London Underground Jubilee line 2011 SelTrac TBTC STO Northern line 2014 SelTrac TBTC STO Metropolitan, District, Circle, Hammersmith & City 2020 SelTrac CBTC/R STO Mecca Metro 2011 SelTrac CBTC UTO Newark Liberty International Airport AirTrain Newark 1996/2001 SelTrac CBTC UTO New York City Subway BMT Canarsie Line – Phase III 2006 Interoperability Program SelTrac CBTC/R STO IRT Flushing Line 2016 SelTrac CBTC/R STO Ottawa O-Train Confederation Line 2018 SelTrac CBTC/R STO Paris Metro Line 13 – Ouragan 2012 SelTrac CBTC/R DTO SFMTA Muni Metro (Market Street Subway) 1997 SelTrac CBTC DTO Seoul Metropolitan Subway Shinbundang Line 2011 SelTrac CBTC/R UTO Shanghai Metro Line 6 2011 SelTrac CBTC/R STO Line 8 2011 SelTrac CBTC/R STO Line 9 2011 SelTrac CBTC/R STO Line 7 2010 SelTrac CBTC/R STO Line 11 2010 SelTrac CBTC/R STO Singapore MRT North South line 2017/2018 SelTrac CBTC/R UTO East West line 2017/2018 SelTrac CBTC/R UTO Tampa International Airport APM 1992 SelTrac CBTC UTO Toronto rapid transit 2008/10 SelTrac Speed/Signal Safeguard Line 3 Scarborough 1985 SelTrac CBTC DTO Toronto streetcar electronic track switching system Vancouver TransLink SkyTrain Vancouver SkyTrain - Expo Line 1985 SelTrac CBTC UTO Vancouver SkyTrain - Millennium Line 2002 SelTrac CBTC UTO Vancouver SkyTrain - Canada Line 2009 SelTrac CBTC UTO Vancouver SkyTrain - Evergreen Line 2016 SelTrac CBTC UTO Walt Disney World Monorail 1989 SelTrac ATP Disney/TGI Washington Dulles Airport AeroTrain APM 2009 SelTrac CBTC/R UTO Wuhan Metro Line 1 2004/10 SelTrac CBTC DTO
SelTrac Incidents:
2019 MTR Tsuen Wan Line CBTC accident - no fatalities, but two MTR staff were sent to hospital for observation; 2017 Joo Koon rail accident - no fatalities, but 38 injuries, including 2 SMRT staff | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Null result**
Null result:
In science, a null result is a result without the expected content: that is, the proposed result is absent. It is an experimental outcome which does not show an otherwise expected effect. This does not imply a result of zero or nothing, simply a result that does not support the hypothesis. In statistical hypothesis testing, a null result occurs when an experimental result is not significantly different from what is to be expected under the null hypothesis; its probability (under the null hypothesis) is not below the significance level, i.e., the threshold set prior to testing for rejection of the null hypothesis. The significance level varies, but common choices include 0.10, 0.05, and 0.01. As an example in physics, the results of the Michelson–Morley experiment were of this type, as it did not detect the expected velocity relative to the postulated luminiferous aether. This experiment's famous failed detection, commonly referred to as the null result, contributed to the development of special relativity. The experiment did appear to measure a non-zero "drift", but the value was far too small to account for the theoretically expected results; it is generally thought to be inside the noise level of the experiment.
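As a toy illustration of the hypothesis-testing language above (the sample values, the use of SciPy, and the 0.05 threshold are my own illustrative choices, not from the text), a comparison whose p-value lands above the chosen significance level yields a null result:

```python
# Illustrative only: two made-up samples compared with a two-sample t-test.
from scipy import stats

control = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
treatment = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3]

alpha = 0.05  # significance level fixed before looking at the data
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: statistically significant, reject H0")
else:
    print(f"p = {p_value:.3f} >= {alpha}: null result, fail to reject H0")
```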
Publishing bias:
Despite similar quality of execution and design, papers with statistically significant results are three times more likely to be published than those with null results. This unduly motivates researchers to manipulate their practices to ensure statistically significant results, such as by data dredging. Many factors contribute to publication bias. For instance, once a scientific finding is well established, it may become newsworthy to publish reliable papers that fail to reject the null hypothesis. Most commonly, investigators simply decline to submit results, leading to non-response bias. Investigators may also assume they made a mistake, find that the null result fails to support a known finding, lose interest in the topic, or anticipate that others will be uninterested in the null results. There are several scientific journals dedicated to the publication of negative or null results, including the following: Journal of Negative Results in Biomedicine (defunct), Journal of Pharmaceutical Negative Results, and the Journal of Unsolved Questions. While it is not exclusively dedicated to publishing negative results, BMC Research Notes also publishes negative results in the form of research or data notes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thought and World**
Thought and World:
Thought and World: An Austere Portrayal of Truth, Reference, and Semantic Correspondence is a 2002 book by Christopher S. Hill in which he presents a theory of the content of semantic notions that are applied to thoughts.
Reception:
The book has been reviewed by Keith Simmons, Anil Gupta and Marian David. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**UTC+04:00**
UTC+04:00:
UTC+04:00 is an identifier for a time offset from UTC of +04:00. In ISO 8601, the associated time would be written as 2019-02-07T23:28:34+04:00. This time is used in:
As standard time (year-round):
Principal cities: Abu Dhabi, Dubai, Baku, Tbilisi, Yerevan, Samara, Muscat, Port Louis, Victoria, Saint-Denis, Stepanakert Europe Eastern Europe Russia – Samara Time Southern Federal District Astrakhan Oblast Volga Federal District Samara Oblast Saratov Oblast Udmurtia Ulyanovsk Oblast South Caucasus Armenia – Armenia Time (used DST in 1981–2012) Including Artsakh Azerbaijan – Azerbaijan Time (used DST in 1981–2016) Georgia – Georgia Time Except Abkhazia and South Ossetia Georgia moved from zone UTC+04:00 to UTC+03:00 on June 27, 2004, then back to UTC+04:00 on March 27, 2005.
As standard time (year-round):
Asia Middle East Oman – Time in Oman United Arab Emirates – United Arab Emirates Standard Time Africa France French Southern and Antarctic Lands Crozet Islands Scattered Islands in the Indian Ocean Glorioso Islands and Tromelin Island Réunion Mauritius – Mauritius Time Mauritius tried DST in 2008 but decided not to continue Seychelles – Seychelles Time
Discrepancies between official UTC+04:00 and geographical UTC+04:00:
Areas in UTC+04:00 longitudes using other time zones Using UTC+03:00 Yemen Socotra, the largest island in the Socotra Archipelago The easternmost part of Al-Mahrah Saudi Arabia The easternmost part of Syarqiyah Russia Most of Franz Josef Land, Yuzhny Island, and most of Severny Island (with an exception of the very east) Some parts of the Russian mainland (Komi Republic, Nenets Autonomous Okrug, east of Kirov Oblast and Tatarstan)Using UTC+03:30 Most parts of IranUsing UTC+04:30 Western parts of AfghanistanUsing UTC+05:00 Turkmenistan Kazakhstan Aktobe Kyzylorda Parts of Mangystau, Atyrau, and West Kazakhstan Uzbekistan Most parts of the country, including Samarkand Pakistan Western parts, including Karachi Russia Bashkortostan, Orenburg Oblast, Perm Krai, most parts of the Ural Federal DistrictUsing UTC+06:00 Kazakhstan Kostanay A smaller parts of Turkistan Western parts of Karaganda, Akmola, and North Kazakhstan Areas outside UTC+04:00 longitudes using UTC+04:00 time Areas between 37°30' E and 52°30' E ("physical" UTC+03:00) Caucasus region: Georgia, excluding Abkhazia and South Ossetia Armenia and Artsakh Azerbaijan Russia, with parts of its territories: Astrakhan, Samara, Saratov and Ulyanovsk (with an exception of the very east) Western half of Udmurtia United Arab Emirates The westernmost region of the Emirate of Abu Dhabi Seychelles Aldabra Group Cosmoledo Atoll Farquhar Group French Southern and Antarctic Lands Crozet Islands | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
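The ISO 8601 representation quoted at the top of this entry can be reproduced with a fixed-offset timezone. A minimal Python sketch follows; the variable name is illustrative and the date is simply the one used in this entry.

```python
# Illustrative sketch: writing the UTC+04:00 timestamp quoted above in ISO 8601.
from datetime import datetime, timedelta, timezone

plus_four = timezone(timedelta(hours=4))        # fixed offset, no DST
t = datetime(2019, 2, 7, 23, 28, 34, tzinfo=plus_four)
print(t.isoformat())                            # 2019-02-07T23:28:34+04:00
print(t.astimezone(timezone.utc).isoformat())   # 2019-02-07T19:28:34+00:00
```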
**Corel Photo House**
Corel Photo House:
Corel Photo House is a discontinued raster graphics editor, replaced by Corel Photo-Paint. Corel Photo House was sometimes distributed free with image scanners such as the HP Scanjet. It saved images in the proprietary CPS image file format, which is not supported by Paint Shop Pro or Corel Photo-Paint. Corel Photo House was a photo-editing and bitmap-creation program that made it easy to touch up photographs, add text and special effects, or create bitmap images. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Entrance**
Entrance:
Entrance generally refers to a place of entering, such as a gate, door, or road, or to the permission to enter.
Entrance:
Entrance may also refer to: Entrance (album), a 1970 album by Edgar Winter; Entrance (display manager), a login manager for the X window manager; Entrance (liturgical), a kind of liturgical procession in the Eastern Orthodox tradition; Entrance (musician), born Guy Blakeslee; Entrance (film), a 2011 film; Entrance, Alberta, a community in Canada; The Entrance, New South Wales, a suburb in Central Coast, New South Wales, Australia; "Entrance" (Dimmu Borgir song), from the 1997 album Enthrone Darkness Triumphant; Entry (cards), a card that wins a trick to which another player made the lead, as in the card game contract bridge; N-Trance, a British electronic music group formed in 1990; University and college admissions; Entrance Hall; Entryway | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Asymptomatic inflammatory prostatitis**
Asymptomatic inflammatory prostatitis:
Asymptomatic inflammatory prostatitis is a painless inflammation of the prostate gland where there is no evidence of infection. It should be distinguished from the other categories of prostatitis characterised by either pelvic pain or evidence of infection, such as chronic bacterial prostatitis, acute bacterial prostatitis and chronic pelvic pain syndrome (CPPS). It is a common finding in men with benign prostatic hyperplasia.
Signs and symptoms:
These patients have no history of genitourinary pain complaints, but leukocytosis is noted, usually during evaluation for other conditions.
Diagnosis:
Diagnosis is through tests of semen, expressed prostatic secretion (EPS) or prostate tissue that reveal inflammation in the absence of symptoms.
Treatment:
No treatment is required. It is standard practice for men with infertility and category IV prostatitis to be given a trial of antibiotics and/or anti-inflammatories, although evidence of efficacy is weak. Since signs of asymptomatic prostatic inflammation may sometimes be associated with prostate cancer, this can be addressed by tests that assess the ratio of free-to-total PSA. The results of these tests were significantly different in prostate cancer and category IV prostatitis in one study. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**British Standard Fine**
British Standard Fine:
British Standard Fine (BSF) is a screw thread form, used as a fine-pitch alternative to the British Standard Whitworth (BSW) thread.
It was used for steel bolts and nuts on and in much of Britain's machinery, including cars, prior to adoption of Unified, and later Metric, standards. For highly stressed conditions, especially in motorcycles, a finer thread, British Standard Cycle (BSC), was used as well.
British Standard Fine:
BSF was developed by R. E. B. Crompton and his assistant George Field. BSF threads use the 55 degree Whitworth thread form. It was introduced by the British Engineering Standards Association in 1908. The table provides BSF sizes, the threads per inch and spanner jaw sizes. The BSC column indicates where BSF and BSC threads match. The table shows suitable tapping drill sizes. Uncommon sizes are shown in italics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Possibility theory**
Possibility theory:
Possibility theory is a mathematical theory for dealing with certain types of uncertainty and is an alternative to probability theory. It uses measures of possibility and necessity between 0 and 1, ranging from impossible to possible and unnecessary to necessary, respectively. Professor Lotfi Zadeh first introduced possibility theory in 1978 as an extension of his theory of fuzzy sets and fuzzy logic. Didier Dubois and Henri Prade further contributed to its development. Earlier, in the 1950s, economist G. L. S. Shackle proposed the min/max algebra to describe degrees of potential surprise.
Formalization of possibility:
For simplicity, assume that the universe of discourse Ω is a finite set. A possibility measure is a function pos from 2^Ω to [0, 1] such that: Axiom 1: pos(∅) = 0; Axiom 2: pos(Ω) = 1; Axiom 3: pos(U ∪ V) = max(pos(U), pos(V)) for any disjoint subsets U and V. It follows that, like probability on finite probability spaces, the possibility measure is determined by its behavior on singletons: pos(U) = max_{ω ∈ U} pos({ω}).
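For a finite Ω the axioms translate directly into code. The sketch below is a minimal illustration in Python; the outcome names and possibility values are arbitrary example choices, not from the text.

```python
# Minimal sketch for a finite universe: pos is determined by its values on singletons.
def pos(event, singleton_pos):
    """Possibility of an event (a set of outcomes): the max over its singletons."""
    return max((singleton_pos[w] for w in event), default=0.0)  # pos(empty set) = 0

singleton_pos = {"rain": 1.0, "snow": 0.25, "sun": 0.75}  # Axiom 2: some outcome has pos 1

U, V = {"rain", "snow"}, {"sun"}  # disjoint events
# Axiom 3: pos(U ∪ V) = max(pos(U), pos(V)) for disjoint U and V
assert pos(U | V, singleton_pos) == max(pos(U, singleton_pos), pos(V, singleton_pos))
print(pos(U, singleton_pos), pos(set(), singleton_pos))  # 1.0 0.0
```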
Formalization of possibility:
Axiom 1 can be interpreted as the assumption that Ω is an exhaustive description of future states of the world, because it means that no belief weight is given to elements outside Ω.
Axiom 2 could be interpreted as the assumption that the evidence from which pos was constructed is free of any contradiction. Technically, it implies that there is at least one element in Ω with possibility 1.
Formalization of possibility:
Axiom 3 corresponds to the additivity axiom in probabilities. However, there is an important practical difference. Possibility theory is computationally more convenient because Axioms 1–3 imply that pos(U ∪ V) = max(pos(U), pos(V)) for any subsets U and V. Because one can know the possibility of the union from the possibility of each component, it can be said that possibility is compositional with respect to the union operator. Note however that it is not compositional with respect to the intersection operator. Generally: pos(U ∩ V) ≤ min(pos(U), pos(V)) ≤ max(pos(U), pos(V)) = pos(U ∪ V).
Formalization of possibility:
When Ω is not finite, Axiom 3 can be replaced by: for all index sets I, if the subsets U_i, i ∈ I, are pairwise disjoint, then pos(⋃_{i ∈ I} U_i) = sup_{i ∈ I} pos(U_i).
Necessity:
Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, possibility theory uses two concepts, the possibility and the necessity of the event. For any set U, the necessity measure is defined by nec(U) = 1 − pos(Ū). In the above formula, Ū denotes the complement of U, that is, the elements of Ω that do not belong to U. It is straightforward to show that nec(U) ≤ pos(U) for any U, and that nec(U ∩ V) = min(nec(U), nec(V)). Note that contrary to probability theory, possibility is not self-dual. That is, for any event U, we only have the inequality pos(U) + pos(Ū) ≥ 1. However, the following duality rule holds: for any event U, either pos(U) = 1 or nec(U) = 0. Accordingly, beliefs about an event can be represented by a number and a bit.
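Continuing the small illustrative sketch from the formalization section (same example names and values, and the pos() helper defined there), necessity can be computed as the dual of possibility and the stated relations checked directly:

```python
# Continues the earlier sketch: nec(U) = 1 - pos(complement of U).
def nec(event, singleton_pos):
    complement = set(singleton_pos) - set(event)
    return 1.0 - pos(complement, singleton_pos)

U = {"rain", "snow"}
print(nec(U, singleton_pos))                                   # 1 - pos({"sun"}) = 0.25
assert nec(U, singleton_pos) <= pos(U, singleton_pos)          # nec(U) <= pos(U)
U_bar = set(singleton_pos) - U
assert pos(U, singleton_pos) + pos(U_bar, singleton_pos) >= 1.0    # not self-dual
assert pos(U, singleton_pos) == 1.0 or nec(U, singleton_pos) == 0.0  # duality rule
```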
Interpretation:
There are four cases that can be interpreted as follows: nec(U) = 1 means that U is necessary; U is certainly true. It implies that pos(U) = 1. pos(U) = 0 means that U is impossible; U is certainly false. It implies that nec(U) = 0. pos(U) = 1 means that U is possible; I would not be surprised at all if U occurs. It leaves nec(U) unconstrained.
Interpretation:
nec(U) = 0 means that U is unnecessary; I would not be surprised at all if U does not occur. It leaves pos(U) unconstrained.
The intersection of the last two cases is nec(U) = 0 and pos(U) = 1, meaning that I believe nothing at all about U. Because it allows for indeterminacy like this, possibility theory relates to the graduation of a many-valued logic, such as intuitionistic logic, rather than the classical two-valued logic.
Note that unlike possibility, fuzzy logic is compositional with respect to both the union and the intersection operator. The relationship with fuzzy theory can be explained with the following classic example.
Fuzzy logic: When a bottle is half full, it can be said that the level of truth of the proposition "The bottle is full" is 0.5. The word "full" is seen as a fuzzy predicate describing the amount of liquid in the bottle.
Interpretation:
Possibility theory: There is one bottle, either completely full or totally empty. The proposition "the possibility level that the bottle is full is 0.5" describes a degree of belief. One way to interpret 0.5 in that proposition is to define its meaning as: I am ready to bet that it's empty as long as the odds are even (1:1) or better, and I would not bet at any rate that it's full.
Possibility theory as an imprecise probability theory:
There is an extensive formal correspondence between probability and possibility theories, where the addition operator corresponds to the maximum operator.
A possibility measure can be seen as a consonant plausibility measure in the Dempster–Shafer theory of evidence. The operators of possibility theory can be seen as a hyper-cautious version of the operators of the transferable belief model, a modern development of the theory of evidence.
Possibility can be seen as an upper probability: any possibility distribution defines a unique credal set, the set of admissible probability distributions K = {P : P(S) ≤ pos(S) for all S}.
This allows one to study possibility theory using the tools of imprecise probabilities.
Necessity logic:
We call generalized possibility every function satisfying Axiom 1 and Axiom 3. We call generalized necessity the dual of a generalized possibility. The generalized necessities are related to a very simple and interesting fuzzy logic called necessity logic. In the deduction apparatus of necessity logic the logical axioms are the usual classical tautologies. Also, there is only a fuzzy inference rule extending the usual modus ponens. Such a rule says that if α and α → β are proved at degree λ and μ, respectively, then we can assert β at degree min{λ,μ}. It is easy to see that the theories of such a logic are the generalized necessities and that the completely consistent theories coincide with the necessities (see for example Gerla 2001). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
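The graded inference rule above is simple enough to mimic directly in code; the fragment below is only an illustrative sketch, with invented proposition names and proof degrees, showing how proof degrees propagate through the fuzzy modus ponens via min.

```python
# Illustrative sketch of the graded inference rule of necessity logic:
# if alpha is proved at degree lam and alpha -> beta at degree mu,
# then beta may be asserted at degree min(lam, mu).
proved = {"alpha": 0.8, "alpha -> beta": 0.6}   # invented proof degrees

def graded_modus_ponens(premise: str, implication: str) -> float:
    return min(proved[premise], proved[implication])

proved["beta"] = graded_modus_ponens("alpha", "alpha -> beta")
print(proved["beta"])   # 0.6
```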
**Type II sensory fiber**
Type II sensory fiber:
Type II sensory fiber (group Aβ) is a type of sensory fiber, the second of the two main groups of touch receptors. The responses of different type Aβ fibers to these stimuli can be subdivided based on their adaptation properties, traditionally into rapidly adapting (RA) or slowly adapting (SA) neurons. Type II sensory fibers are slowly adapting (SA), meaning that even when there is no change in touch, they keep responding to stimuli and firing action potentials. In the body, Type II sensory fibers belong to pseudounipolar neurons. The most notable examples are neurons with Merkel cell-neurite complexes on their dendrites (which sense static touch) and Ruffini endings (which sense stretch on the skin and over-extension inside joints). Under pathological conditions they may become hyper-excitable, so that stimuli which would usually elicit sensations of tactile touch instead cause pain. These changes are in part induced by PGE2, which is produced by COX1, and type II fibers with free nerve endings are likely to be the subdivision of fibers that carry out this function. Type II sensory fiber (group Aα) is another type of sensory fiber, which participates in the sensation of body position (proprioception). Each muscle contains 10–100 tiny muscle-like pockets called muscle spindles. The type II fibers (also known as secondary fibers) connect to nuclear chain fibers and static nuclear bag fibers in muscle spindles, but not to dynamic nuclear bag fibers. The typical innervation of a muscle spindle consists of one type Ia fiber and two type II fibers. The type Ia fiber has "anulospiral" endings around the middle parts of the intrafusal fibers, whereas type II fibers have "flower spray" endings, which may be spray shaped or annular, spreading in narrow bands on both sides of the chain or bag fiber. It is thought that the Ia fibers signal the degree of change in muscle movement, and the type II fibers signal the length of the muscle (which is later used for forming the perception of the body in space). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tau Arietis**
Tau Arietis:
The Bayer designation Tau Arietis (τ Ari, τ Arietis) is shared by two star systems in the constellation Aries: τ1 Arietis and τ2 Arietis. They are separated by 0.54°. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nichols radiometer**
Nichols radiometer:
A Nichols radiometer was the apparatus used by Ernest Fox Nichols and Gordon Ferrie Hull in 1901 for the measurement of radiation pressure. It consisted of a pair of small silvered glass mirrors suspended in the manner of a torsion balance by a fine quartz fibre within an enclosure in which the air pressure could be regulated. The torsion head to which the fiber was attached could be turned from the outside using a magnet. A beam of light was directed first on one mirror and then on the other, and the opposite deflections observed with mirror and scale. By turning the mirror system around to receive the light on the unsilvered side, the influence of the air in the enclosure could be ascertained. This influence was found to be of almost negligible value at an air pressure of about 16 mmHg (2.1 kPa). The radiant energy of the incident beam was deduced from its heating effect upon a small blackened silver disk, which was found to be more reliable than the bolometer when it was first used. With this apparatus, the experimenters were able to obtain an agreement between observed and computed radiation pressures within about 0.6%. The original apparatus is at the Smithsonian Institution. This apparatus is sometimes confused with the Crookes radiometer of 1873. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**SNDCP**
SNDCP:
SNDCP, Sub Network Dependent Convergence Protocol, is part of layer 3 of a GPRS protocol specification. SNDCP interfaces to the Internet Protocol at the top, and to the GPRS-specific Logical Link Control (LLC) protocol at the bottom.
In the spirit of the GPRS specifications, there can be many implementations of SNDCP, supporting protocols such as X.25. However, in reality, IP (Internet Protocol) is such an overwhelming standard that X.25 has become irrelevant for modern applications, so all implementations of SNDCP for GPRS only support IP as the payload type.
The SNDCP layer is relevant to the protocol stack of the mobile station and that of the SGSN, and works when a PDP Context is established and the quality of service has been negotiated.
Services offered by SNDCP:
The SNDCP layer primarily converts, encapsulates and segments external network formats (like Internet Protocol datagrams) into sub-network formats (called SNPDUs). It also performs compression of NPDUs to make data transmission more efficient. It handles PDU transfers for multiple PDP contexts and also ensures that NPDUs from each PDP context are transmitted to the LLC layer in sufficient time to maintain the QoS. SNDCP provides services to the higher layers which may include connectionless and connection-oriented mode, compression, multiplexing and segmentation. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
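As a rough illustration of the segmentation service, the sketch below splits an N-PDU into fixed-size segments tagged with a PDP context identifier (NSAPI) and a sequence number. It is not the actual SN-PDU header layout defined in the GPRS specifications; all field names and sizes here are simplified assumptions.

```python
# Simplified sketch of SNDCP-style segmentation of an N-PDU into SN-PDUs.
# NOTE: this is not the real SN-PDU header from the GPRS specification; the
# field names and sizes below are simplified assumptions for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class SnPdu:
    nsapi: int        # identifies the PDP context the payload belongs to
    seq: int          # segment number within the N-PDU
    last: bool        # marks the final segment
    payload: bytes    # a slice of the (possibly compressed) N-PDU

def segment_npdu(npdu: bytes, nsapi: int, max_payload: int = 500) -> List[SnPdu]:
    """Split one N-PDU (e.g. an IP datagram) into SN-PDU segments."""
    chunks = [npdu[i:i + max_payload] for i in range(0, len(npdu), max_payload)] or [b""]
    return [SnPdu(nsapi, seq, seq == len(chunks) - 1, chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(segments: List[SnPdu]) -> bytes:
    """Inverse operation performed by the receiving SNDCP entity."""
    return b"".join(s.payload for s in sorted(segments, key=lambda s: s.seq))

ip_datagram = bytes(1200)                       # stand-in for an IP packet
sn_pdus = segment_npdu(ip_datagram, nsapi=5)    # these would be handed to the LLC layer
assert reassemble(sn_pdus) == ip_datagram
```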
**Dichloropane**
Dichloropane:
Dichloropane ((−)-2β-Carbomethoxy-3β-(3,4-dichlorophenyl)tropane, RTI-111, O-401) is a stimulant of the phenyltropane class that acts as a serotonin–norepinephrine–dopamine reuptake inhibitor (SNDRI) with IC50 values of 3.13, 0.79 and 18 nM, respectively. In animal studies, dichloropane had a slower onset and longer duration of action compared to cocaine. Methylecgonidine is the direct precursor to this compound.
Trans-CO2Me group:
The thermodynamic isomer with a trans-CO2Me group is still active. This isomer was used by Neurosearch to make three different phenyltropanes which were tested in clinical trials.
Tesofensine Brasofensine NS-2359 (GSK-372,475) | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sex differences in schizophrenia**
Sex differences in schizophrenia:
Sex differences in schizophrenia are widely reported. Men and women exhibit different rates of incidence and prevalence, age at onset, symptom expression, course of illness, and response to treatment. Reviews of the literature suggest that understanding the implications of sex differences on schizophrenia may help inform individualized treatment and positively affect outcomes.
Incidence and prevalence:
For both men and women, incidence of schizophrenia onset peaks at multiple points across the lifespan. For men, the highest frequency of incidence onset occurs in the early twenties and there is evidence of a second peak in the mid-thirties. For women, there is a similar pattern with peaks in the early twenties and middle age. Studies have also demonstrated a tertiary peak for women in the early sixties. Men have higher frequency rates of onset than women from the early twenties to middle age, and women have higher frequency rates of onset starting in late middle age. 2005 and 2008 studies of prevalence rates of schizophrenia estimate that the lifetime likelihood of developing the disorder is 0.3–0.7%, and did not find evidence of sex differences. However, other studies have found a higher prevalence and severity in males than females.
Clinical presentation:
Symptom expression systematically differs between men and women. Women are more likely to experience high levels of depressive symptoms (i.e., low mood, anhedonia, fatigue) at illness onset and over the course of illness. Men are more likely to experience more negative symptoms than women at illness onset. There is conflicting evidence related to sex differences in the expression of positive symptoms. Some studies have found that women are more likely to experience positive symptoms. Other studies have found no significant sex differences in the expression of positive symptoms. Younger age of onset is also related to earlier hospitalizations in men and more acute symptom severity in women. Ties have been found between schizophrenic women's estrogen levels and their level of schizophrenia symptoms. Such women have sometimes been found to benefit from hormonal treatment. Menstrual psychosis and postpartum psychosis may in some cases be linked to an underlying schizophrenic condition.
Differences in course of illness and treatment outcomes:
Course of illness and treatment outcomes: Longitudinal studies have found evidence of sex differences in presence of psychosis, global outcome, and recovery across periods of 15–20 years. Several studies have demonstrated that women with schizophrenia are more likely to exhibit significantly greater reduction in psychotic symptoms, as well as better cognitive and global functioning relative to men. Additionally, studies have found that women are more likely to experience a period of recovery across the lifespan than men. Further, there is consistent evidence of higher mortality rates, suicide attempts and completions, homelessness, and poorer family and social support in men compared to women. The extent to which these observed differences can be attributed to age of onset is currently unclear. Some studies demonstrate that age at illness onset likely contributes to observed sex differences in course of illness and treatment outcomes. Increased negative and cognitive symptoms and poorer overall treatment outcomes are both related to younger age at onset, while fewer negative and cognitive symptoms are associated with older age at onset. These findings are consistent with the patterns of symptom expression observed in men and women and the relative age of onset for each gender. It is possible that men are more likely to experience poorer overall outcomes than women because of the relationship between younger age at onset and symptom severity. However, some longitudinal studies have found that sex is a unique predictor of functional outcome over and above the effects of age.
Differences in course of illness and treatment outcomes:
Differences in response to antipsychotic medications: Clinical trials examining sex differences in the efficacy of atypical antipsychotic medications found greater rates of symptom reduction in women compared to men. However, women are at a greater risk for experiencing weight gain and developing metabolic syndrome as a result of antipsychotic medication use. It is possible, however, that these differences in treatment response may be confounded by sex differences in clinical symptom severity and age at illness onset described above.
Factors contributing to sex differences:
Biological factors: The steroids and hormones associated with sex differentiation during fetal development have critical effects on neuronal development in humans, and there is evidence that these hormones have implications for sex differences in brain abnormalities observed in adults with schizophrenia. MRI studies have revealed more severe brain damage in men diagnosed with schizophrenia than women. Specifically, larger lateral and third ventricles and reduced volumes of critical regions such as the hippocampus, amygdala, and prefrontal cortical regions have been observed in men. These brain abnormalities likely contribute to the observed short-term and long-term memory deficits in men diagnosed with schizophrenia. It has been hypothesized that estrogen may serve a protective role in female development, buffering against the development of pervasive damage to these regions. Further support for this hypothesis derives from the observation of a third peak of onset for women after menopause, which is associated with a reduction of estrogen, and the increased response to treatment in pre-menopausal women compared to post-menopausal women. Additionally, there is evidence that estradiol may be an effective adjunct to antipsychotic medication in reducing psychotic symptoms.
Factors contributing to sex differences:
Social and environmental factors: Social cognition and social functioning: Premorbid social functioning and social cognition, robust predictors of relapse in this population, differ significantly between men and women. Men have poorer overall premorbid social functioning and social cognition, which is associated with higher rates of isolation, loneliness, and lower quality of life. Social cognitive and functional deficits are also related to the increased expression of negative symptoms observed in men. Additionally, these factors are associated with reduced social network size and lower marriage rates in men with schizophrenia compared to women. Younger age at onset in men may also negatively impact community reintegration following illness onset by delaying the development of life skills necessary to develop strong social support networks and foster self-perceptions of efficacy and agency.
Factors contributing to sex differences:
Substance abuse and dependence: Sex-related differences in substance use and dependence have been observed in individuals with schizophrenia and those at risk for developing the illness. In early adolescence, sex-related differences in cannabis use have been observed, with males using more heavily than females in the general population and in those at risk for developing schizophrenia. There is evidence that these differences could in part be attributed to the predictive relationship between levels of testosterone in early adolescence and later cannabis use and dependence. Frequent cannabis use in early adolescence may be a risk factor for developing schizophrenia in men. There is some evidence that heavy, early cannabis use may be associated with impeded cortical maturation in males at a high risk for developing schizophrenia, potentially accelerating the course of illness in these individuals. Substance abuse is also highly correlated with poorer functional outcomes and can significantly influence the course of illness. Current research estimates that 36% of men have a history of illicit substance use versus 16% of women. Nicotine dependence is also highly prevalent in individuals with schizophrenia. An estimated 80% of individuals with schizophrenia smoke cigarettes compared to 20% of the general population. Men with schizophrenia are more likely to start smoking than women, but social factors associated with mental illness contribute to increased rates of smoking in both genders. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Fermi–Walker transport**
Fermi–Walker transport:
Fermi–Walker transport is a process in general relativity used to define a coordinate system or reference frame such that all curvature in the frame is due to the presence of mass/energy density and not to arbitrary spin or rotation of the frame.
Fermi–Walker differentiation:
In the theory of Lorentzian manifolds, Fermi–Walker differentiation is a generalization of covariant differentiation. In general relativity, Fermi–Walker derivatives of the spacelike vector fields in a frame field, taken with respect to the timelike unit vector field in the frame field, are used to define non-inertial and non-rotating frames, by stipulating that the Fermi–Walker derivatives should vanish. In the special case of inertial frames, the Fermi–Walker derivatives reduce to covariant derivatives.
Fermi–Walker differentiation:
With a (−+++) sign convention, this is defined for a vector field X along a curve γ(s): D_F X/ds = DX/ds − (X, DV/ds)V + (X, V) DV/ds, where V is the four-velocity, D is the covariant derivative, and (⋅,⋅) is the scalar product. If D_F X/ds = 0, then the vector field X is Fermi–Walker transported along the curve. Vectors perpendicular to the space of four-velocities in Minkowski spacetime, e.g., polarization vectors, under Fermi–Walker transport experience Thomas precession.
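As a numerical illustration (an assumption-laden sketch, not part of the original text), the code below Euler-integrates the Fermi–Walker transport equation in flat Minkowski spacetime with the (−+++) metric, where the covariant derivative reduces to an ordinary derivative, along a uniformly accelerated worldline, and checks that the transported vector stays orthogonal to the four-velocity and keeps its norm.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, (-+++) convention

def dot(a, b):
    """Scalar product (a, b) with the (-+++) metric."""
    return float(a @ eta @ b)

a0, dtau, steps = 1.0, 1e-4, 20000        # proper acceleration, step size, step count

def V(tau):
    """Four-velocity of a uniformly accelerated observer (hyperbolic worldline)."""
    return np.array([np.cosh(a0 * tau), np.sinh(a0 * tau), 0.0, 0.0])

def A(tau):
    """dV/dtau along the same worldline."""
    return a0 * np.array([np.sinh(a0 * tau), np.cosh(a0 * tau), 0.0, 0.0])

# In flat spacetime D_F X/ds = 0 becomes dX/dtau = (X, dV/dtau) V - (X, V) dV/dtau.
X = np.array([0.0, 1.0, 0.0, 0.0])        # spacelike unit vector, orthogonal to V(0)
for n in range(steps):
    tau = n * dtau
    X = X + dtau * (dot(X, A(tau)) * V(tau) - dot(X, V(tau)) * A(tau))

tau_end = steps * dtau
print(dot(X, V(tau_end)))  # stays close to 0: X remains orthogonal to the four-velocity
print(dot(X, X))           # stays close to 1: the norm of X is preserved
```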
Fermi–Walker differentiation:
Using the Fermi derivative, the Bargmann–Michel–Telegdi equation for spin precession of an electron in an external electromagnetic field can be written as follows: D_F a^τ/ds = 2μ(F^{τλ} − u^τ u_σ F^{σλ}) a_λ, where a^τ and μ are the polarization four-vector and magnetic moment, u^τ is the four-velocity of the electron, a^τ a_τ = −u^τ u_τ = −1, u^τ a_τ = 0, and F^{τσ} is the electromagnetic field strength tensor. The right-hand side describes Larmor precession.
Co-moving coordinate systems:
A coordinate system co-moving with a particle can be defined. If we take the unit vector vμ as defining an axis in the co-moving coordinate system, then any system transforming with proper time is said to be undergoing Fermi–Walker transport.
Generalised Fermi–Walker differentiation:
Fermi–Walker differentiation can be extended to any V with (V, V) ≠ 0. For a vector field X along a curve γ(s) it is defined by: D_F X/ds = DX/ds + (X, DV/ds) V/(V, V) − [(X, V)/(V, V)] DV/ds − (V, DV/ds) [(X, V)/(V, V)²] V. If (V, V) = −1, then (V, DV/ds) = (1/2) d(V, V)/ds = 0, and this generalized derivative reduces to the Fermi–Walker derivative defined above. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lava tree mold**
Lava tree mold:
A lava tree mold, sometimes erroneously called a lava tree cast, is a hollow lava formation that formed around a tree trunk. They are created when lava flows through an area of trees, coating their exterior. The lava cools just enough to create a solid crust around the trunk, but the tree inside burns away leaving a cavity. Molds of trees may be vertical or horizontal. In many cases, mold formation requires slow moving lava, as well as enough time for the mold to chill.
Methane explosions:
A unique phenomenon may occur during the formation of vertical tree molds. As the lava-encased tree burns away, the roots are heated up and generate a "producer" gas, such as methane. If the roots penetrate into a cavity, such as a lava tube or tumulus crack, the gas may come into contact with oxygen. Because a source of heat is already present (the charred root or the lava itself), a methane explosion may follow if the oxygen and producer-gas mixture contains between 5 and 15% fuel by volume. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Unimodular matrix**
Unimodular matrix:
In mathematics, a unimodular matrix M is a square integer matrix having determinant +1 or −1. Equivalently, it is an integer matrix that is invertible over the integers: there is an integer matrix N that is its inverse (these are equivalent under Cramer's rule). Thus every equation Mx = b, where M and b both have integer components and M is unimodular, has an integer solution. The n × n unimodular matrices form a group called the n × n general linear group over Z, which is denoted GLn(Z).
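As a small illustration (not from the original text), the following Python sketch tests unimodularity by checking for a determinant of +1 or −1 and confirms, on an invented example, that the inverse of such a matrix is again an integer matrix and that Mx = b has an integer solution.

```python
import numpy as np

def is_unimodular(M) -> bool:
    """Square integer matrix with determinant +1 or -1."""
    M = np.asarray(M)
    if M.ndim != 2 or M.shape[0] != M.shape[1]:
        return False
    return round(np.linalg.det(M)) in (1, -1)

M = np.array([[2, 3],
              [1, 2]])                    # det = 1, so M is unimodular
print(is_unimodular(M))                   # True
print(np.round(np.linalg.inv(M)))         # integer inverse: [[ 2. -3.] [-1.  2.]]
b = np.array([5, 7])
print(np.linalg.solve(M, b))              # integral solution of M x = b: [-11.  9.]
```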
Examples of unimodular matrices:
Unimodular matrices form a subgroup of the general linear group under matrix multiplication, i.e. the following matrices are unimodular: the identity matrix, the inverse of a unimodular matrix, and the product of two unimodular matrices. Other examples include: Pascal matrices; permutation matrices; the three transformation matrices in the ternary tree of primitive Pythagorean triples; and certain transformation matrices for rotation, shearing (both with determinant 1) and reflection (determinant −1).
Examples of unimodular matrices:
Further examples include the unimodular matrices used (possibly implicitly) in lattice reduction and in the Hermite normal form of matrices.
The Kronecker product of two unimodular matrices is also unimodular. This follows since det(A ⊗ B) = (det A)^q (det B)^p, where p and q are the dimensions of A and B, respectively.
Total unimodularity:
A totally unimodular matrix (TU matrix) is a matrix for which every square non-singular submatrix is unimodular. Equivalently, every square submatrix has determinant 0, +1 or −1. A totally unimodular matrix need not be square itself. From the definition it follows that any submatrix of a totally unimodular matrix is itself totally unimodular (TU). Furthermore it follows that any TU matrix has only 0, +1 or −1 entries. The converse is not true, i.e., a matrix with only 0, +1 or −1 entries is not necessarily unimodular. A matrix is TU if and only if its transpose is TU.
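A direct, if exponential-time, way to test the definition is to enumerate every square submatrix and check that its determinant is 0, +1 or −1. The sketch below (illustrative only, practical just for small matrices) does exactly that.

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(A) -> bool:
    """Brute-force check: every square submatrix has determinant 0, +1 or -1."""
    A = np.asarray(A)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# Unoriented incidence matrix of the bipartite graph K_{2,2}: totally unimodular.
K22 = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1]])
print(is_totally_unimodular(K22))                       # True
print(is_totally_unimodular(np.array([[1, 1],
                                      [1, -1]])))       # False (a determinant of -2)
```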
Total unimodularity:
Totally unimodular matrices are extremely important in polyhedral combinatorics and combinatorial optimization since they give a quick way to verify that a linear program is integral (has an integral optimum, when any optimum exists). Specifically, if A is TU and b is integral, then linear programs of forms like min {cx ∣ Ax ≥ b, x ≥ 0} or max {cx ∣ Ax ≤ b} have integral optima, for any c. Hence if A is totally unimodular and b is integral, every extreme point of the feasible region (e.g. {x ∣ Ax ≥ b}) is integral and thus the feasible region is an integral polyhedron.
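The integrality claim can be observed numerically. The sketch below, an illustration assuming SciPy is available and using invented costs, solves a small covering LP whose constraint matrix has the consecutive-ones property (hence is totally unimodular); the optimum returned lies at an integral vertex even though integrality is never imposed.

```python
import numpy as np
from scipy.optimize import linprog

# Constraint matrix with the consecutive-ones property, hence totally unimodular.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
b = np.array([1, 1, 1])        # integral right-hand side
c = np.array([2, 1, 3, 1])     # invented costs

# min c.x  subject to  A x >= b, x >= 0, rewritten as -A x <= -b for linprog.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 4, method="highs")
print(res.x)                   # an integral optimum, e.g. [0. 1. 0. 1.]
```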
Total unimodularity:
Common totally unimodular matrices: 1. The unoriented incidence matrix of a bipartite graph, which is the coefficient matrix for bipartite matching, is totally unimodular (TU). (The unoriented incidence matrix of a non-bipartite graph is not TU.) More generally, in the appendix to a paper by Heller and Tompkins, A.J. Hoffman and D. Gale prove the following. Let A be an m by n matrix whose rows can be partitioned into two disjoint sets B and C. Then the following four conditions together are sufficient for A to be totally unimodular: every entry in A is 0, +1, or −1; every column of A contains at most two non-zero (i.e., +1 or −1) entries; if two non-zero entries in a column of A have the same sign, then the row of one is in B and the other in C; if two non-zero entries in a column of A have opposite signs, then the rows of both are in B, or both in C. It was realized later that these conditions define an incidence matrix of a balanced signed graph; thus, this example says that the incidence matrix of a signed graph is totally unimodular if the signed graph is balanced. The converse is valid for signed graphs without half edges (this generalizes the property of the unoriented incidence matrix of a graph). 2. The constraints of maximum flow and minimum cost flow problems yield a coefficient matrix with these properties (and with empty C). Thus, such network flow problems with bounded integer capacities have an integral optimal value. Note that this does not apply to multi-commodity flow problems, in which it is possible to have a fractional optimal value even with bounded integer capacities.
Total unimodularity:
3. The consecutive-ones property: if A is (or can be permuted into) a 0-1 matrix in which for every row, the 1s appear consecutively, then A is TU. (The same holds for columns since the transpose of a TU matrix is also TU.) 4. Every network matrix is TU. The rows of a network matrix correspond to a tree T = (V, R), each of whose arcs has an arbitrary orientation (it is not necessary that there exist a root vertex r such that the tree is "rooted into r" or "out of r"). The columns correspond to another set C of arcs on the same vertex set V. To compute the entry at row R and column C = st, look at the s-to-t path P in T; then the entry is: +1 if arc R appears forward in P, −1 if arc R appears backwards in P, 0 if arc R does not appear in P. See more in Schrijver (2003).
Total unimodularity:
5. Ghouila-Houri showed that a matrix is TU iff for every subset R of rows, there is an assignment s:R→±1 of signs to rows so that the signed sum ∑r∈Rs(r)r (which is a row vector of the same width as the matrix) has all its entries in {0,±1} (i.e. the row-submatrix has discrepancy at most one). This and several other if-and-only-if characterizations are proven in Schrijver (1998).
Total unimodularity:
6. Hoffman and Kruskal proved the following theorem. Suppose G is a directed graph without 2-dicycles, P is the set of all dipaths in G, and A is the 0-1 incidence matrix of V(G) versus P. Then A is totally unimodular if and only if every simple arbitrarily-oriented cycle in G consists of alternating forwards and backwards arcs.
Total unimodularity:
7. Suppose a matrix has 0-(±1) entries and in each column, the entries are non-decreasing from top to bottom (so all −1s are on top, then 0s, then 1s are on the bottom). Fujishige showed that the matrix is TU iff every 2-by-2 submatrix has determinant in {0, ±1}. 8. Seymour (1980) proved a full characterization of all TU matrices, which we describe here only informally. Seymour's theorem is that a matrix is TU if and only if it is a certain natural combination of some network matrices and some copies of a particular 5-by-5 TU matrix.
Total unimodularity:
Concrete examples: 1. The following matrix is totally unimodular:
A =
[ −1 −1  0  0  0 +1 ]
[ +1  0 −1 −1  0  0 ]
[  0 +1 +1  0 −1  0 ]
[  0  0  0 +1 +1 −1 ]
This matrix arises as the coefficient matrix of the constraints in the linear programming formulation of the maximum flow problem on a network. 2. Any matrix of the form
A =
[  ⋮         ⋮     ]
[ ⋯ +1 ⋯ +1 ⋯ ]
[  ⋮         ⋮     ]
[ ⋯ +1 ⋯ −1 ⋯ ]
[  ⋮         ⋮     ]
is not totally unimodular, since it has a square submatrix of determinant −2.
Abstract linear algebra:
Abstract linear algebra considers matrices with entries from any commutative ring R, not limited to the integers. In this context, a unimodular matrix is one that is invertible over the ring; equivalently, a matrix whose determinant is a unit. This group is denoted GLn(R). A rectangular k-by-m matrix is said to be unimodular if it can be extended with m − k rows in R^m to a unimodular square matrix. Over a field, unimodular has the same meaning as non-singular. Unimodular here refers to matrices with coefficients in some ring (often the integers) which are invertible over that ring, and one uses non-singular to mean matrices that are invertible over the field. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Season extension**
Season extension:
Season extension in agriculture is any method that allows a crop to be grown beyond its normal outdoor growing season and harvesting time frame, or the extra time thus achieved. To extend the growing season into the colder months, one can use unheated techniques such as floating row covers, low tunnels, caterpillar tunnels, or hoophouses. However, even if colder temperatures are mitigated, most crops will stop growing when the days become shorter than 10 hours, and resume after winter as the daylight increases above 10 hours. A hothouse — a greenhouse which is heated and illuminated — creates an environment where plants are fooled into thinking it is their normal growing season. Though this is a form of season extension for the grower, it is not the usual meaning of the term.: 2, 43–44 Season extension can apply to other climates, where conditions other than cold and shortened period of sunlight end the growing year (e.g. a rainy season).
Structures:
Unheated greenhouses (also known as cold houses) offer protection from the weather, such as sub-optimal temperatures, freezing or drying winds, damaging wind gusts, frost, snow and ice. Unheated greenhouses can extend the growing season of cold hardy vegetables well into the fall and sometimes even through winter until spring. Sometimes supplementary heating is appropriate when temperatures inside the greenhouse drop below 32 degrees Fahrenheit.: 2–12
Passive heated or low-energy greenhouses: Using principles of passive solar building design and including thermal mass will help keep an otherwise unheated greenhouse several degrees warmer at night and on overcast days. Other systems such as ground-coupled heat exchangers, thermal chimneys, thermosiphons, or "climate batteries" can also be used to take ground-stored heat and use it to help heat a greenhouse.
Polytunnels (hoop houses): Whereas a greenhouse has a frame and is glazed with glass or stiff polycarbonate sheets, polytunnels are built with thin polyethylene plastic sheeting stretched over curved frameworks, often extending as long "tunnels". Low tunnels are short enough that a person cannot walk inside them, perhaps 2 to 4 feet tall, and the plastic must be lifted to access the plants. High tunnels are commercial-sized buildings, tall enough to walk through without bending and sometimes tall enough to operate tractors inside. Sometimes polytunnels are built with two layers of plastic sheeting and air blown in between them; this increases the insulation factor, but also cuts down on the amount of sunlight reaching the plants.: 55–58, 125–130
Row covers are lightweight fabrics placed over plants to retain heat and can provide several degrees of frost protection. Row covers, being fabric, allow rain to permeate the material, and also allow plants to transpire without holding in the moisture (as happens under plastic sheeting). Row cover material can be laid directly onto the crop (floating row covers), or laid over a framework of hoops or wires. Row covers can be set up outside of any protective structure or placed over crops within high tunnels or greenhouses. In its simplest function, it allows a light frost to form on the cover instead of on the leaves beneath. Outside row covers must be clipped or pinned in place or weighted down on the edges. Inside row covers may be draped to the ground without further attachment.: 58–66
Cold frames are transparent-roofed enclosures, built low to the ground, used to protect plants from cold weather. Cold frames are found in home gardens and in vegetable farming. They are most often used for growing seedlings that are later transplanted into open ground. A typical cold frame has traditionally been a rectangle of framing lumber with an old window placed over it. Since the advent of plastic sheeting, it is often used instead of old windows.
Temporary coverings: In smaller gardens almost any type of cover, including glass cloches, newspaper cones, baskets, miscellaneous bits of plastic, and mulches such as hay, leaves, or straw can be used as frost-protection that is pulled on and off each day when frost is likely to occur overnight.
Other methods:
Hotbeds: a mass of hot compost is used for the heat it gives off to warm a nearby plant. Typically a few centimetres of soil are placed on top of the compost mass, and the plant grows there, above the rising heat.
Mulches: many materials placed on the soil around plants will help retain heat. Organic mulches include straw, compost, etc. Synthetic mulches, typically plastic sheeting with slits through which plants grow, are used extensively in large-scale vegetable growing. When the plastic is black, its color may absorb more solar heat, but if the plastic is clear, it may provide a greenhouse effect; both concepts are touted in discussions of mulching, usually without citations of any field trials that might clarify which to choose. Organic mulches, in addition to retaining heat by insulating, can potentially also add some heat from their decomposition, although they must be properly chosen, as factors such as thermal or chemical "burning" (excess heat, acidity, or both) and coliform bacteria accompany animal manure used as row-crop mulch. One principle involved is to prefer aged compost over fresh compost for this purpose, as its earlier predigestion by soil microbes ends the early phase of intense heat, low pH, and gut bacteria dominance but still leaves a bit more exothermic potential available.
Raised beds: beds where the soil has been loosened and piled a few inches to more than a foot above the surrounding area heat more quickly in spring, allowing earlier planting. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Adventures with Purpose**
Adventures with Purpose:
Adventures with Purpose is a group of scuba divers who use sonar to locate missing persons and their vehicles in waterbodies. Originally focused on clean-up, they turned their focus to missing persons cold cases. The group documents their efforts on their YouTube channel.
Background:
Adventures with Purpose (AWP) was founded by Oregon-based Sam Ginn and Jared Leisek in 2019. They began as an environmental cleanup agency, removing cars that were polluting waterways. Doug Bishop, a diver and manager of a towing company, also joined the group. After twice finding cars with missing people in them, they determined a need to look for people who have gone missing in or with their vehicles. The channel receives tip-offs and requests from the public through their social media accounts. They do not pursue rewards from family or charge the families or police involved, but will not reject rewards if given. Instead, AWP funds its searches through video views, subscribers, donations, and merchandise sales. In 2020, AWP threatened to sue for a $100,000 reward pledged by five anonymous donors for the discovery of Ethan Kazmerzak, who had disappeared in 2013. The Kazmerzak family donated an undisclosed amount. In October 2022, the team had six members. The following month, Leisek was accused of raping a 9-year-old child at age 16–17 in Utah in 1992. Several team members, including Bishop, diver Nick Rinn, and lead videographer Josh Cantu subsequently left the team. On January 5, 2023, Leisek was charged in Sanpete County, Utah. On January 7, 2023, Adventures with Purpose announced they are on a 3-month tour with a new team of divers and filmmakers.
Search process:
At the identified waterbody, the team traverse the waters in small inflatable boats, scanning the bottom of the waterbody using sonar. Upon identifying areas of interest, they circle the area for further identification on their sonar displays before using a heavy-duty magnet to attach their line to the sunken vehicle. They mark the location with a buoy and then use divers to make a visual identification of the vehicle, retrieve a license plate, search for bodies, and prepare the vehicle and its contents for retrieval by police.
Cases:
The team usually deals with cold cases; however, they have also volunteered for searches in recent cases. For example, on August 22, 2022, they searched for Kiely Rodni, who went missing on August 6, 2022. Some recent cases are also encountered incidentally while searching a waterbody for a cold case. As of June 2023, the group has solved 26 missing person cases. AWP also sometimes cooperates with other teams with the same purpose, such as Exploring with Nug and Chaos Divers. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Characteristics of common wasps and bees**
Characteristics of common wasps and bees:
While observers can easily confuse common wasps and bees at a distance or without close observation, there are many different characteristics of large bees and wasps that can be used to identify them. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scientific technique**
Scientific technique:
A scientific technique is any systematic way of obtaining information of a scientific nature, or of obtaining a desired material or product.
Scientific techniques can be divided into many different groups, e.g.: preparative techniques; synthesis techniques, e.g. the use of Grignard reagents in organic chemistry; growth techniques, e.g. crystal growth or cell cultures in biology; purification techniques, e.g. those in chemistry; measurement techniques; analysis techniques, e.g. ones that reveal atomic or molecular composition.
Characterization techniques, e.g. ones that measure a certain property of a material.
Scientific technique:
Imaging techniques, e.g. microscopy. In some cases these methods have evolved into instrumental techniques that require expensive equipment. This is particularly true in sciences like physics, chemistry, and astronomy. It is customary to abbreviate the names of techniques into acronyms, although this does not hold for all of them. In particular, the advent of the computer has led to a proliferation in the number of techniques, to the point that few scientists still have a good overview of all that is available. See, for example, the list of materials analysis methods and Category:Scientific techniques. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Erosion and tectonics**
Erosion and tectonics:
The interaction between erosion and tectonics has been a topic of debate since the early 1990s. While the tectonic effects on surface processes such as erosion have long been recognized (for example, river formation as a result of tectonic uplift), the opposite (erosional effects on tectonic activity) has only recently been addressed. The primary questions surrounding this topic are what types of interactions exist between erosion and tectonics and what are the implications of these interactions. While this is still a matter of debate, it is clear that Earth's landscape is a product of two factors: tectonics, which can create topography and maintain relief through surface and rock uplift, and climate, which mediates the erosional processes that wear away upland areas over time. The interaction of these processes can form, modify, or destroy geomorphic features on Earth's surface.
Tectonic processes:
The term tectonics refers to the study of Earth's surface structure and the ways in which it changes over time. Tectonic processes typically occur at plate boundaries which are one of three types: convergent boundaries, divergent boundaries, or transform boundaries. These processes form and modify the topography of the Earth's surface, effectively increasing relief through the mechanisms of isostatic uplift, crustal thickening, and deformation in the form of faulting and folding. Increased elevations, in relation to regional base levels, lead to steeper river channel gradients and an increase in orographically localized precipitation, ultimately resulting in drastically increased erosion rates. The topography, and general relief, of a given area determines the velocity at which surface runoff will flow, ultimately determining the potential erosive power of the runoff. Longer, steeper slopes are more prone to higher rates of erosion during periods of heavy rainfall than shorter, gradually sloping areas. Thus, large mountain ranges, and other areas of high relief, formed through tectonic uplift will have significantly higher rates of erosion. Additionally, tectonics can directly influence erosion rates on a short timescale, as is clear in the case of earthquakes, which can trigger landslides and weaken surrounding rock through seismic disturbances.
Tectonic processes:
While tectonic uplift in any case will lead to some form of increased elevation, thus higher rates of erosion, a primary focus is set on isostatic uplift as it provides a fundamental connection between the causes and effects of erosional-tectonic interactions.
Tectonic processes:
Isostatic uplift: Understanding the principle of isostasy is a key element to understanding the interactions and feedbacks shared between erosion and tectonics. The principle of isostasy states that when free to move vertically, lithosphere floats at an appropriate level in the asthenosphere so that the pressure at a depth of compensation in the asthenosphere well below the base of the lithosphere is the same. Isostatic uplift is both a cause and an effect of erosion. When deformation occurs in the form of crustal thickening an isostatic response is induced causing the thickened crust to sink, and surrounding thinner crust to uplift. The resulting surface uplift leads to enhanced elevations, which in turn induces erosion. Alternatively, when a large amount of material is eroded away from the Earth's surface, uplift occurs in order to maintain isostatic equilibrium. Because of isostasy, high erosion rates over significant horizontal areas can effectively suck up material from the lower crust and/or upper mantle. This process is known as isostatic rebound and is analogous to Earth's response following the removal of large glacial ice sheets. Isostatic uplift and corresponding erosion are responsible for the formation of regional-scale geologic features as well as localized structures. Two such examples include: Continental shields – Generally large areas of low relief (<100 m) in Earth's crust where Precambrian crystalline igneous and high-grade metamorphic rocks are exposed. Shields are considered tectonically stable areas in comparison to the activity occurring at their margins and the boundaries between plates, but their formation required large amounts of tectonic activity and erosion. Shields, along with stable platforms, are the basic tectonic components of continents, therefore understanding their development is critical to understanding the development of other surface features on Earth. Initially, a mountain belt is formed at a convergent plate margin. Transformation of a mountain belt to a shield is majorly dependent on two factors: (1) erosion of the mountain belt by running water and (2) isostatic adjustment resulting from the removal of surface rock due to erosion. This process of erosion followed by isostatic adjustment continues until the system is at isostatic equilibrium. At this point large-scale erosion can no longer occur because the surface has eroded down to nearly sea-level and uplift ceases due to the system's state of equilibrium. River anticlines – Geologic structures formed through the focused uplift of rock underlying confined areas of high erosion (i.e., rivers). Isostatic rebound resulting from the rapid removal of overlying rock, via erosion, causes the weakened areas of crustal rock to uplift from the apex of the river. In order for the development of these structures to occur the erosion rate of the river must exceed both the average erosional rate of the area, and the rate of uplift of the orogen. The two factors influencing the development of these structures are stream power of the associated river and the flexural rigidity of the crust in the area. The combination of increased stream power with decreased flexural rigidity results in the system's progression from a transverse anticline to a river anticline.
Tectonic processes:
Channel flow Channel flow describes the process through which hot, viscous crustal material flows horizontally between the upper crust and lithospheric mantle, and is eventually pushed to the surface. This model aims to explain features common to metamorphic hinterlands of some collisional orogens, most notably the Himalaya–Tibetan Plateau system. In mountainous areas with heavy rainfall (thus, high erosion rates) deeply incising rivers will form. As these rivers wear away the Earth's surface two things occur: (1) pressure is reduced on the underlying rocks effectively making them weaker and (2) the underlying material moves closer to the surface. This reduction of crustal strength, coupled with the erosional exhumation, allows for the diversion of the underlying channel flow toward Earth's surface.
Erosional processes:
The term erosion refers to the group of natural processes, including weathering, dissolution, abrasion, corrosion, and transportation, by which material is worn away from Earth's surface to be transported and deposited in other locations.
Erosional processes:
Differential erosion – Erosion that occurs at irregular or varying rates, caused by the differences in the resistance and hardness of surface materials; softer and weaker rocks are rapidly worn away, whereas harder and more resistant rocks remain to form ridges, hills, or mountains. Differential erosion and the tectonic setting are two of the most important controls on the evolution of continental landscapes on Earth. The feedback of erosion on tectonics is given by the transportation of surface, or near-surface, mass (rock, soil, sand, regolith, etc.) to a new location. This redistribution of material can have profound effects on the state of gravitational stresses in the area, dependent on the magnitude of mass transported. Because tectonic processes are highly dependent on the current state of gravitational stresses, redistribution of surface material can lead to tectonic activity. While erosion in all of its forms, by definition, wears away material from the Earth's surface, the process of mass wasting as a product of deep fluvial incision has the highest tectonic implications.
Erosional processes:
Mass wasting: Mass wasting is the geomorphic process by which surface material moves downslope, typically as a mass, largely under the force of gravity. As rivers flow down steeply sloping mountains, deep channel incision occurs as the river's flow wears away the underlying rock. Large channel incision progressively decreases the amount of gravitational force needed for a slope failure event to occur, eventually resulting in mass wasting. Removal of large amounts of surface mass in this fashion will induce an isostatic response resulting in uplift until equilibrium is reached.
Erosional processes:
Effects on structural evolution: Recent studies have shown that erosional and tectonic processes have an effect on the structural evolution of some geologic features, most notably orogenic wedges. Highly useful sand box models, in which horizontal layers of sand are slowly pressed against a backstop, have shown that the geometries, structures, and kinematics of orogenic wedge formation with and without erosion and sedimentation are significantly different. Numerical models also show that the evolution of orogens, their final tectonic structure, and the potential development of a high plateau all are sensitive to the long-term climate over the mountains, for example, the concentration of precipitation on one side of the orogen due to orographic lift under a dominant wind direction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hypercubic honeycomb**
Hypercubic honeycomb:
In geometry, a hypercubic honeycomb is a family of regular honeycombs (tessellations) in n-dimensional spaces with the Schläfli symbols {4,3...3,4} and containing the symmetry of Coxeter group Rn (or B~n–1) for n ≥ 3.
The tessellation is constructed from 4 n-hypercubes per ridge. The vertex figure is a cross-polytope {3...3,4}.
The hypercubic honeycombs are self-dual.
Coxeter named this family as δn+1 for an n-dimensional honeycomb.
Wythoff construction classes by dimension:
A Wythoff construction is a method for constructing a uniform polyhedron or plane tiling.
The two general forms of the hypercube honeycombs are the regular form with identical hypercubic facets and one semiregular, with alternating hypercube facets, like a checkerboard.
A third form is generated by an expansion operation applied to the regular form, creating facets in place of all lower-dimensional elements. For example, an expanded cubic honeycomb has cubic cells centered on the original cubes, on the original faces, on the original edges, and on the original vertices, creating 4 colors of cells around each vertex in 1:3:3:1 counts.
The orthotopic honeycombs are a family topologically equivalent to the cubic honeycombs but with lower symmetry, in which each of the three axial directions may have different edge lengths. The facets are hyperrectangles, also called orthotopes; in 2 and 3 dimensions the orthotopes are rectangles and cuboids respectively. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Preventer**
Preventer:
A gybe preventer, preventer, or jibe-guard, is a mechanical device on a sailing vessel which limits the boom's ability to swing unexpectedly across the boat due to an unplanned accidental jibe. During an unplanned accidental jibe (or gybe), neither the crew nor the boat is set up properly to execute a planned jibe. As a result, the uncontrolled boom will swing across the boat, potentially inflicting injury or knocking crew members overboard. The mainsheet or traveller can also inflict serious injury. Uncontrolled jibes may also damage the boat itself.
Preventer:
Rigging a preventer on a yacht's mainsail is often performed when the wind is behind the beam (i.e. when it's coming from more than 90° off the bow). It can also be useful at other times when there is more swell than wind, a situation when the wind may not have the strength to keep the boom in place as the boat dips and rolls.
Preventer:
On any boat that is sailing downwind without a preventer, strict 'heads-down' procedures must be enforced anywhere within the boom's arc. Certain areas of the side-decks and maybe the cockpit also have to be strictly 'no-go' to all crew depending on what the boom and mainsheet could do in unchecked full swing.
The preventer with the most mechanical advantage is a line, from the end of the boom, led outside the shrouds and a long way forward - perhaps right up to the bow - through a block, back to the cockpit and secured within reach of the mainsheet.
Preventer:
Many cruising sailors prefer to rig two tackles (port and starboard) that run from the midpoint of the boom to blocks on a track such as the headsail-sheet-block track. These are typically 2- to 4-part tackles for greater purchase. This rig can also be used as a boom vang without taking up space under the mast that may be essential to the cruising sailor for dinghy stowage and other uses.
Preventer:
There is a possibility of breaking the main boom with a preventer rig such as this, but many modern yachts are considered to have short enough booms and be beamy enough to overlook this possibility in normal use. For example, while running with the preventer cleated, a large swell could roll the boat, dipping the boom end into the water, snapping the boom in half.
Preventer:
Care should be taken when selecting the rope which is used for preventer lines.
To reduce the shock loads on the tackles, for example in an unexpected jibe, three-strand nylon line may be preferred over braided cored line.
Boom Brake:
Another form of preventer is the boom brake, which, when sailing downwind, can also be used to jibe the mainsail in a slow, measured action. The brake usually rides on a line running perpendicular to, and below the boom. When the boom brake is actuated, friction on the line either works as a preventer (stops the boom from moving in the direction that would slacken the main sheet), or slows the boom’s speed while jibing. The brake is actuated by either tensioning the line upon which it rides or by using a second line to adjust the brake itself.
Jibing:
When jibing a fully loaded mainsail in a following sea, the following procedure may be used. Using the steering, the stern of the boat is carefully brought up into the wind. Then the leeward, working preventer is released little by little, while the mainsheet is shortened to bring in the boom. It is important to maintain at least a turn or two around the preventer's cleat the whole time ready to catch an early jibe during this stage of the manoeuvre. The mainsheet should pull the preventer around its cleat, without it being offered any slack. All the while it is also necessary to take in slack on the lazy preventer to keep it under control (i.e. prevent it getting tangled around something) until it is needed.
Jibing:
When the boom is as near as possible to midships (near to running fore-and-aft along the boat's centreline), the working preventer is slackened, the lazy one tightened, and the mainsheet made very secure. It is important that all crew are safe from where the boom may swing, and a call of "Jibe-ho" is a traditional last warning for this. At this point, a slight steering adjustment will actually jibe the sail. The course of the boat may slew further than expected, which can be ignored as it gives a shorthanded crew time to do the next three things: Run out the mainsheet as fast as possible without burning the hands, ensuring that the newly-lazy preventer runs free, then tighten in and secure the newly working preventer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tile Studio**
Tile Studio:
Tile Studio is a Windows-only editor for graphics and level data of tile-based video games. The application combines a bitmap editor for creating graphics and a map editor for designing level maps. A notable feature, distinguishing this tool from the approach of similar programs like Mappy and Tiled which define their own general map file format, is export of assets to arbitrary files through a comprehensive and sophisticated scripting language.
Tile Studio:
Tile Studio was created by Mike Wiering / Wiering Software.
Defining the Output Format:
Asset export scripts have a .TSD file extension and a line-oriented syntax. On the website, there are examples of .TSD files for use with several programming languages and libraries (C, Delphi, Java, BlitzBasic, etc.). The user is expected to write a specific .TSD file for each project.
Defining the Output Format:
The output consists of any number of text files, binary files, or images (.bmp or .png). For example, a tileset can be exported as a bitmap containing all the tiles (or only the tiles/tile combinations that are actually used in the maps), or it can be exported pixel by pixel to a text file with RGB values. The following example creates a .bmp file with graphics and a map file in a custom text format. Notice the looping constructs and the placeholders, e.g. #tileset iterates over tilesets and populates TileSetIdentifier with the name of each tileset.
Defining the Output Format:
#tileset
#tilebitmap tileset_<TileSetIdentifier>.bmp 320
#end tilebitmap
#end tileset
#file map_<ProjectName>.tsmap
<TileSetCount>
#tileset
tileset_<TileSetIdentifier>.bmp
<TileSetNumber>,<TileWidth>,<TileHeight>,<HorizontalTileCount>,<VerticalTileCount>
<TileSetBitmapWidth>,<TileSetBitmapHeight>,<TransparentColorR>,<TransparentColorG>,<TransparentColorB>
<MapCount>
#map
<MapNumber>,<MapWidth>,<MapHeight>,<ScrollX>,<ScrollY>
#mapdata
\n<TileNumber>,<Bounds>,<MapCode>
#end mapdata
#end map
<SequenceCount>
#sequence
<SequenceNumber>
<SequenceLength>
#sequencedata
\n<TileNumber>
#end sequencedata
#end sequence
#end tileset
#end file
License:
Tile Studio is free open source software under the Mozilla Public License (with the exception of the .tsd files and any code that is copied to the output, which are in the public domain), so Tile Studio can be used for projects that are under any license. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Faithful representation**
Faithful representation:
In mathematics, especially in an area of abstract algebra known as representation theory, a faithful representation ρ of a group G on a vector space V is a linear representation in which different elements g of G are represented by distinct linear mappings ρ(g).
In more abstract language, this means that the group homomorphism ρ:G→GL(V) is injective (or one-to-one).
Caveat:
While representations of G over a field K are de facto the same as K[G]-modules (with K[G] denoting the group algebra of the group G), a faithful representation of G is not necessarily a faithful module for the group algebra. In fact each faithful K[G]-module is a faithful representation of G, but the converse does not hold. Consider for example the natural representation of the symmetric group Sn in n dimensions by permutation matrices, which is certainly faithful. Here the order of the group is n! while the n × n matrices form a vector space of dimension n². As soon as n is at least 4, dimension counting means that some linear dependence must occur between permutation matrices (since 24 > 16); this relation means that the module for the group algebra is not faithful.
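The dimension-counting argument can be verified directly. The following sketch (illustrative, using NumPy) stacks all 24 permutation matrices of S4 as vectors of length 16 and shows that their span has rank well below 24, so a linear dependence among them must exist.

```python
import numpy as np
from itertools import permutations

# All 4x4 permutation matrices of S_4, each flattened to a vector of length 16.
vectors = []
for p in permutations(range(4)):
    P = np.zeros((4, 4))
    P[np.arange(4), list(p)] = 1.0        # row i has its 1 in column p[i]
    vectors.append(P.flatten())

M = np.stack(vectors)                     # shape (24, 16): 24 matrices in a 16-dim space
print(M.shape)                            # (24, 16)
print(np.linalg.matrix_rank(M))           # 10, well below 24: a linear dependence exists
```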
Properties:
A representation V of a finite group G over an algebraically closed field K of characteristic zero is faithful (as a representation) if and only if every irreducible representation of G occurs as a subrepresentation of SnV (the n-th symmetric power of the representation V) for a sufficiently high n. Also, V is faithful (as a representation) if and only if every irreducible representation of G occurs as a subrepresentation of the n-th tensor power of the representation V for a sufficiently high n. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hiring hall**
Hiring hall:
In organized labor, a hiring hall is an organization, usually under the auspices of a labor union, which has the responsibility of furnishing new recruits for employers who have a collective bargaining agreement with the union. It may also refer to a union hall, or the office from which the union may conduct its activities.
Hiring hall:
A hiring hall in the construction trades does not offer "new recruits" to employers; it is more fairly described as a hub through which experienced workers move between employers. Whether they quit, were laid off, or were fired, they are seeking work from contractors in need of employees: it is called a hiring hall, not a referral hall, for a reason. The employer's use of the hiring hall may be voluntary, or it may be compulsory under the terms of the employer's contract with the union (or, in a few cases, under the labor laws of the jurisdiction in question). Compulsory use of a hiring hall effectively turns the employer into a closed shop, because employees must join the union before they can be hired. This is the primary argument against the practice, since it prevents non-union workers from gaining employment.
Hiring hall:
Arguments in favor of the institution include that the presence of a hiring hall places the responsibility on the union to ensure that its members are suitably qualified and responsible individuals before assigning them to an employer. The union will often enforce a basic code of conduct among its members to ensure smooth operation of the hiring hall (to prevent members from double-booking, for example). If a hiring hall is reputable, the relationship between the union and the employer can be relatively harmonious. There are arguments that this actually benefits contractors who hire employees for the duration of a specific job. This is primarily due to the union handling qualifications and other eligibility requirements. Additionally, the union will also maintain employment records on the individual, meaning that behavior issues from other employers can be documented and reacted to. Thus there is a strong incentive to maintain good conduct to keep union membership. Workers benefit from having a more stable source of benefits such as insurance and pension plans. Contractors are still responsible for paying into these plans, but union members are more protected from lapses in coverage.
Hiring hall:
The prevalence of compulsory hiring hall arrangements in Canada varies from trade to trade and from province to province, since labor law there is under provincial jurisdiction. The situation in Europe also varies from country to country.
Hiring halls are generally most prevalent in skilled trades and where employers need to find qualified recruits on short notice. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Concussion grading systems**
Concussion grading systems:
Concussion grading systems are sets of criteria used in sports medicine to determine the severity, or grade, of a concussion, the mildest form of traumatic brain injury. At least 16 such systems exist, and there is little agreement among professionals about which is the best to use. Several of the systems use loss of consciousness and amnesia as the primary determinants of the severity of the concussion. The systems are widely used to determine when it is safe to allow an athlete to return to competition. Concern exists that multiple concussions received in a short time may present an added danger, since an initial concussion may leave the brain in a vulnerable state for a time. Injured athletes are prohibited from returning to play before they are symptom-free during rest and exertion and their neuropsychological tests are normal again, in order to avoid the risk of cumulative effects such as decline in mental function and second-impact syndrome, which may occur on very rare occasions after a concussion sustained before the symptoms of an earlier concussion have resolved.
Concussion grading systems:
It is estimated that over 40% of high school athletes return to action prematurely and over 40,000 youth concussions occur annually. Concussions account for nearly 10% of sport injuries, and are the second leading cause of brain injury for young people ages 15–24. Three grading systems are followed most widely: the first by neurosurgeon Robert Cantu, another by the Colorado Medical Society, and a third by the American Academy of Neurology. The Cantu system has become somewhat outdated.
Concussion grading systems:
Grade I Grade one concussions come with no loss of consciousness and less than 30 minutes of post-traumatic amnesia.
Grade II Grade two concussion patients lose consciousness for less than five minutes or have amnesia for between 30 minutes and 24 hours.
Grade III People with grade three concussions have a loss of consciousness lasting longer than five minutes or amnesia lasting longer than 24 hours.
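The three grades above reduce to a simple decision rule on two quantities, the duration of loss of consciousness (LOC) and the duration of post-traumatic amnesia. The following Python sketch is purely illustrative: the function name, units, and handling of boundary cases are assumptions, and it is not a clinical tool.

```python
# Illustrative mapping of the grade criteria listed above; not a clinical tool.
# loc_minutes: duration of loss of consciousness in minutes (0 if none).
# amnesia_hours: duration of post-traumatic amnesia in hours (0 if none).
def concussion_grade(loc_minutes: float, amnesia_hours: float) -> int:
    if loc_minutes > 5 or amnesia_hours > 24:
        return 3   # Grade III: LOC longer than 5 min, or amnesia longer than 24 h
    if loc_minutes > 0 or amnesia_hours >= 0.5:
        return 2   # Grade II: LOC under 5 min, or amnesia between 30 min and 24 h
    return 1       # Grade I: no LOC and less than 30 min of amnesia

print(concussion_grade(0, 0.25))   # -> 1
print(concussion_grade(2, 0))      # -> 2
print(concussion_grade(0, 48))     # -> 3
```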
Concussion grading systems:
Originally developed by Teasdale and Jennett (1974), the Glasgow Coma Scale (GCS) (see Table C-1) is a scoring scale for eye opening, motor, and verbal responses that can be administered to athletes on the field to objectively measure their level of consciousness. A score is assigned to each response type for a combined total score of 3 to 15 (with 15 being normal). An initial score of less than 5 is associated with an 80 percent chance of a lasting vegetative state or death. An initial score of greater than 11 is associated with a 90 percent chance of complete recovery (Teasdale and Jennett, 1974). Because most concussed individuals score 14 or 15 on the 15-point scale, its primary use in evaluating individuals for sports-related concussions is to rule out more severe brain injury and to help determine which athletes need immediate medical attention (Dziemianowicz et al., 2012).
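As a rough illustration of how the scale is applied, here is a minimal Python sketch that sums the three component scores and applies the prognostic thresholds quoted above. The component ranges (eye 1–4, verbal 1–5, motor 1–6) are the standard GCS ranges, the helper names are hypothetical, the wording for intermediate scores is an assumption not taken from the text, and this is not a clinical tool.

```python
# Minimal sketch of Glasgow Coma Scale scoring using the thresholds quoted
# above (Teasdale and Jennett, 1974). Helper names are hypothetical and this
# is not a clinical tool. Component ranges: eye 1-4, verbal 1-5, motor 1-6.
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    total = eye + verbal + motor          # combined total of 3 to 15 (15 = normal)
    assert 3 <= total <= 15, "component scores out of range"
    return total

def interpret_initial_score(total: int) -> str:
    if total < 5:
        return "initial score < 5: about 80% chance of lasting vegetative state or death"
    if total > 11:
        return "initial score > 11: about 90% chance of complete recovery"
    return "intermediate score: outcome less clear; further evaluation needed"

score = gcs_total(eye=4, verbal=5, motor=6)
print(score, "-", interpret_initial_score(score))   # 15 - complete-recovery band
```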
American Academy of Neurology guidelines:
The guidelines devised in 1997 by the American Academy of Neurology (AAN) were based on those formulated by the Colorado Medical Society; however, in 2013 the AAN published a revised set of guidelines that moved away from concussion grading, emphasizing more detailed neurological assessment prior to return to play. The guidelines emphasized that younger patients should be managed more conservatively and that the risk of recurrent concussion was highest within 10 days following the initial injury. Risk of concussion was also stratified by sport, training time, and player body mass index.
American Academy of Neurology guidelines:
The guideline also called into question the existence of the "second impact syndrome", proposing instead that athletes with a previous concussion may be more vulnerable to severe injury due to decreased reaction time and coordination, symptoms of the initial injury.
Colorado Medical Society guidelines:
The Colorado Medical Society guidelines were published in 1991 in response to the death of a high school athlete due to what was thought to be second-impact syndrome. According to the guidelines, a grade I concussion consists of confusion only, grade II includes confusion and post-traumatic amnesia, and grades III and IV involve a loss of consciousness. By these guidelines, an athlete who has suffered a concussion may return to sports after having been free of symptoms, both at rest and during exercise, as shown in the following table: | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hydrogen energy vision and technology roadmap**
Hydrogen energy vision and technology roadmap:
The Hydrogen energy vision and technology roadmap is a roadmap of China initiated by the Ministry of Science and Technology; it makes hydrogen and fuel cell technologies important thematic priorities of the country's science and technology (S&T) development plan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perfluorosulfonic acids**
Perfluorosulfonic acids:
Perfluorosulfonic acids (PFSAs) are chemical compounds of the formula CnF(2n+1)SO3H and thus belong to the family of perfluorinated and polyfluorinated alkyl compounds (PFASs). The simplest example of a perfluorosulfonic acid is the trifluoromethanesulfonic acid. Perfluorosulfonic acids with six or more perfluorinated carbon atoms, i.e. from perfluorohexanesulfonic acid onwards, are referred to as long-chain.
Properties:
Perfluorosulfonic acids are organofluorine analogues of conventional alkanesulfonic acids, but they are several pKa units stronger (and are therefore strong acids). Their perfluoroalkyl chain has a highly hydrophobic character.
Use:
Perfluorooctanesulfonic acid, for example, has been used in hard chromium plating.
Regulation:
Perfluorooctanesulfonic acid was included in Annex B of the Stockholm Convention in 2009 and subsequently in the EU POPs Regulation. Perfluorohexanesulfonic acid, including its salts and related compounds, was proposed for inclusion in the Stockholm Convention.
Literature:
OECD, ed. (2022), "3. Perfluoroalkane sulfonic (a) and sulfinic (b) acids", Fact Cards of Major Groups of Per- and Polyfluoroalkyl Substances (PFASs), OECD Environment, Health and Safety Publications 68 Series on Risk Management, pp. 31–41 | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**BBC Lab UK**
BBC Lab UK:
BBC Lab UK was a BBC website that allowed the public to take part in online experiments by completing tests and surveys. The website was active for four years until its data collection ceased in May 2013. Details of the experiments and projects have now been archived. Lab UK was commissioned in 2008 by BBC Commissioner Lisa Sargood, inspired by other online ‘citizen science’ projects such as Galaxy Zoo, the BBC Climate Change Experiment and BugGuide. The intention was to enable leading academics to harness the BBC's audience, using mass public participation to explore scientific hypotheses with very large data sets. The results would be published in academic journals and made available to the public through the BBC website and television. Lab UK was conceived by BBC executive producer Richard Cable, who also edited it from 2008 to 2011. A number of professional scientists were engaged to consult on the design and development of the website, as well as the design of individual experiments which the public would engage with.
BBC Lab UK:
Each web experiment was structured to give feedback on the activity of the participant, immediately after they had submitted their data. Collectively, the experiment data would be handed over securely to the scientist who had designed the experiment. The analysis of the experiment data would be conducted by the scientist's research team. Where possible, the BBC actively encouraged the publication of the results in peer-reviewed journals. The first experiment was published in 2009 and the final experiment was launched in 2012. The website stopped collecting data in May 2013 after its migration to the Knowledge & Learning product. The website was formally archived in March 2016.
History:
The BBC's iF&L department had published several online quizzes to accompany BBC science television programming. Scientists such as Dr. Val Curtis and Dr. Stian Reimers asked whether they might analyse the anonymous data generated by the completion of these online quizzes. In ‘The Disgust Test’ and ‘Sex ID’, specific hypotheses were tested in the online experiments. The results were published in specialist journals. BBC's Multiplatform commissioners decided to make a re-usable experiment publication platform that could save all data to a common database.
Experiments:
The Big Stress Test The Big Stress Test was Lab UK's beta launch candidate in May 2009. The experiment was designed by Professor Peter Kinderman from the University of Liverpool and Dr Sara Tai from the University of Manchester. The experiment was promoted via the BBC's mental health and wellbeing website ‘Headroom’. This experiment received roughly 6,000 participants. The experiment was updated and re-launched in 2011.
Experiments:
Brain Test Britain A collaboration with the Medical Research Council, Cambridge University, King's College London, the Alzheimer's Society and the BBC One Television programme, Bang Goes The Theory - this longitudinal experiment, launched in September 2009, sought to discover whether brain training games made any improvement to IQ for a healthy, general population. The Lab UK website hosted a series of brain training games and the study was structured as a clinical randomised controlled trial. The study followed over 60,000 people for six weeks. The study designers, Dr Adrian Owen and Professor Clive Ballard, analysed the data and concluded the games made no difference to a healthy adult population. The results were published in Nature and televised on a BBC One special.
Experiments:
The Big Personality Test Launched on BBC One's One Show in November 2009, this cross-sectional survey was developed in partnership with University of Cambridge. It used the Big Five personality measure and a range of other well-used psychometric and health measures to determine the correlations between personality and life outcomes. The results informed the returning series ‘Child of Our Time’. The website featured interactive video presented by Professor Robert Winston, which presented each participant's personal results from the personality test. The experiment received over 100,000 participants in the first two days and has received more than 750,000 participants in total.
Experiments:
The Web Behaviour Test This experiment was launched in conjunction with the BBC Two programme ‘The Virtual Revolution’ in February 2010.
How Musical Are You? Developed by researchers at Goldsmiths College, University of London, this experiment explored the relationship between potentially untapped musical ability and musical sophistication. The test was promoted in conjunction with BBC Radio 3’s ‘Genius of Mozart' season in January 2011.
Experiments:
The Great British Class Survey This was the first social science experiment to be co-commissioned by the BBC’s Current Affairs department. Professor Mike Savage and Professor Fiona Devine designed a survey to measure different sociological capitals. The stated ambition was to create a new snapshot of British society and develop some newer, more relevant class labels for the 21st century. The BBC later commissioned a face-to-face recruited survey of roughly 1000 people to overcome the intrinsic class bias of the BBC audience. The survey was promoted on BBC One’s One Show by Professor Mike Savage in January 2011. (also see 'The Great British Class Survey') The Big Money Test Collaborating with BBC One’s consumer programme Watchdog, this experiment aimed to discover the psychological traits that led to money problems in a large general population. The test was designed by Mark Fenton-O'Creevy from the Open University and Adrian Furnham, University College London. It was launched in April 2011 by Watchdog’s finance presenter, Martin Lewis, who featured in the interactive video feedback.
Experiments:
The Big Risk Test This experiment was developed by Professor David Spiegelhalter and Dr. Mike Aitken of Cambridge University. It tested a hypothesis of linkage between numeracy and judgement of risk. The experiment contained various verified measures and a selection of interactive puzzles which tested various aspects of risk judgement. The experiment was promoted on the BBC One science TV programme, “Bang Goes the Theory” in April 2011.
Experiments:
The BBC Stress Test This updated version of the earlier experiment presented more comprehensive feedback and was re-launched by Claudia Hammond on BBC Radio 4’s ‘All in the Mind’ in June 2011.
The Get Yourself Hired Test This experiment into job seekers’ psychological skills accompanied BBC Three’s ‘Up For Hire’ programme. It was promoted in October 2011.
Experiments:
Test Your Morality This experiment was developed by researchers at the London School of Hygiene and Tropical Medicine. Building on from ‘The Disgust Test’, this test aimed to test a Human Superorganism hypothesis, and an evolutionary theory of human morals. The survey contained detailed demographic information and 33 ‘vignettes’ which attempted to test people's responses to immoral behaviour across different moral domains. The test was promoted in November 2011.
Experiments:
Can You Compete Under Pressure? This experiment was developed by researchers from Sheffield and Wolverhampton Universities. The experiment aimed to test how effective 4 different sports psychology techniques were compared to a control in improving performance at a simple number grid task. The test provided randomised psychological interventions to participants via video clips from Olympic sprinter, Michael Johnson. Participants played the 'Grid' game four times and their performance improvement (or not) was measured. Part of the interactive video feedback by Johnson was the first item to be filmed at Lund Point, the decommissioned block of flats in Stratford that was to become BBC TV's Olympic HQ. The experiment was launched by Michael Johnson on BBC One's The One show in May 2012.
Results:
The Stress Test The original data was combined with the data from the re-launched experiment. The findings were significant and were publicised both in peer-reviewed journals and on BBC News and Radio platforms. An All in the Mind special edition detailed some of the ‘thinking styles’ which can lead to depression. More than 30,000 cases of data were analysed to find these results.
Results:
Brain Test Britain The data from the longitudinal study was analysed and the results were featured in Nature - ‘Putting brain training to the test’. The results were publicised on BBC One's Bang Goes the Theory Television programme. The results have caused some controversy, as some researchers said the study ignored the training effects on older, less healthy subjects and that the study instruments weren't sufficiently similar to commercially available products.
Results:
The Big Personality Test Preliminary results were extracted from the initial data, which contained more than 100,000 cases. These were used to inform the subject matter of the BBC One TV programme, ‘Child of Our Time’ which aired in May 2010. Over 500,000 cases were recorded in the database by May 2013. The first peer-reviewed paper published based on these data, concerned the psychological aspects of childhood sexual abuse survivors.
Results:
A study has been published examining the geographical associations between personality and life satisfaction using over 50,000 cases from residents of London.
Results:
A further study, mapping the regional differences in personality in Great Britain, was published in PLoS One. These two studies were the basis for the BBC iWonder's interactive guide "Where in Britain would you be happiest?" The Web Behaviour Test Over 50,000 people completed the survey. The results were written up in the Journal of Information Management in 2011. The authors believed they detected differences between 'generations' in their information seeking strategies, with younger people apparently possessing "poorer working memories and are less competent at multi-tasking". The Big Money Test Over 100,000 people completed the survey. Results were written up in a series of papers. The authors found money attitudes (money as power, security, generosity or autonomy) and financial capabilities (making ends meet, keeping track, planning ahead, and staying informed) to be significant predictors of experiencing adverse financial events (ranging from denial of credit to bankruptcy). The test found clear linkages between impulse buying behaviour and serious financial problems and found impulse buying behaviour to be predicted by difficulties in emotion regulation. Combining data with the Big Personality Test, they also found impulse buying behaviour to be predicted by personality and money attitudes.
Results:
How Musical Are You? Over 150,000 people completed the test. The researchers' findings were published in PLoS One in February 2014.
Results:
The Great British Class Survey Over 150,000 cases of data plus the recruited GfK survey led to a paper in Sociology. This was synchronised with the publication in March 2013 in the BBC News website of the Great British Class Calculator. The calculator is a web application that asks seven questions from the original survey, and categorises each person into the newly found class categories, depending on their results. The calculator was extremely popular with it being used over 6.5 million times in a week. However it was misunderstood in some quarters with some people thinking it generated the results itself, rather than analysis of the original survey data. The Class Calculator spawned spoofs in popular media but also serious criticism in social science circles. The paper published by Sage Journals is the most downloaded Sociology journal paper of all time with over 23,000 downloads. The paper revealed a new class system for Britain with seven classes which described the changing stratification of modern Britain and the reduction of size of traditional class segments like the working class and the traditional middle class.
Results:
Can You Compete Under Pressure? Over 110,000 people took part, with a sample of 44,000 analysed. The results were written up in the journal Frontiers in Psychology and found that brief online psychological training could be effective in improving performance, especially the use of the 'self-talk' technique. Other findings noted the importance of emotional control and the general improvement of the control group throughout the trials, demonstrating the value of practice and of Michael Johnson's control interventions.
Technical challenges:
The BBC Lab UK platform was built using the BBC's new Forge application server, and was interconnected with other BBC online services such as BBC ID, BBC EMP, and MemCache, among others. BBC ID was employed because a signed-in service was needed to help keep participants' data secure and to prevent malicious submissions. The BBC Lab UK platform was re-built three times: first in 2010 to improve scalability; second in 2011 to reduce load times and server load; and a third time in 2012 to refactor after substantial changes in the BBC online infrastructure.
De-commission:
After BBC Multiplatform's dissolution in 2011, BBC Lab UK was one of a number of web products migrated to the new Knowledge & Learning product. Senior management decided in 2013 that all technical effort would now be spent building this product rather than supporting the Lab UK service. From 1 May 2013, the website stopped collecting experiment data, although most of the experiments still offer feedback to those who have already completed the tests. The website was permanently 'mothballed' on the 18th of March 2016.
Data deposition:
The data from the Great British Class Survey has been deposited in the UK Data Archive (University of Essex) for use by other academic researchers. Data from 'The Big Personality Test' and 'The Big Money Test' have also been deposited.
Legacy:
Learnings from the Lab UK project were used during the development of the Open University's citizen science platform nQuire, which was built in partnership with the BBC. Since its launch in 2018, nQuire has collaborated with the BBC on a number of citizen science projects including The Feel Good Test, The 2019 Gardenwatch Surveys, and a series of surveys about literature inspired by The Novels That Shaped Our World. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Resonance frequency analysis**
Resonance frequency analysis:
Resonance frequency analysis (RFA) is a method used to determine stability (the level of osseointegration) in dental implants. The stability is presented as an implant stability quotient (ISQ) value. The higher the ISQ value the higher the stability.
Utilizing RFA involves sending magnetic pulses to a small metal rod temporarily attached to the implant. As the rod vibrates, the probe reads its resonance frequency and translates it into an ISQ value.
RFA measurements are used to assess the stability of the implant immediately after placement, as well as to measure the stability during the healing time. This helps the dentist determine if further healing time (osseointegration) is needed before the prosthetic tooth is attached, as well as to identify at-risk patients with compromised bone tissue, or other risk factors.
History:
Resonance frequency analysis was first suggested as an alternative method of analyzing peri-implant bone in a scientific paper by Meredith N et al in 1996. As stated in the paper’s abstract, in measuring implant stability and osseointegration, “radiographs are of value, but a standardised technique is necessary to ensure repeatability.” The new technique tested involved connecting a small transducer (aluminum rod) to implants. Measurements showed that the resonance frequency increased in direct relation to the increase in the stiffness of the bone-implant interface, thus demonstrating a repeatable and quantitative method of assessing stability.
History:
The method underwent further research and in 1999 the Sweden-based company Osstell AB was formed to commercialize the new technique. Osstell developed a device that transmitted vibrational frequencies to a metal peg inserted in the implant and measured the frequency at which resonance was reached. Whereas Meredith et al. measured in the range of 3500–8500 kHz, Osstell developed the Implant Stability Quotient (ISQ), which translated this kHz range to a score of 1–100. Resonance frequency analysis has attracted considerable scientific interest since its advent, largely owing to the increasing number of patients demanding dental implants as the technology improves. As it is a non-invasive and objective way to evaluate short- and long-term implant viability, RFA is an increasingly utilized method.
Scientific foundations:
The method that preceded RFA, percussion or "tapping," may be used to understand the underlying functionality of RFA devices, as the same principles are at work. When an implant was percussed with a blunt instrument, the nature of the sound elicited would qualitatively indicate the level of the implant's stability. A low pitched, dull sound (low frequency) indicated a loose bond with the bone, as the vibrations moved slower across the distance between the implant and surrounding tissue. A high pitched, crystalline sound indicated a tight connection along the implant-bone interface, with vibrations moving quicker across a more restricted area. The dentist would make a qualitative assessment of the level of stability based on the sound heard. With RFA, vibrations are being used to determine stability, but on a micro scale and in a non-invasive manner. A metal peg (transducer) with a magnet top is attached to the implant. Magnetic pulses (alternating sine waves of uniform amplitude) cause the peg to vibrate, increasing steadily in pitch until the implant resonates. The higher the resonant frequency, the more stable the implant.
Measurement:
The frequency readings, translated to an Implant Stability Quotient (ISQ), are used as an assessment and ongoing monitoring tool. Medical interpretations of ISQ values may then be used to inform treatment plans, as analyzed and documented in hundreds of clinical studies. A reading of 55 or below indicates that too much lateral movement is possible, and the implant needs to reach secondary stability (greater bonding with the bone) before the prosthesis may be attached. If the resonance frequency reading increases, it signals that osseointegration is occurring. Along with other diagnostic tools, measurements over time can be used to indicate the rate of osseointegration, and treatment plans may be assigned accordingly. If the rate is initially low and does not increase, it signals the implant is not viable. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
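The guidance above (higher ISQ means higher stability, a reading of 55 or below calls for more healing time, and readings that rise over time signal osseointegration) lends itself to a simple monitoring rule. The Python sketch below is purely illustrative: the threshold comes from the text above, while the function name and trend logic are assumptions, not device documentation or clinical guidance.

```python
# Illustrative monitoring rule based on the text above: an ISQ of 55 or below
# suggests more healing time is needed, while readings that rise over time
# suggest osseointegration is occurring. Threshold from the text; function
# name and trend logic are assumptions, not clinical guidance.
def assess_implant(isq_readings):
    """isq_readings: chronological list of ISQ values on the 1-100 scale."""
    latest = isq_readings[-1]
    if latest <= 55:
        status = "needs further healing before the prosthesis is attached"
    else:
        status = "stability may be sufficient to consider attaching the prosthesis"
    if len(isq_readings) > 1 and isq_readings[-1] > isq_readings[0]:
        trend = "readings increasing: osseointegration appears to be occurring"
    else:
        trend = "readings not increasing: re-evaluate implant viability"
    return latest, status, trend

print(assess_implant([52, 58, 64]))
```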
**Subtext (programming language)**
Subtext (programming language):
Subtext is a moderately visual programming language and environment for writing application software. It is an experimental, research attempt to develop a new programming model, called Example Centric Programming, by treating copied blocks as first-class prototypes for program structure. It uses live text, similar to what occurs in spreadsheets as users update cells, for frequent feedback. It is intended to eventually be developed enough to become a practical language for daily use. It is planned to be open software; the license is not yet determined.
Subtext (programming language):
Subtext was created by Jonathan Edwards who submitted a paper on the language to OOPSLA. It was accepted as part of the 2005 conference.
Environment:
Early video previews of the Subtext environment were released circa 2006, demonstrating the semantics of Subtext programs and the close integration with the Subtext environment and runtime.
Environment:
Subtext programs are declared and manipulated (or mutated) by adding and linking elements of various types to a syntax tree, and entering in values or names as necessary, as opposed to typing out textual programs. Due to the design of the Subtext language and environment, there is no distinction between a program's representation and its execution. Like spreadsheets, Subtext programs are live executions within an environment and runtime, and programming is direct manipulation of these executions via a graphical environment. Unlike typical functional programming languages, Subtext has simple semantics and is easily applicable to reactive systems that require mutable state, I/O, and concurrency, under a model known as "Reactive Programming". Console input ("invocations") can be utilized via data flow within a Subtext program, allowing users to manipulate values interactively.
Coherence:
A continuation and subset of the Subtext language using other principles is Coherence, an experimental programming language and environment which uses a new model of change-driven computation called "coherent reaction" to coordinate the effects and side-effects of programs interactively as they are being developed. The language is specialized for interactive application software, and is being designed by the creator of Subtext, Jonathan Edwards, who reports upon its development by publishing white papers.
Coherence:
Side effects are both the essence and bane of imperative programming. The programmer must carefully coordinate actions to manage their side effects upon each other. Such coordination is complex, error-prone, and fragile. Coherent reaction is a new model of change-driven computation that coordinates effects automatically. Automatically coordinating actions lets the programmer express what to do, not when to do it.
Coherence:
State changes trigger events called reactions, which in turn change other states. A coherent execution order is one in which each reaction executes before any others that are affected by its changes. A coherent order is discovered iteratively by detecting incoherencies as they occur and backtracking their effects. The fundamental building block of Coherence is the dynamically typed mutable tree. The fundamental abstraction mechanism is the virtual tree, whose value is lazily computed, and whose behavior is generated by coherent reactions. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
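Coherence itself is not reproduced here, but the notion of a coherent order, in which each reaction runs before any other reaction affected by its changes, can be illustrated with a toy sketch. The Python below is entirely hypothetical (not Coherence code): it assumes reactions with statically declared reads and writes and no cyclic dependencies, and simply orders them with a topological sort, whereas the real model discovers the order dynamically by detecting incoherencies and backtracking.

```python
# Toy illustration only (not Coherence code): order reactions so that each
# runs before any other reaction that reads a state it writes, i.e. a
# "coherent" order in the sense described above.
from graphlib import TopologicalSorter

# hypothetical reaction name -> (states it reads, states it writes)
reactions = {
    "update_total":   ({"price", "quantity"}, {"total"}),
    "apply_discount": ({"total"},             {"final"}),
    "render_invoice": ({"final"},             {"display"}),
}

# r2 depends on r1 if r1 writes a state that r2 reads.
predecessors = {name: set() for name in reactions}
for r1, (_, writes1) in reactions.items():
    for r2, (reads2, _) in reactions.items():
        if r1 != r2 and writes1 & reads2:
            predecessors[r2].add(r1)

print(list(TopologicalSorter(predecessors).static_order()))
# -> ['update_total', 'apply_discount', 'render_invoice']
```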
**Parasitoid**
Parasitoid:
In evolutionary ecology, a parasitoid is an organism that lives in close association with its host at the host's expense, eventually resulting in the death of the host. Parasitoidism is one of six major evolutionary strategies within parasitism, distinguished by the fatal prognosis for the host, which makes the strategy close to predation.
Parasitoid:
Among parasitoids, strategies range from living inside the host (endoparasitism), allowing it to continue growing before emerging as an adult, to paralysing the host and living outside it (ectoparasitism). Hosts can include other parasitoids, resulting in hyperparasitism; in the case of oak galls, up to five levels of parasitism are possible. Some parasitoids influence their host's behaviour in ways that favour the propagation of the parasitoid.
Parasitoid:
Parasitoids are found in a variety of taxa across the insect superorder Endopterygota, whose complete metamorphosis may have pre-adapted them for a split lifestyle, with parasitoid larvae and free-living adults. Most are in the Hymenoptera, where the ichneumons and many other parasitoid wasps are highly specialised for a parasitoidal way of life. There are parasitoids, too, in the Diptera, Coleoptera and other orders of endopterygote insects. Some of these, usually but not only wasps, are used in biological pest control. The 17th century zoological artist Maria Sibylla Merian closely observed parasitoids and their hosts in her paintings. The biology of parasitoidism influenced Charles Darwin's beliefs, and has inspired science fiction authors and scriptwriters to create numerous parasitoidal aliens that kill their human hosts, such as the alien species in Ridley Scott's 1979 film Alien.
Etymology:
The term "parasitoid" was coined in 1913 by the Swedo-Finnish writer Odo Reuter, and adopted in English by his reviewer, the entomologist William Morton Wheeler. Reuter used it to describe the strategy where the parasite develops in or on the body of a single host individual, eventually killing that host, while the adult is free-living. Since that time, the concept has been generalised and widely applied.
Strategies:
Evolutionary options A perspective on the evolutionary options can be gained by considering four questions: the effect on the reproductive fitness of a parasite's hosts; the number of hosts they have per life stage; whether the host is prevented from reproducing; and whether the effect depends on intensity (number of parasites per host). From this analysis, proposed by K. D. Lafferty and A. M. Kuris, the major evolutionary strategies of parasitism emerge, alongside predation.
Strategies:
Parasitoidism, in the view of R. Poulin and H. S. Randhawa, is one of six main evolutionary strategies within parasitism, the others being parasitic castrator, directly transmitted parasite, trophically transmitted parasite, vector-transmitted parasite, and micropredator. These are adaptive peaks, with many possible intermediate strategies, but organisms in many different groups have consistently converged on these six. Parasitoids feed on a living host which they eventually kill, typically before it can produce offspring, whereas conventional parasites usually do not kill their hosts, and predators typically kill their prey immediately.
Strategies:
Basic concepts Parasitoids can be classified as either endo- or ectoparasitoids with idiobiont or koinobiont developmental strategies. Endoparasitoids live within their host's body, while ectoparasitoids feed on the host from outside. Idiobiont parasitoids prevent further development of the host after initially immobilizing it, whereas koinobiont parasitoids allow the host to continue its development while feeding upon it. Most ectoparasitoids are idiobionts, as the host could damage or dislodge the external parasitoid if allowed to move and moult. Most endoparasitoids are koinobionts, giving them the advantage of a host that continues to grow larger and avoid predators. Primary parasitoids have the simplest parasitic relationship, involving two organisms, the host and the parasitoid. Hyperparasitoids are parasitoids of parasitoids; secondary parasitoids have a primary parasitoid as their host, so there are three organisms involved. Hyperparasitoids are either facultative (can be a primary parasitoid or a hyperparasitoid depending on the situation) or obligate (always develop as a hyperparasitoid). Levels of parasitoids beyond secondary also occur, especially among facultative parasitoids. In oak gall systems, there can be up to five levels of parasitism. Cases in which two or more species of parasitoids simultaneously attack the same host without parasitizing each other are called multi- or multiple parasitism. In many cases, multiple parasitism still leads to the death of one or more of the parasitoids involved. If multiple parasitoids of the same species coexist in a single host, it is called superparasitism. Gregarious species lay multiple eggs or polyembryonic eggs which lead to multiple larvae in a single host. The end result of gregarious superparasitism can be a single surviving parasitoid individual or multiple surviving individuals, depending on the species. If superparasitism occurs accidentally in normally solitary species the larvae often fight among themselves until only one is left.
Strategies:
Influencing host behaviour In another strategy, some parasitoids influence the host's behaviour in ways that favour the propagation of the parasitoid, often at the cost of the host's life. A spectacular example is the lancet liver fluke, which causes host ants to die clinging to grass stalks, where grazers or birds may be expected to eat them and complete the parasitoidal fluke's life cycle in its definitive host. Similarly, as strepsipteran parasitoids of ants mature, they cause the hosts to climb high on grass stalks, positions that are risky, but favour the emergence of the strepsipterans. Among pathogens of mammals, the rabies virus affects the host's central nervous system, eventually killing it, but perhaps helping to disseminate the virus by modifying the host's behaviour. Among the parasitic wasps, Glyptapanteles modifies the behaviour of its host caterpillar to defend the pupae of the wasps after they emerge from the caterpillar's body. The phorid fly Apocephalus borealis oviposits into the abdomen of its hosts, including honey bees, causing them to abandon their nest, flying from it at night and soon dying, allowing the next generation of flies to emerge outside the hive.
Taxonomic range:
About 10% of described insects are parasitoids, in the orders Hymenoptera, Diptera, Coleoptera, Neuroptera, Lepidoptera, Strepsiptera, and Trichoptera. The majority are wasps within the Hymenoptera; most of the others are Dipteran flies. Parasitoidism has evolved independently many times: once each in Hymenoptera, Strepsiptera, Neuroptera, and Trichoptera, twice in the Lepidoptera, 10 times or more in Coleoptera, and no less than 21 times among the Diptera. These are all holometabolous insects (Endopterygota, which form a single clade), and it is always the larvae that are parasitoidal. The metamorphosis from active larva to an adult with a different body structure permits the dual lifestyle of parasitic larva and free-living adult in this group. These relationships are shown on the phylogenetic tree; groups containing parasitoids are shown in boldface, e.g. Coleoptera, with the number of times parasitoidism evolved in the group in parentheses, e.g. (10 clades). The approximate number (estimates can vary widely) of parasitoid species out of the total is shown in square brackets, e.g. [2,500 of 400,000].
Taxonomic range:
Hymenoptera Within the Hymenoptera, parasitoidism evolved just once, and the many described species of parasitoid wasps represent the great majority of species in the order, barring those like the ants, bees, and Vespidae wasps that have secondarily lost the parasitoid habit. The parasitoid wasps include some 25,000 Ichneumonoidea, 22,000 Chalcidoidea, 5,500 Vespoidea, 4,000 Platygastroidea, 3,000 Chrysidoidea, 2,300 Cynipoidea, and many smaller families. These often have remarkable life cycles.
Taxonomic range:
They can be classified as either endoparasitic or ectoparasitic according to where they lay their eggs. Endoparasitic wasps insert their eggs inside their host, usually as koinobionts, allowing the host to continue to grow (thus providing more food to the wasp larvae), moult, and evade predators. Ectoparasitic wasps deposit theirs outside the host's body, usually as idiobionts, immediately paralysing the host to prevent it from escaping or throwing off the parasite. They often carry the host to a nest where it will remain undisturbed for the wasp larva to feed on. Most species of wasps attack the eggs or larvae of their host, but some attack adults. Oviposition depends on finding the host and on evading host defenses; the ovipositor is a tube-like organ used to inject eggs into hosts, sometimes much longer than the wasp's body. Hosts such as ants often behave as if aware of the wasps' presence, making violent movements to prevent oviposition. Wasps may wait for the host to stop moving, and then attack suddenly. Parasitoid wasps face a range of obstacles to oviposition, including behavioural, morphological, physiological and immunological defenses of their hosts. To thwart these defenses, some wasps inundate their host with their eggs so as to overload its immune system's ability to encapsulate foreign bodies; others introduce a virus which interferes with the host's immune system.
Taxonomic range:
Some parasitoid wasps locate hosts by detecting the chemicals that plants release to defend against insect herbivores.
Taxonomic range:
Other orders The true flies (Diptera) include several families of parasitoids, the largest of which is the Tachinidae (some 9,200 species), followed by the Bombyliidae (some 4,500 species), along with the Pipunculidae and the Conopidae, which includes parasitoidal genera such as Stylogaster. Other families of flies include some protelean species. Some Phoridae are parasitoids of ants. Some flesh flies are parasitoids: for instance Emblemasoma auditrix is parasitoidal on cicadas, locating its host by sound. The Strepsiptera (twisted-wing parasites) consist entirely of parasitoids; they usually sterilise their hosts. Two beetle families, Ripiphoridae (450 species) and Rhipiceridae, are largely parasitoids, as are Aleochara (Staphylinidae); in all, some 400 staphylinids are parasitoidal. Some 1,600 species of the large and mainly free-living family Carabidae are parasitoids. A few Neuroptera are parasitoidal; they have larvae that actively search for hosts. The larvae of some Mantispidae, subfamily Symphrasinae, are parasitoids of other arthropods including bees and wasps. Although nearly all Lepidoptera (butterflies and moths) are herbivorous, a few species are parasitic. The larvae of Epipyropidae feed on Homoptera such as leafhoppers and cicadas, and sometimes on other Lepidoptera. The larvae of Cyclotornidae parasitise first Homoptera and later ant brood. The pyralid moth Chalcoela has been used in biological control of the wasp Polistes in the Galapagos Islands. Parasitism is rare in the Trichoptera (caddisflies), but it is found among the Hydroptilidae (purse-case caddisflies), probably including all 10 species in the Orthotrichia aberrans group; they parasitise the pupae of other trichopterans.
Taxonomic range:
Entomopathogenic fungi All known fungi in the genera Cordyceps and Ophiocordyceps are endoparasitic. One of the most notable fungal parasitoids is O. unilateralis, which infects carpenter ants by breaching the ants' exoskeletons via its spores and growing in the ant's hemocoel as free-living yeast cells. Eventually the yeast cells progress to producing nerve toxins to alter the behavior of the ant, causing it to climb and bite onto vegetation, known as the 'death bite'. This approach is so fine-tuned that it causes the ant to bite down on the part of the leaf most optimal for the fungus to fruit: the adaxial leaf midrib. In fact, it has been found that in specific circumstances, the time of the death bite is synchronized to solar noon. As much as 40% of the ant's biomass is fungal hyphae at the moment of the death bite. After the ant dies, the fungus produces a large stalk, growing from the back of the ant's head, which subsequently releases ascospores. These spores are too large to be wind dispersed and instead fall directly to the ground, where they produce secondary spores that infect ants as they walk over them. O. sinensis is a parasitoid as well, parasitizing ghost moth larvae and killing them within 15–25 days, a process similar to that of O. unilateralis.
Interactions with humans:
In biological pest control Parasitoids are among the most widely used biological control agents. Classic biological pest control using natural enemies of pests (parasitoids or predators) is extremely cost effective, the cost/benefit ratio for classic control being 1:250, but the technique is more variable in its effects than pesticides; it reduces rather than eliminates pests. The cost/benefit ratio for screening natural enemies is similarly far higher than for screening chemicals: 1:30 against 1:5 respectively, since the search for suitable natural enemies can be guided accurately with ecological knowledge. Natural enemies are more difficult to produce and to distribute than chemicals, as they have a shelf life of weeks at most; and they face a commercial obstacle, namely that they cannot be patented.
Interactions with humans:
From the point of view of the farmer or horticulturalist, the most important groups are the ichneumonid wasps, which prey mainly on caterpillars of butterflies and moths; braconid wasps, which attack caterpillars and a wide range of other insects including greenfly; chalcidoid wasps, which parasitise eggs and larvae of greenfly, whitefly, cabbage caterpillars, and scale insects; and tachinid flies, which parasitize a wide range of insects including caterpillars, adult and larval beetles, and true bugs. Commercially, there are two types of rearing systems: short-term seasonal daily output with high production of parasitoids per day, and long-term year-round low daily output with a range in production of 4–1000 million female parasitoids per week, to meet demand for suitable biological control agents for different crops.
Interactions with humans:
Maria Sibylla Merian Maria Sibylla Merian (1647–1717) was one of the first naturalists to study and depict parasitoids and their insect hosts in her closely-observed paintings.
Interactions with humans:
Charles Darwin Parasitoids influenced the religious thinking of Charles Darwin, who wrote in an 1860 letter to the American naturalist Asa Gray: "I cannot persuade myself that a beneficent and omnipotent God would have designedly created parasitic wasps with the express intention of their feeding within the living bodies of Caterpillars." The palaeontologist Donald Prothero notes that religiously minded people of the Victorian era, including Darwin, were horrified by this instance of evident cruelty in nature, particularly noticeable in the ichneumonid wasps.
Interactions with humans:
In science fiction Parasitoids have inspired science fiction authors and screenwriters to create terrifying parasitic alien species that kill their human hosts. One of the best-known is the Xenomorph in Ridley Scott's 1979 film Alien, which runs rapidly through its lifecycle from violently entering a human host's mouth to bursting fatally from the host's chest. The molecular biologist Alex Sercel, writing in Signal to Noise Magazine, compares "the biology of the [Alien] Xenomorphs to parasitoid wasps and nematomorph worms from Earth to illustrate how close to reality the biology of these aliens is and to discuss this exceptional instance of science inspiring artists". Sercel notes that the way the Xenomorph grasps a human's face to implant its embryo is comparable to the way a parasitoid wasp lays its eggs in a living host. He further compares the Xenomorph life cycle to that of the nematomorph Paragordius tricuspidatus which grows to fill its host's body cavity before bursting out and killing it. Alistair Dove, on the science website Deep Sea News, writes that there are multiple parallels with parasitoids, though there are in his view more disturbing life cycles in real biology. In his view, the parallels include the placing of an embryo in the host; its growth in the host; the resulting death of the host; and alternating generations, as in the Digenea (trematodes). The social anthropologist Marika Moisseeff argues that "The parasitical and swarming aspects of insect reproduction make these animals favored villains in Hollywood science fiction. The battle of culture against nature is depicted as an unending combat between humanity and insect-like extraterrestrial species that tend to parasitize human beings in order to reproduce." The Encyclopedia of Science Fiction lists many instances of "parasitism", often causing the host's death. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Rhenium trioxide**
Rhenium trioxide:
Rhenium trioxide or rhenium(VI) oxide is an inorganic compound with the formula ReO3. It is a red solid with a metallic lustre that resembles copper in appearance. It is the only stable trioxide of the Group 7 elements (Mn, Tc, Re).
Preparation and structure:
Rhenium trioxide can be formed by reducing rhenium(VII) oxide with carbon monoxide at 200 °C or elemental rhenium at 400 °C.
Re2O7 + CO → 2 ReO3 + CO2
3 Re2O7 + Re → 7 ReO3
Re2O7 can also be reduced with dioxane.
Preparation and structure:
Rhenium trioxide crystallizes with a primitive cubic unit cell, with a lattice parameter of 3.742 Å (374.2 pm). The structure of ReO3 is similar to that of perovskite (ABO3), without the large A cation at the centre of the unit cell. Each rhenium center is surrounded by an octahedron defined by six oxygen centers. These octahedra share corners to form the 3-dimensional structure. The coordination number of O is 2, because each oxygen atom has 2 neighbouring Re atoms.
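The lattice parameter quoted above allows a quick consistency check of the structure: with one ReO3 formula unit per primitive cubic cell, the crystallographic density follows directly. The short Python sketch below is an illustrative back-of-the-envelope calculation (standard atomic masses assumed), not a tabulated value.

```python
# Back-of-the-envelope crystallographic density of ReO3 from the cubic
# lattice parameter quoted above (a = 3.742 angstrom, one ReO3 formula unit
# per primitive cubic cell). Standard atomic masses are assumed.
N_A = 6.022e23                      # Avogadro constant, 1/mol
a_cm = 3.742e-8                     # lattice parameter in cm (3.742 angstrom)
molar_mass = 186.21 + 3 * 16.00     # Re + 3 O, g/mol

cell_volume = a_cm ** 3                       # volume per unit cell, cm^3
density = molar_mass / (N_A * cell_volume)    # g/cm^3 for Z = 1
print(f"estimated density ~ {density:.2f} g/cm^3")   # roughly 7.4 g/cm^3
```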
Properties:
Physical properties ReO3 is unusual for an oxide because it exhibits very low resistivity. It behaves like a metal in that its resistivity decreases as its temperature decreases. At 300 K, its resistivity is 100.0 nΩ·m, whereas at 100 K, this decreases to 6.0 nΩ·m, 17 times less than at 300 K.
Chemical properties Rhenium trioxide is insoluble in water and in dilute acids and bases. Heating it in base results in disproportionation to give ReO2 and ReO4−, while reaction with acid at high temperature affords Re2O7. In concentrated nitric acid, it yields perrhenic acid.
Upon heating to 400 °C under vacuum, it undergoes disproportionation:
3 ReO3 → Re2O7 + ReO2
Rhenium trioxide can be chlorinated to give rhenium trioxide chloride:
2 ReO3 + Cl2 → 2 ReO3Cl
Uses:
Hydrogenation catalyst Rhenium trioxide finds some use in organic synthesis as a catalyst for amide reduction. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Membrane emulsification**
Membrane emulsification:
Membrane emulsification (ME) is a relatively novel technique for producing all types of single and multiple emulsions for drug delivery systems (DDS), solid microcarriers for encapsulation of drugs or nutrients, solder particles for surface-mount technology, and monodispersed polymer microspheres (for analytical column packing, enzyme carriers, liquid crystal display spacers, and toner core particles). Membrane emulsification was introduced by Nakashima and Shimizu in the late 1980s in Japan.
Description:
In this process, the dispersed phase is forced through the pores of a microporous membrane directly into the continuous phase. Emulsified droplets are formed and detached at the end of the pores with a drop-by-drop mechanism. The advantages of membrane emulsification over conventional emulsification processes are that it enables one to obtain very fine emulsions of controlled droplet sizes and narrow droplet size distributions. Successful emulsification can be carried out with much less consumption of emulsifier and energy, and because of the lowered shear stress effect, membrane emulsification allows the use of shear-sensitive ingredients, such as starch and proteins. The membrane emulsification process is generally carried out in cross-flow (continuous or batch) mode or in a stirred cell (batch). A major limiting factor of ME was the low dispersed phase flux. In order to expand the industrial applications, the productivity of this method had to be increased. Some research has been aimed at solving this problem and others, such as membrane fouling. High dispersed phase flux has now been shown to be possible using single-pass annular gap crossflow membranes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flight recorder**
Flight recorder:
A flight recorder is an electronic recording device placed in an aircraft for the purpose of facilitating the investigation of aviation accidents and incidents. The device may often be referred to colloquially as a "black box", an outdated name which has become a misnomer—they are now required to be painted bright orange, to aid in their recovery after accidents.
Flight recorder:
There are two types of flight recording devices: the flight data recorder (FDR) preserves the recent history of the flight through the recording of dozens of parameters collected several times per second; the cockpit voice recorder (CVR) preserves the recent history of the sounds in the cockpit, including the conversation of the pilots. The two devices may be combined into a single unit. Together, the FDR and CVR objectively document the aircraft's flight history, which may assist in any later investigation.
Flight recorder:
The two flight recorders are required by international regulation, overseen by the International Civil Aviation Organization, to be capable of surviving the conditions likely to be encountered in a severe aircraft accident. For this reason, they are typically specified to withstand an impact of 3400 g and temperatures of over 1,000 °C (1,830 °F), as required by EUROCAE ED-112. They have been a mandatory requirement in commercial aircraft in the United States since 1967. After the unexplained disappearance of Malaysia Airlines Flight 370 in 2014, commentators have called for live streaming of data to the ground, as well as extending the battery life of the underwater locator beacons.
History:
Early designs One of the earliest proven attempts was made by François Hussenot and Paul Beaudouin in 1939 at the Marignane flight test center, France, with their "type HB" flight recorder; these were essentially photograph-based flight recorders, because the record was made on a scrolling photographic film 8 metres (8.7 yd) long by 88 millimetres (3.5 in) wide. The latent image was made by a thin ray of light deviated by a mirror tilted according to the magnitude of the data to be recorded (altitude, speed, etc.). A pre-production run of 25 "HB" recorders was ordered in 1941 and HB recorders remained in use in French flight test centers well into the 1970s. In 1947, Hussenot founded the Société Française des Instruments de Mesure with Beaudouin and another associate, so as to market his invention, which was also known as the "hussenograph". This company went on to become a major supplier of data recorders, used not only aboard aircraft but also aboard trains and other vehicles. SFIM is today part of the Safran group and is still present in the flight recorder market. The advantage of the film technology was that it could be easily developed afterwards and provided durable, visual feedback of the flight parameters without needing any playback device. On the other hand, unlike magnetic tapes or later flash memory-based technology, a photographic film cannot be erased and reused, and so must be changed periodically. The technology was reserved for one-shot uses, mostly during planned test flights: it was not mounted aboard civilian aircraft during routine commercial flights. Also, cockpit conversation was not recorded.
History:
Another form of flight data recorder was developed in the UK during World War II. Len Harrison and Vic Husband developed a unit that could withstand a crash and fire to keep the flight data intact. The unit was the forerunner of today's recorders, in being able to withstand conditions that aircrew could not. It used copper foil as the recording medium, with various styli, corresponding to various instruments or aircraft controls, indenting the foil. The foil was periodically advanced at set time intervals, giving a history of the aircraft's instrument readings and control settings. The unit was developed at Farnborough for the Ministry of Aircraft Production. At the war's end the Ministry got Harrison and Husband to sign over their invention to it and the Ministry patented it under British patent 19330/45.
History:
The first modern flight data recorder, called "Mata Hari", was created in 1942 by Finnish aviation engineer Veijo Hietala. This black high-tech mechanical box was able to record all important details during test flights of fighter aircraft that the Finnish army repaired or built in its main aviation factory in Tampere, Finland. During World War II both British and American air forces successfully experimented with aircraft voice recorders. In August 1943 the USAAF conducted an experiment with a magnetic wire recorder to capture the inter-phone conversations of a B-17 bomber flight crew on a combat mission over Nazi-occupied France. The recording was broadcast back to the United States by radio two days afterwards.
History:
Australian designs In 1953, while working at the Aeronautical Research Laboratories (ARL) of the Defence Science and Technology Organisation, in Melbourne, Australian research scientist David Warren conceived a device that would record not only the instrument readings, but also the voices in the cockpit. In 1954 he published a report entitled "A Device for Assisting Investigation into Aircraft Accidents". Warren built a prototype FDR called "The ARL Flight Memory Unit" in 1956, and in 1958 he built the first combined FDR/CVR prototype. It was designed with civilian aircraft in mind, explicitly for post-crash examination purposes. Aviation authorities from around the world were largely uninterested at first, but this changed in 1958 when Sir Robert Hardingham, the secretary of the British Air Registration Board, visited the ARL and was introduced to David Warren. Hardingham realized the significance of the invention and arranged for Warren to demonstrate the prototype in the UK. The ARL assigned an engineering team to help Warren develop the prototype to the airborne stage. The team, consisting of electronics engineers Lane Sear, Wally Boswell and Ken Fraser, developed a working design that incorporated a fire-resistant and shockproof case, a reliable system for encoding and recording aircraft instrument readings and voice on one wire, and a ground-based decoding device. The ARL system, made by the British firm of S. Davall & Sons Ltd, in Middlesex, was named the "Red Egg" because of its shape and bright red color. The units were redesigned in 1965 and relocated to the rear of aircraft to increase the probability of successful data retrieval after a crash. Carriage of data recording equipment became mandatory in UK-registered aircraft in two phases: new turbine-engined public transport category aircraft over 12,000 lb (5,400 kg) in weight were covered from 1965, and a further requirement in 1966 added piston-engined transports over 60,000 lb (27,000 kg) and extended the earlier requirement to all jet transports. One of the first UK uses of the data recovered from an aircraft accident was that recovered from the Royston "Midas" data recorder that was on board the British Midland Argonaut involved in the Stockport Air Disaster in 1967.
History:
US designs A flight recorder was invented and patented in the United States by Professor James J. "Crash" Ryan, a professor of mechanical engineering at the University of Minnesota from 1931 to 1963. Ryan's "Flight Recorder" patent was filed in August 1953 and approved on November 8, 1960, as US Patent 2,959,459. A second patent by Ryan for a "Coding Apparatus For Flight Recorders and the Like" is US Patent 3,075,192 dated January 22, 1963. An early prototype of the Ryan Flight Data Recorder is described in the January 2013 Aviation History article "Father of the Black Box" by Scott M. Fisher. Ryan, also the inventor of the retractable safety seat belt now required in automobiles, began working on the idea of a flight recorder in 1946, and invented the device in response to a 1948 request from the Civil Aeronautics Board aimed at establishing operating procedures to reduce air mishaps. The requirement was for a means of accumulating flight data. The original device was known as the "General Mills Flight Recorder".
History:
The benefits of the flight recorder and the coding apparatus for flight recorders were outlined by Ryan in his study entitled "Economies in Airline Operation with Flight Recorders" which was entered into the Congressional Record in 1956. Ryan's flight recorder maintained a continuing recording of aircraft flight data such as engine exhaust temperature, fuel flow, aircraft velocity, altitude, control surfaces positions, and rate of descent.
History:
A "Cockpit Sound Recorder" (CSR) was independently invented and patented by Edmund A. Boniface Jr., an aeronautical engineer at Lockheed Aircraft Corporation. He originally filed with the US Patent Office on February 2, 1961, as an "Aircraft Cockpit Sound Recorder". The 1961 invention was viewed by some as an "invasion of privacy". Subsequently Boniface filed again on February 4, 1963, for a "Cockpit Sound Recorder" (US Patent 3,327,067) with the addition of a spring-loaded switch which allowed the pilot to erase the audio/sound tape recording at the conclusion of a safe flight and landing.
History:
Boniface's participation in aircraft crash investigations in the 1940s and in the accident investigations of the loss of one of the wings at cruise altitude on each of two Lockheed Electra turboprop powered aircraft (Flight 542 operated by Braniff Airlines in 1959 and Flight 710 operated by Northwest Orient Airlines in 1961) led to his wondering what the pilots may have said just prior to the wing loss and during the descent, as well as the type and nature of any sounds or explosions that may have preceded or occurred during the wing loss. His patent was for a device for recording audio of pilot remarks and engine or other sounds to be "contained with the in-flight recorder within a sealed container that is shock mounted, fireproofed and made watertight" and "sealed in such a manner as to be capable of withstanding extreme temperatures during a crash fire". The CSR was an analog device which provided a continuous erasing/recording loop (lasting 30 or more minutes) of all sounds (explosion, voice, and the noise of any aircraft structural components undergoing serious fracture and breakage) which could be overheard in the cockpit. On November 1, 1966, Bobbie R. Allen, director of the Bureau of Safety, Civil Aeronautics Board, and John S. Leak, chief of its Technical Services Section, presented "The Potential Role of Flight Recorders in Aircraft Accident Investigation" at the AIAA/CASI Joint Meeting on Aviation Safety, Toronto, Canada.
Terminology:
The term "black box" was a World War II British phrase, originating with the development of radio, radar, and electronic navigational aids in British and Allied combat aircraft. These often-secret electronic devices were encased in non-reflective black boxes or housings. The earliest identified reference to "black boxes" occurs in a May 1945 Flight article, "Radar for Airlines", describing the application of wartime RAF radar and navigational aids to civilian aircraft: "The stowage of the 'black boxes' and, even more important, the detrimental effect on performance of external aerials, still remain as a radio and radar problem." (The term "black box" is used with a different meaning in science and engineering, describing a system exclusively by its inputs and outputs, with no information whatsoever about its inner workings.) Magnetic tape and wire voice recorders had been tested on RAF and USAAF bombers by 1943 thus adding to the assemblage of fielded and experimental electronic devices employed on Allied aircraft. As early as 1944 aviation writers envisioned use of these recording devices on commercial aircraft to aid incident investigations. When modern flight recorders were proposed to the British Aeronautical Research Council in 1958, the term "black box" was in colloquial use by experts.By 1967 when flight recorders were mandated by leading aviation countries, the expression had found its way into general use: "These so-called 'black boxes' are, in fact, of fluorescent flame-orange in colour." The formal names of the devices are flight data recorder and cockpit voice recorder. The recorders must be housed in boxes that are bright orange in color to make them more visually conspicuous in the debris after an accident.
Components:
Flight data recorder A flight data recorder (FDR; also ADR, for accident data recorder) is an electronic device employed to record instructions sent to any electronic systems on an aircraft.
Components:
The data recorded by the FDR are used for accident and incident investigation. Due to their importance in investigating accidents, these ICAO-regulated devices are carefully engineered and constructed to withstand the force of a high-speed impact and the heat of an intense fire. Contrary to the popular term "black box", the exterior of the FDR is coated with heat-resistant bright orange paint for high visibility in wreckage, and the unit is usually mounted in the aircraft's tail section, where it is more likely to survive a crash. Following an accident, the recovery of the FDR is usually a high priority for the investigating body, as analysis of the recorded parameters can often detect and identify causes or contributing factors. Modern-day FDRs receive inputs via specific data frames from the flight-data acquisition units. They record significant flight parameters, including the control and actuator positions, engine information and time of day. There are 88 parameters required as a minimum under current US federal regulations (only 29 were required until 2002), but some systems monitor many more variables. Generally each parameter is recorded a few times per second, though some units store "bursts" of data at a much higher frequency if the data begin to change quickly. Most FDRs record approximately 17–25 hours of data in a continuous loop. It is required by regulations that an FDR verification check (readout) be performed annually to verify that all mandatory parameters are recorded. Many aircraft today are equipped with an "event" button in the cockpit that can be activated by the crew if an abnormality occurs in flight. Pushing the button places a signal on the recording, marking the time of the event. Modern FDRs are typically double-wrapped in strong corrosion-resistant stainless steel or titanium, with high-temperature insulation inside. Modern FDRs are accompanied by an underwater locator beacon that emits an ultrasonic "ping" to aid in detection when submerged. These beacons operate for up to 30 days and are able to operate while immersed to a depth of up to 6,000 meters (20,000 ft).
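To make the continuous-loop recording described above concrete, the following is a minimal illustrative sketch in Python: a fixed-capacity ring buffer that silently overwrites its oldest samples, so only the most recent window of data survives. The class name, capacity, sample rate and parameter names are assumptions for illustration only and do not reflect any real FDR data format.

```python
from collections import deque

# Minimal sketch of a continuous-loop recorder: a fixed-capacity ring buffer
# that overwrites the oldest samples, so only the most recent window of data
# is retained (as with the 17-25 hour FDR loop described above).
# Capacity, sample rate and parameter names are illustrative only.

SAMPLE_RATE_HZ = 8                      # assumed sampling rate (illustrative)
WINDOW_HOURS = 25                       # retention window, per the text above
CAPACITY = SAMPLE_RATE_HZ * 3600 * WINDOW_HOURS

class LoopRecorder:
    def __init__(self, capacity=CAPACITY):
        self.buffer = deque(maxlen=capacity)   # old samples drop off the front

    def record(self, t, parameters):
        """Append one sample of (time, {parameter: value}) data."""
        self.buffer.append((t, dict(parameters)))

    def mark_event(self, t):
        """Crew 'event' button: place a marker on the recording."""
        self.buffer.append((t, {"EVENT_MARK": True}))

if __name__ == "__main__":
    rec = LoopRecorder(capacity=10)     # tiny capacity so the overwrite is visible
    for t in range(15):
        rec.record(t, {"altitude_ft": 30000 + t, "airspeed_kt": 450})
    print(len(rec.buffer))              # -> 10: only the newest samples survive
    print(rec.buffer[0][0])             # -> 5: samples 0-4 were overwritten
```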
Components:
Cockpit voice recorder A cockpit voice recorder (CVR) is a flight recorder used to record the audio environment in the flight deck of an aircraft for the purpose of investigation of accidents and incidents. This is typically achieved by recording the signals of the microphones and earphones of the pilots' headsets and of an area microphone in the roof of the cockpit. The current applicable FAA TSO is C123b, titled Cockpit Voice Recorder Equipment. Where an aircraft is required to carry a CVR and uses digital communications, the CVR is required to record such communications with air traffic control unless this is recorded elsewhere. As of 2008 it is an FAA requirement that the recording duration is a minimum of two hours. The European Aviation Safety Agency increased the recording duration to 25 hours in 2021. In 2023 the FAA proposed extending requirements to 25 hours to help in investigations like runway incursions. A standard CVR is capable of recording four channels of audio data for a period of two hours. The original requirement was for a CVR to record for 30 minutes, but this has been found to be insufficient in many cases because significant parts of the audio data needed for a subsequent investigation occurred more than 30 minutes before the end of the recording. The earliest CVRs used analog wire recording, later replaced by analog magnetic tape. Some of the tape units used two reels, with the tape automatically reversing at each end. The original was the ARL Flight Memory Unit produced in 1957 by Australian David Warren and instrument maker Tych Mirfield. Other units used a single reel, with the tape spliced into a continuous loop, much as in an 8-track cartridge. The tape would circulate and old audio information would be overwritten every 30 minutes. Recovery of sound from magnetic tape often proves difficult if the recorder is recovered from water and its housing has been breached. Thus, the latest designs employ solid-state memory and use fault tolerant digital recording techniques, making them much more resistant to shock, vibration and moisture. With the reduced power requirements of solid-state recorders, it is now practical to incorporate a battery in the units, so that recording can continue until flight termination, even if the aircraft electrical system fails.
Components:
Like the FDR, the CVR is typically mounted in the rear of the airplane fuselage to maximize the likelihood of its survival in a crash.
Combined units With the advent of digital recorders, the FDR and CVR can be manufactured in one fireproof, shockproof, and waterproof container as a combined digital cockpit voice and data recorder (CVDR). Currently, CVDRs are manufactured by L3Harris Technologies and Hensoldt, among others.
Solid-state recorders became commercially practical in 1990, having the advantage of not requiring scheduled maintenance and making the data easier to retrieve. Solid-state recording was extended to the two-hour voice recording in 1995.
Components:
Additional equipment Since the 1970s, most large civil jet transports have been additionally equipped with a "quick access recorder" (QAR). This records data on a removable storage medium. Access to the FDR and CVR is necessarily difficult because they must be fitted where they are most likely to survive an accident; they also require specialized equipment to read the recording. The QAR recording medium is readily removable and is designed to be read by equipment attached to a standard desktop computer. In many airlines, the quick access recordings are scanned for "events", an event being a significant deviation from normal operational parameters. This allows operational problems to be detected and eliminated before an accident or incident results.
Components:
A flight-data acquisition unit (FDAU) is a unit that receives various discrete, analog and digital parameters from a number of sensors and avionic systems and then routes them to the FDR and, if installed, to the QAR. Information from the FDAU to the FDR is sent via specific data frames, which depend on the aircraft manufacturer. Many modern aircraft systems are digital or digitally controlled. Very often, the digital system will include built-in test equipment which records information about the operation of the system. This information may also be accessed to assist with the investigation of an accident or incident.
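As a rough illustration of the routing just described, the sketch below packs sampled parameters into simple frames and hands identical copies to both an FDR sink and a QAR sink. The frame layout, parameter names and class names are hypothetical; real data-frame layouts are manufacturer-specific, as noted above.

```python
from dataclasses import dataclass
from typing import Dict, List

# Toy illustration of an FDAU routing parameter samples to the FDR and QAR.
# The frame layout, parameter names and rates are hypothetical; real data
# frames are manufacturer-specific, as the surrounding text explains.

@dataclass
class Frame:
    second: int
    words: Dict[str, float]        # one sampled value per parameter

class ToyFDAU:
    def __init__(self, sinks: List[list]):
        self.sinks = sinks          # e.g. [fdr_storage, qar_storage]

    def emit(self, second: int, samples: Dict[str, float]) -> None:
        frame = Frame(second, dict(samples))
        for sink in self.sinks:     # the same frame goes to every recorder
            sink.append(frame)

if __name__ == "__main__":
    fdr, qar = [], []
    fdau = ToyFDAU(sinks=[fdr, qar])
    for s in range(3):
        fdau.emit(s, {"pitch_deg": 2.5, "eng1_n1_pct": 92.0 + s})
    print(len(fdr), len(qar))       # -> 3 3: both recorders received every frame
```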
Specifications:
The design of today's FDR is governed by the internationally recognized standards and recommended practices relating to flight recorders contained in ICAO Annex 6, which makes reference to industry crashworthiness and fire protection specifications such as those found in the European Organisation for Civil Aviation Equipment documents EUROCAE ED-55, ED-56A and ED-112 (Minimum Operational Performance Specification for Crash Protected Airborne Recorder Systems). In the United States, the Federal Aviation Administration (FAA) regulates all aspects of US aviation, and cites design requirements in its Technical Standard Orders, based on the EUROCAE documents (as do the aviation authorities of many other countries).
Specifications:
Currently, EUROCAE specifies that a recorder must be able to withstand an acceleration of 3400 g (33 km/s2) for 6.5 milliseconds. This is roughly equivalent to an impact velocity of 270 knots (310 mph; 500 km/h) and a deceleration or crushing distance of 45 cm (18 in). Additionally, there are requirements for penetration resistance, static crush, high and low temperature fires, deep sea pressure, sea water immersion, and fluid immersion.
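A back-of-the-envelope kinematics check of the figures quoted above, assuming an idealized constant deceleration (the actual ED-112 test pulse is shaped differently, so the quoted equivalence is only approximate):

```python
# Rough kinematics check of the figures quoted above, assuming an idealized
# constant deceleration; the real ED-112 test pulse is not rectangular, so
# the equivalence stated in the text is only approximate.
g = 9.81                      # m/s^2
v = 270 * 0.514444            # 270 knots in m/s (~138.9 m/s)
d = 0.45                      # crushing distance in m

a = v**2 / (2 * d)            # deceleration needed to stop from v over distance d
t = 2 * d / v                 # time that stop takes

print(f"deceleration ~ {a / g:.0f} g over {t * 1000:.1f} ms")
# -> roughly 2200 g over 6.5 ms; the 3400 g / 6.5 ms test requirement
#    therefore sits comfortably above this idealized crash estimate.
```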
Specifications:
EUROCAE ED-112 (Minimum Operational Performance Specification for Crash Protected Airborne Recorder Systems) defines the minimum specification to be met for all aircraft requiring flight recorders for recording of flight data, cockpit audio, images and CNS / ATM digital messages and used for investigations of accidents or incidents. When issued in March 2003 ED-112 superseded previous ED-55 and ED-56A that were separate specifications for FDR and CVR. FAA TSOs for FDR and CVR reference ED-112 for characteristics common to both types.
Specifications:
In order to facilitate recovery of the recorder from an aircraft accident site they are required to be coloured bright yellow or orange with reflective surfaces. All are lettered "Flight recorder do not open" on one side in English and "Enregistreur de vol ne pas ouvrir" in French on the other side. To assist recovery from submerged sites they must be equipped with an underwater locator beacon which is automatically activated in the event of an accident.
Regulation:
In the investigation of the 1960 crash of Trans Australia Airlines Flight 538 at Mackay, Queensland, the inquiry judge strongly recommended that flight recorders be installed in all Australian airliners. Australia became the first country in the world to make cockpit-voice recording compulsory.
Regulation:
The United States' first CVR rules were passed in 1964, requiring all turbine and piston aircraft with four or more engines to have CVRs by March 1, 1967. As of 2008 it is an FAA requirement that the CVR recording duration is a minimum of two hours, following the NTSB recommendation that it should be increased from its previously-mandated 30-minute duration. From 2014 the United States requires flight data recorders and cockpit voice recorders on aircraft that have 20 or more passenger seats, or those that have six or more passenger seats, are turbine-powered, and require two pilots. For US air carriers and manufacturers, the National Transportation Safety Board (NTSB) is responsible for investigating accidents and safety-related incidents. The NTSB also serves in an advisory role for many international investigations not under its formal jurisdiction. The NTSB does not have regulatory authority, but must depend on legislation and other government agencies to act on its safety recommendations. In addition, 49 USC Section 1114(c) prohibits the NTSB from making the audio recordings public except by written transcript. The ARINC Standards are prepared by the Airlines Electronic Engineering Committee (AEEC). The 700 Series of standards describes the form, fit, and function of avionics equipment installed predominantly on transport category aircraft. The FDR is defined by ARINC Characteristic 747. The CVR is defined by ARINC Characteristic 757.
Regulation:
Proposed requirements Deployable recorders The NTSB recommended in 1999 that operators be required to install two sets of CVDR systems, with the second CVDR set being "deployable or ejectable". The "deployable" recorder combines the cockpit voice/flight data recorders and an emergency locator transmitter (ELT) in a single unit. The "deployable" unit would depart the aircraft before impact, activated by sensors. The unit is designed to "eject" and "fly" away from the crash site, to survive the terminal velocity of fall, to float on water indefinitely, and would be equipped with satellite technology for immediate location of the crash impact site. The "deployable" CVDR technology has been used by the US Navy since 1993. While the recommendations would involve a massive, expensive retrofit program, government funding was proposed to meet cost objections from manufacturers and airlines. Operators would get both sets of recorders (including the currently-used fixed recorder) free of charge. The cost of the second "deployable/ejectable CVDR" (or "black box") was estimated at US$30 million for installation in 500 new aircraft (about $60,000 per new commercial plane). In the United States, the proposed SAFE Act calls for implementing the NTSB 1999 recommendations. However, so far the SAFE Act legislation has failed to pass Congress, having been introduced in 2003 (H.R. 2632), in 2005 (H.R. 3336), and in 2007 (H.R. 4336). Originally the "Safe Aviation Flight Enhancement (SAFE) Act of 2003" was introduced on June 26, 2003, by Congressman David Price (D-NC) and Congressman John Duncan (R-Tenn.) in a bipartisan effort to ensure investigators have access to information immediately following commercial accidents. On July 19, 2005, a revised SAFE Act was introduced and referred to the Committee on Transportation and Infrastructure of the US House of Representatives. The bill was referred to the House Subcommittee on Aviation during the 108th, 109th, and 110th Congresses.
Regulation:
After Malaysia Airlines Flight 370 In the United States, on March 12, 2014, in response to the missing Malaysia Airlines Flight 370, David Price re-introduced the SAFE Act in the US House of Representatives. The disappearance of Malaysia Airlines Flight 370 demonstrated the limits of contemporary flight recorder technology, namely that physical possession of the flight recorder device is necessary to help investigate the cause of an aircraft incident. Considering the advances in modern communication, technology commentators called for flight recorders to be supplemented or replaced by a system that provides "live streaming" of data from the aircraft to the ground. Furthermore, commentators called for the underwater locator beacon's range and battery life to be extended, as well as the outfitting of civil aircraft with the deployable flight recorders typically used in military aircraft. Prior to MH370, the investigators of the 2009 Air France Flight 447 crash had urged that the battery life be extended as "rapidly as possible", after that crash's flight recorders went unrecovered for over a year.
Regulation:
After Indonesia AirAsia Flight 8501 On December 28, 2014, Indonesia AirAsia Flight 8501, en route from Surabaya, Indonesia, to Singapore, crashed in bad weather, killing all 155 passengers and seven crew on board. On January 8, 2015, before the recovery of the flight recorders, an anonymous ICAO representative said: "The time has come that deployable recorders are going to get a serious look." A second ICAO official said that public attention had "galvanized momentum in favour of ejectable recorders on commercial aircraft".
Regulation:
Boeing 737 MAX Live flight-data streaming, as demonstrated on the Boeing 777F ecoDemonstrator together with 20 minutes of data before and after a triggering event, could have removed the uncertainty that preceded the Boeing 737 MAX groundings following the March 2019 Ethiopian Airlines Flight 302 crash.
Regulation:
Image recorders The NTSB has asked for the installation of cockpit image recorders in large transport aircraft to provide information that would supplement existing CVR and FDR data in accident investigations. They have recommended that image recorders be placed into smaller aircraft that are not required to have a CVR or FDR. The rationale is that what is seen on an instrument by the pilots of an aircraft is not necessarily the same as the data sent to the display device. This is particularly true of aircraft equipped with electronic displays (CRT or LCD). A mechanical instrument panel is likely to preserve its last indications, but this is not the case with an electronic display. Such systems, estimated to cost less than $8,000 installed, typically consist of a camera and microphone located in the cockpit to continuously record cockpit instrumentation, the outside viewing area, engine sounds, radio communications, and ambient cockpit sounds. As with conventional CVRs and FDRs, data from such a system is stored in a crash-protected unit to ensure survivability. Since the recorders can sometimes be crushed into unreadable pieces, or even located in deep water, some modern units are self-ejecting (taking advantage of kinetic energy at impact to separate themselves from the aircraft) and also equipped with radio emergency locator transmitters and sonar underwater locator beacons to aid in their location.
Cultural references:
The artwork for the band Rammstein's album Reise, Reise is made to look like a CVR; it also includes a recording from a crash. The recording is from the last 1–2 minutes of the CVR of Japan Airlines Flight 123, which crashed on August 12, 1985, killing 520 people; JAL123 is the deadliest single-aircraft disaster in history.
Cultural references:
Members of the performing arts collective Collective:Unconscious made a theatrical presentation of a play called Charlie Victor Romeo with a script based on transcripts from CVR voice recordings of nine aircraft emergencies. The play features the famous United Airlines Flight 232 that crash-landed in a cornfield near Sioux City, Iowa, after suffering a catastrophic failure of one engine and most flight controls.
Cultural references:
Survivor, a novel by American author Chuck Palahniuk, is about a cult member who dictates his life story to a flight recorder before the plane runs out of fuel and crashes.
In stand-up comedy, many jokes have been made asking why the entire airplane isn't made out of the material used to make black boxes, given that the black box survives the crash. This is referenced in the 2001 Chris Rock movie Down to Earth, although the original joke is widely credited to George Carlin. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Brain vesicle**
Brain vesicle:
Brain vesicles are the bulge-like features of the early development of the neural tube in vertebrates. Vesicle formation begins shortly after anterior neural tube closure at about embryonic day 9.0 in the mouse and the fourth and fifth gestational week in human development. In zebrafish and chicken embryos, brain vesicles form by about 24 hours and 48 hours post-conception, respectively. Initially there are three primary brain vesicles: prosencephalon, mesencephalon, and rhombencephalon. These develop into five secondary brain vesicles – the prosencephalon is subdivided into the telencephalon and diencephalon, and the rhombencephalon into the metencephalon and myelencephalon. During these early vesicle stages, the walls of the neural tube contain neural stem cells in a region called the neuroepithelium or ventricular zone. These neural stem cells divide rapidly, driving growth of the early brain, but later, these stem cells begin to generate neurons through the process of neurogenesis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Local usage details**
Local usage details:
Local usage details (LUD) are a detailed record of local calls made and received from a particular phone number. These records are regularly available to police in the United States and Canada with a court order, and were traditionally subject to the same restrictions as telephone tapping.
LUDs may be legally used by the police without first obtaining a warrant, as determined by Smith v. Maryland (1979). Other terms for call records include CDR (call detail records) or SMDR (station message detail recordings). These terms normally apply to "raw call records" before they have been processed to apply locations and rates. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Permanganic acid**
Permanganic acid:
Permanganic acid (or manganic(VII) acid) is the inorganic compound with the formula HMnO4. This strong oxoacid has been isolated as its dihydrate. It is the conjugate acid of permanganate salts. It is the subject of few publications and its characterization as well as its uses are very limited.
Preparation and structure:
Permanganic acid is most often prepared by the reaction of dilute sulfuric acid with a solution of barium permanganate, the insoluble barium sulfate byproduct being removed by filtering: Ba(MnO4)2 + H2SO4 → 2 HMnO4 + BaSO4↓. The sulfuric acid used must be dilute; reactions of permanganates with concentrated sulfuric acid yield the anhydride, manganese heptoxide.
Preparation and structure:
Permanganic acid has also been prepared through the reaction of hydrofluorosilicic acid with potassium permanganate, through electrolysis, and through hydrolysis of manganese heptoxide, though the last route often results in explosions. Crystalline permanganic acid has been prepared at low temperatures as the dihydrate, HMnO4·2H2O. Although its structure has not been verified spectroscopically or crystallographically, HMnO4 is assumed to adopt a tetrahedral structure akin to that of perchloric acid.
Reactions:
As a strong acid, HMnO4 is deprotonated to form the intensely purple coloured permanganates. Potassium permanganate, KMnO4, is a widely used, versatile and powerful oxidising agent.
Permanganic acid solutions are unstable, and gradually decompose into manganese dioxide, oxygen, and water, with initially formed manganese dioxide catalyzing further decomposition. Decomposition is accelerated by heat, light, and acids. Concentrated solutions decompose more rapidly than dilute. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Tricalcium aluminate**
Tricalcium aluminate:
Tricalcium aluminate Ca3Al2O6, often formulated as 3CaO·Al2O3 to highlight the proportions of the oxides from which it is made, is the most basic of the calcium aluminates. It does not occur in nature, but is an important mineral phase in Portland cement.
Properties:
Pure tricalcium aluminate is formed when the appropriate proportions of finely divided calcium oxide and aluminium oxide are heated together above 1300 °C. The pure form is cubic, with unit cell dimension 1.5263 nm, and has a density of 3064 kg·m−3. It melts with decomposition at 1542 °C. The unit cell contains 8 cyclic Al6O18^18− anions, each of which can be considered to consist of 6 corner-sharing AlO4 tetrahedra. The structure of pure liquid tricalcium aluminate contains mostly AlO4 tetrahedra in an infinite network, with a slightly higher concentration of bridging oxygens than expected from the composition and around 10% unconnected AlO4 monomers and Al2O7 dimers. In Portland cement clinker, tricalcium aluminate occurs as an "interstitial phase", crystallizing from the melt. Its presence in clinker is solely due to the need to obtain liquid at the peak kiln processing temperature (1400–1450 °C), facilitating the formation of the desired silicate phases. Apart from this benefit, its effects on cement properties are mostly undesirable. It forms an impure solid solution phase, with 15-20% of the aluminium atoms replaced by silicon and iron, and with variable amounts of alkali metal atoms replacing calcium, depending upon the availability of alkali oxides in the melt. The impure form has at least four polymorphs.
Effect on cement properties:
In keeping with its high basicity, tricalcium aluminate reacts most strongly with water of all the calcium aluminates, and it is also the most reactive of the Portland clinker phases. Its hydration to phases of the form Ca2AlO3(OH) · n H2O leads to the phenomenon of "flash set" (instantaneous set), and a large amount of heat is generated. To avoid this, Portland-type cements include a small addition of calcium sulfate (typically 4-8%). Sulfate ions in solution lead to the formation of an insoluble layer of ettringite (3CaO·Al2O3·3CaSO4·32H2O) over the surface of the aluminate crystals, passivating them. The aluminate then reacts slowly to form AFm phases of general composition 3CaO·Al2O3·CaSO4·12H2O. These hydrates contribute little to strength development.
Effect on cement properties:
Tricalcium aluminate is associated with three important effects that can reduce the durability of concrete: heat release, which can cause spontaneous overheating in large masses of concrete. Where necessary, tricalcium aluminate levels are reduced to control this effect.
Effect on cement properties:
sulfate attack, in which sulfate solutions to which the concrete is exposed react with the AFm phases to form ettringite. This reaction is expansive, and can disrupt mature concrete. Where concrete is to be placed in contact with, for example, sulfate-laden ground waters, either a "sulfate-resisting" cement (with low levels of tricalcium aluminate) is used, or slag is added to the cement or to the concrete mix. The slag contributes sufficient aluminium to suppress formation of ettringite.
Effect on cement properties:
delayed ettringite formation, where concrete is cured at temperatures above the decomposition temperature of ettringite (about 65 °C). On cooling, expansive ettringite formation takes place. Because they are even more basic, the alkali-loaded polymorphs are correspondingly more reactive. Appreciable amounts (>1%) in cement make set control difficult, and the cement becomes excessively hygroscopic. The cement powder flowability is reduced, and air-set lumps tend to form. They withdraw water from gypsum on storage of the cement, leading to false set. For this reason, their formation is avoided wherever possible. It is more energetically favorable for sodium and potassium to form sulfates and chlorides in the kiln, but if insufficient sulfate ion is present, any surplus alkalis congregate in the aluminate phase. The feed and fuel in the kiln system are preferably controlled chemically to keep the sulfate and alkalis in balance. However, this stoichiometry is only maintained if there is substantial surplus oxygen in the kiln atmosphere: if "reducing conditions" set in, then sulfur is lost as SO2, and reactive aluminates start to form. This is readily monitored by tracking the clinker sulfate level on an hour-to-hour basis.
Hydration steps:
Water reacts instantly with tricalcium aluminate. Hydration likely begins during grinding of the cement clinker, owing to residual humidity and dehydration of gypsum additives. Initial contact with water causes protonation of singly bonded oxygen atoms on aluminate rings and leads to the formation of calcium hydroxide. The next steps in the sequence of the hydration reaction involve the generated hydroxide ions as strong nucleophiles, which fully hydrolyze the ring structure in combination with water.
**Discrete phase-type distribution**
Discrete phase-type distribution:
The discrete phase-type distribution is a probability distribution that results from a system of one or more inter-related geometric distributions occurring in sequence, or phases. The sequence in which each of the phases occur may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of an absorbing Markov chain with one absorbing state. Each of the states of the Markov chain represents one of the phases.
Discrete phase-type distribution:
It has a continuous-time equivalent in the phase-type distribution.
Definition:
A terminating Markov chain is a Markov chain where all states are transient, except one which is absorbing.
Definition:
Reordering the states, the transition probability matrix of a terminating Markov chain with m transient states can be written as $P = \begin{bmatrix} T & \mathbf{T}^0 \\ \mathbf{0}^\top & 1 \end{bmatrix}$, where $T$ is an $m \times m$ matrix, $\mathbf{T}^0$ and $\mathbf{0}$ are column vectors with $m$ entries, and $T\mathbf{1} + \mathbf{T}^0 = \mathbf{1}$. The transition matrix is characterized entirely by its upper-left block $T$. Definition. A distribution on {0,1,2,...} is a discrete phase-type distribution if it is the distribution of the first passage time to the absorbing state of a terminating Markov chain with finitely many states.
Characterization:
Fix a terminating Markov chain. Denote T the upper-left block of its transition matrix and τ the initial distribution.
The distribution of the first passage time to the absorbing state is denoted $\mathrm{PH}_d(\boldsymbol{\tau}, T)$ or $\mathrm{DPH}(\boldsymbol{\tau}, T)$. Its cumulative distribution function is $F(k) = 1 - \boldsymbol{\tau} T^{k} \mathbf{1}$ for $k = 1, 2, \ldots$,
and its density function is $f(k) = \boldsymbol{\tau} T^{k-1} \mathbf{T}^{0}$ for $k = 1, 2, \ldots$
It is assumed that the probability of the process starting in the absorbing state is zero. The factorial moments of the distribution are given by $E[K(K-1)\cdots(K-n+1)] = n!\,\boldsymbol{\tau}(I - T)^{-n} T^{n-1}\mathbf{1}$, where $I$ is the identity matrix of the appropriate dimension.
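As a quick numerical illustration of these formulas, the following sketch (using NumPy; the parameter values are illustrative) evaluates f(k) and F(k) for a given pair (τ, T) and checks the one-phase case against the geometric distribution.

```python
import numpy as np

# Minimal sketch of evaluating a discrete phase-type distribution DPH(tau, T)
# from the formulas above; tau, T and the example values are illustrative.

def dph_pmf(k, tau, T, T0):
    """f(k) = tau T^(k-1) T0, for k = 1, 2, ..."""
    return float(tau @ np.linalg.matrix_power(T, k - 1) @ T0)

def dph_cdf(k, tau, T):
    """F(k) = 1 - tau T^k 1."""
    ones = np.ones(T.shape[0])
    return float(1.0 - tau @ np.linalg.matrix_power(T, k) @ ones)

if __name__ == "__main__":
    # One transient phase with success probability p: the geometric special case.
    p = 0.3
    tau = np.array([1.0])           # start in the single transient state
    T = np.array([[1.0 - p]])       # stay with probability 1 - p
    T0 = np.array([p])              # exit vector: absorb with probability p

    for k in (1, 2, 5):
        assert np.isclose(dph_pmf(k, tau, T, T0), (1 - p) ** (k - 1) * p)
        assert np.isclose(dph_cdf(k, tau, T), 1 - (1 - p) ** k)
    print("matches the geometric distribution, as expected")
```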
Special cases:
Just as the continuous time distribution is a generalisation of the exponential distribution, the discrete time distribution is a generalisation of the geometric distribution, for example: Degenerate distribution, point mass at zero or the empty phase-type distribution – 0 phases.
Geometric distribution – 1 phase.
Negative binomial distribution – 2 or more identical phases in sequence (see the construction sketch after this list).
Mixed Geometric distribution – 2 or more non-identical phases, that each have a probability of occurring in a mutually exclusive, or parallel, manner. This is the discrete analogue of the Hyperexponential distribution, but it is not called the Hypergeometric distribution, since that name is in use for an entirely different type of discrete distribution. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
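The negative binomial case can be constructed explicitly as identical phases in sequence. A minimal sketch, with illustrative values of J and p, and SciPy used only as a cross-check:

```python
import numpy as np
from scipy.stats import nbinom   # used only to cross-check the construction

# Illustrative sketch: J identical geometric phases in sequence give a
# negative binomial total number of trials until the J-th success.
J, p = 3, 0.4
T = np.diag([1 - p] * J) + np.diag([p] * (J - 1), k=1)  # advance to the next phase with prob p
T0 = np.zeros(J); T0[-1] = p                             # absorb only from the last phase
tau = np.zeros(J); tau[0] = 1.0                          # start in phase 1

k = 7
pmf = float(tau @ np.linalg.matrix_power(T, k - 1) @ T0)
print(pmf, nbinom.pmf(k - J, J, p))                      # the two values agree
```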
**Indefinite monism**
Indefinite monism:
Indefinite monism is a philosophical conception of reality that asserts that only awareness is real and that the wholeness of reality can be conceptually thought of in terms of immanent and transcendent aspects. The immanent aspect is denominated simply as awareness, while the transcendent aspect is referred to as omnific awareness. Awareness in this system is not equivalent to consciousness. Rather, awareness is the venue for consciousness, and the transcendent aspect of reality, omnific awareness, is what consciousness is of. In this system, what is real is distinguished from that which exists by showing that everything that we are conscious of exists but is not real, since it is contingent upon awareness for its existence. Awareness is the source of its own energetic display, its omneity. Rather than leading to a solipsistic account of reality, it is claimed through an analysis of consciousness that it is an error on our part to conceive of individuated awareness, that error being found in a conflation of the objects of consciousness with the subject of consciousness within an assumed form of reality of separate physical things. Proceeding from the one necessarily true and unquestionable fact – that we are present to our experiences – an understanding of reality is developed that is neither a materialist nor an idealist conceptualization. This way of viewing the world is referred to as surjective, a metaphorical use of a concept found in mathematical set theory that means a function that works upon every member of a set, where awareness is the function and omnific awareness is the set, in order to distinguish this position from both subjectivity and objectivity.
Indefinite monism:
Within this system anything whatsoever can arise from omnific awareness, thus the use of the term “indefinite” in labeling this monism. What does arise as the existents that we are conscious of is conditioned by the affections of awareness for its display. Thus this system does away with the idea of an active, creative force called free will and replaces it with an active volitional component known as affections, that does not itself create anything, whether movement or structure, but instead, constrains the possibilities of what arises naturally. Arguably, the concept of free will necessitates a world of separation as it implies an actor and that which is acted upon. In this conception there is no such separation. Yet our intuitive modeling of the existents of reality as arising from natural processes, as well as our intuitive understanding that we can ‘cause’ things to happen by our ‘will’, are both cleanly supported.
Indefinite monism:
The distinction between physical phenomena and mental phenomena is also removed by this system. Omnific awareness gives rise to everything – thus the use of the term omnific – and this includes thoughts that phenomenally arise in brains as well as existents that arise phenomenally as things in the world. By removing this distinction this system cuts off the inevitable paradoxes that otherwise arise in philosophical systems. The implications of this move create a number of novel, but necessary, modifications in current categorizations of ideas about reality and our study of it. For instance, ontology – the study of being – is necessitated by the assumption of a physical world of separate things, but when viewed surjectively ontology collapses into epistemology – the study of the methods or grounds of knowledge. Similarly, by removing the distinction between mental and physical phenomena the tensions created in dualist understandings of reality of how the mental and physical interact with one another are dispelled. Surprisingly, the removal of this distinction also completely removes the need for claims of metaphysical realms of being or metaphysical processes, thus collapsing all of reality into this reality.
Indefinite monism:
The implications of this view of reality are carried as far as ethics, where the lack of separation between awareness and that which it gives rise to necessitate a far-reaching adjustment in our ethical beliefs. One such difference that is highlighted for instance is that all conscious beings, which are called “knowings” in deference to this new conception of reality, are qualitatively the same; thus our current distinction between “human beings” and “animals” is based upon a false dichotomy, and this new understanding will necessitate an adjustment in our ideas of who or what can be “property” and who or what can be a “person.” | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Glacial erratic**
Glacial erratic:
A glacial erratic is glacially deposited rock differing from the type of rock native to the area in which it rests. Erratics, which take their name from the Latin word errare ("to wander"), are carried by glacial ice, often over distances of hundreds of kilometres. Erratics can range in size from pebbles to large boulders such as Big Rock (16,500 tonnes or 18,200 short tons) in Alberta.
Glacial erratic:
Geologists identify erratics by studying the rocks surrounding the position of the erratic and the composition of the erratic itself. Erratics are significant because: They can be transported by glaciers, and they are thereby one of a series of indicators which mark the path of prehistoric glacier movement. Their lithological origin can be traced to the parent bedrock, allowing for confirmation of the ice flow route.
Glacial erratic:
They can be transported by ice rafting. This allows quantification of the extent of glacial flooding resulting from ice dam failures which release the waters stored in proglacial lakes such as Lake Missoula. Erratics released by ice rafts that were stranded and subsequently melted, dropping their load, allow characterization of the high-water marks for transient floods in areas like temporary Lake Lewis.
Glacial erratic:
Erratics dropped by icebergs melting in the ocean can be used to track Antarctic and Arctic-region glacial movements for periods prior to record retention. Also known as dropstones, these can be correlated with ocean temperatures and levels to better understand and calibrate models of the global climate.
Formation of erratics:
The term "erratic" is commonly used to refer to erratic blocks, which geologist Archibald Geikie describes as: "large masses of rock, often as big as a house, that have been transported by glacier ice, and have been lodged in a prominent position in the glacier valleys or have been scattered over hills and plains. And examination of their mineralogical character leads the identification of their sources...". In geology, an erratic is material moved by geologic forces from one location to another, usually by a glacier.
Formation of erratics:
Erratics are formed by glacial ice erosion resulting from the movement of ice. Glaciers erode by multiple processes: abrasion/scouring, plucking, ice thrusting and glacially induced spalling. Glaciers crack pieces of bedrock off in the process of plucking, producing the larger erratics. In an abrasion process, debris in the basal ice scrapes along the bed, polishing and gouging the underlying rocks, similar to sandpaper on wood, producing smaller glacial till. In ice thrusting, the glacier freezes to its bed, then as it surges forward, it moves large sheets of frozen sediment at the base along with the glacier. Glacially induced spalling occurs when ice lens formation within the rocks below the glacier spalls off layers of rock, providing smaller debris which is ground into the glacial basal material to become till.
Formation of erratics:
Evidence supports another option for the creation of erratics as well: rock avalanches onto the upper surface of the glacier (supraglacial). Rock avalanche–supraglacial transport occurs when the glacier undercuts a rock face, which fails by avalanche onto the upper surface of the glacier. The characteristics of rock avalanche–supraglacial transport include: Monolithologic composition – a cluster of boulders of similar composition is frequently found in close proximity. Commingling of the multiple lithologies normally present throughout the glaciated basin has not occurred.
Formation of erratics:
Angularity – the supraglacially transported rocks tend to be rough and irregular, with no sign of subglacial abrasion. The sides of boulders are roughly planar, suggesting that some surfaces may be original fracture planes.
Great size – the size distribution of the boulders tends to be skewed toward larger boulders than those produced subglacially.
Surficial positioning of the boulders – the boulders are positioned on the surface of glacial deposits, as opposed to partially or totally buried.
Restricted areal extents – the boulder fields tend to have limited areal extent; the boulders cluster together, consistent with the boulders landing on the surface of the glacier and subsequently deposited on top of the glacial drift.
Orientations – the boulders may be close enough that original fracture planes can be matched.
Locations of the boulder trains – the boulders appear in rows, trains or clusters along the lateral moraines as opposed to being located on the terminal moraine or in the general glacial field.
Formation of erratics:
Glacier-borne erratic Erratics provide an important tool in characterizing the directions of glacier flows, which are routinely reconstructed using a combination of moraines, eskers, drumlins, meltwater channels and similar data. Erratic distributions and glacial till properties allow for identification of the source rock from which they derive, which confirms the flow direction, particularly when the erratic source outcrop is unique to a limited locality. Erratic materials may be transported by multiple glacier flows prior to their deposition, which can complicate the reconstruction of the glacial flow.
Formation of erratics:
Ice-rafted erratic Glacial ice entrains debris of varying sizes from small particles to extremely large masses of rock. This debris is transported to the coast by glacier ice and released during the production, drift and melting of icebergs. The rate of debris release by ice depends upon the size of the ice mass in which it is carried as well as the temperature of the ocean through which the ice floe passes.
Formation of erratics:
Sediments from the late Pleistocene period lying on the floor of the North Atlantic show a series of layers (referred to as Heinrich layers) which contain ice-rafted debris. They were formed between 14,000 and 70,000 years before the present. The deposited debris can be traced back to its origin by both the nature of the materials released and the continuous path of debris release. Some paths extend more than 3,000 kilometres (1,900 mi) from the point at which the ice floes originally broke free. The location and altitude of ice-rafted boulders relative to the modern landscape has been used to identify the highest level of water in proglacial lakes (e.g. Lake Musselshell in central Montana) and temporary lakes (e.g. Lake Lewis in Washington state). Ice-rafted debris is deposited when the iceberg strands on the shore and subsequently melts, or drops out of the ice floe as it melts. Hence all erratic deposits are deposited below the actual high water level of the lake; however, the measured altitude of ice-rafted debris can be used to estimate the lake surface elevation.
Formation of erratics:
This is accomplished by recognizing that on a fresh-water lake, the iceberg floats until the volume of its ice-rafted debris exceeds 5% of the volume of the iceberg. Therefore, a correlation between the iceberg size and the boulder size can be established. For example, a 1.5-metre-diameter (5 ft) boulder can be carried by a 3-metre-high (10 ft) iceberg and could be found stranded at higher elevations than a 2-metre (7 ft) boulder, which requires a 4-metre-high (13 ft) iceberg.
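A back-of-the-envelope application of this 5% rule, assuming an idealized spherical boulder and a cubic iceberg (so the numbers are only indicative):

```python
import math

# Back-of-envelope sketch of the 5% flotation rule described above: an iceberg
# can carry rock debris up to about 5% of its own volume, so a given boulder
# needs an iceberg of at least ~20x its volume. Shapes are idealized (spherical
# boulder, cubic iceberg), so the figures are only indicative.

def min_iceberg_edge(boulder_diameter_m, debris_fraction=0.05):
    boulder_volume = (math.pi / 6) * boulder_diameter_m ** 3   # sphere volume
    iceberg_volume = boulder_volume / debris_fraction          # 20x the boulder
    return iceberg_volume ** (1 / 3)                           # edge of an equivalent cube

for d in (1.5, 2.0):
    print(f"{d} m boulder -> iceberg roughly {min_iceberg_edge(d):.1f} m on a side")
# -> roughly 3.3 m and 4.4 m respectively, broadly in line with the 3 m and
#    4 m iceberg heights quoted in the text above.
```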
Formation of erratics:
Large erratics Large erratics consisting of slabs of bedrock that have been lifted and transported by glacier ice to subsequently be stranded above thin glacial or fluvioglacial deposits are referred to as glacial floes, rafts (schollen) or erratic megablocks. Erratic megablocks have typical length to thickness ratios on the order of 100 to 1. These megablocks may be found partially exposed or completely buried by till and are clearly allochthonous, since they overlie glacial till. Megablocks can be so large that they are mistaken for bedrock until underlying glacial or fluvial sediments are identified by drilling or excavation. Such erratic megablocks greater than 1 square kilometre (250 acres) in area and 30 metres (98 ft) in thickness can be found on the Canadian Prairies and in Poland, England, Denmark and Sweden. One erratic megablock located in Saskatchewan is 30 by 38 kilometres (19 mi × 24 mi) (and up to 100 metres or 330 feet thick). Their sources can be identified by locating the bedrock from which they were separated; several rafts from Poland and Alberta were determined to have been transported over 300 kilometres (190 mi) from their source.
Formation of erratics:
Nonglacial erratics In geology an erratic is any material which is not native to the immediate locale but has been transported from elsewhere. The most common examples of erratics are associated with glacial transport, either by direct glacier-borne transport or by ice rafting. However, other erratics have been identified as the result of transport by kelp holdfasts, which have been documented to move rocks up to 40 centimetres (16 in) in diameter, of rocks entangled in the roots of drifting logs, and even of stones accumulated in the stomachs of pinnipeds during foraging.
History:
During the 18th century, erratics were deemed a major geological paradox. Geologists identify erratics by studying the rocks surrounding the position of the erratic and the rock of the erratic itself. Erratics were once considered evidence of a biblical flood, but in the 19th century scientists gradually came to accept that erratics pointed to an ice age in Earth's past. Among others, the Swiss politician, jurist and theologian Bernhard Friedrich Kuhn saw glaciers as a possible solution as early as 1788. However, the idea of ice ages and glaciation as a geological force took a while to be accepted. Ignaz Venetz (1788–1859), a Swiss engineer, naturalist and glaciologist was one of the first scientists to recognize glaciers as a major force in shaping the earth.
History:
In the 19th century, many scientists came to favor erratics as evidence for the end of the Last Glacial Maximum (ice age) 10,000 years ago, rather than a flood. Geologists have suggested that landslides or rockfalls initially dropped the rocks on top of glacial ice. The glaciers continued to move, carrying the rocks with them. When the ice melted, the erratics were left in their present locations.
History:
Charles Lyell's Principles of Geology (v. 1, 1830) provided an early description of the erratic which is consistent with the modern understanding. Louis Agassiz was the first to scientifically propose that the Earth had been subject to a past ice age; in the same year as that proposal, he was elected a foreign member of the Royal Swedish Academy of Sciences. Prior to this proposal, Goethe, de Saussure, Venetz, Jean de Charpentier, Karl Friedrich Schimper and others had made the glaciers of the Alps the subjects of special study, and Goethe, Charpentier and Schimper had even arrived at the conclusion that the erratic blocks of alpine rocks scattered over the slopes and summits of the Jura Mountains had been moved there by glaciers.
History:
Charles Darwin published extensively on geologic phenomena including the distribution of erratic boulders. In his accounts written during the voyage of HMS Beagle, Darwin observed a number of large erratic boulders of notable size south of the Strait of Magellan, Tierra del Fuego and attributed them to ice rafting from Antarctica. Recent research suggests that they are more likely the result of glacial ice flows carrying the boulders to their current locations.
Examples:
Glacier-borne erratics Australia Exhumed erratics eroded from unconsolidated 270 Ma Permian glacial sediments can be found on the beach and in the park at Hallett Cove Conservation Park just south of Adelaide, and in other South Australian locations, such as Inman Valley.
Canada Big Rock near Okotoks, Alberta, Canada. It is the largest erratic in the Foothills Erratics Train.
Bleasdell Boulder, southern Ontario, was described as "glacial" in origin by Reverend William Bleasdell in 1872.
The Foothills Erratics Train is a deposit of rocks of many sizes. These deposits stretch in a narrow belt for about 600 kilometres (370 mi) from Alberta's Athabasca River Valley to the southwest of the province.
White Rock, British Columbia gets its name from a coastal erratic the size of a garage found on the beach at Semiahmoo Bay, right at the border with Washington.
Examples:
Boulder in Green Timbers Urban Forest in Surrey, British Columbia, described as a glacial erratic on the city website. Estonia Ehalkivi (Sunset Glow Boulder) near Letipea, Estonia is the largest erratic boulder in the glaciation area of North Europe. Height 7 m, circumference 48.2 m, a volume of 930 m3 and a mass of approx 2,500 tonnes. Finland Kukkarokivi, located close to Turku at the Ruissalo island in Southwest Finland. It is the largest in Finland; length about 40 m, width about 30 m, height 12 m, weight about 36,000 tonnes.
Examples:
Germany Colossus of Ostermunzel, Lower Saxony. Der Alte Schwede, found during dredging of the river Elbe near Hamburg in 1999; oldest in Germany. Giebichenstein, Stöckse, Lower Saxony. Glacial erratics on and around Rügen. Republic of Ireland The Clonfinlough Stone, located in central Ireland, is covered with Bronze Age and medieval carvings. Latvia Nīcgale Great Stone, located in Nīcgale Parish. Brodu quarry stone, located in Jēkabpils. Lauču stone, located in Lauči, Skulte parish, Limbaži municipality. Believed to have separated from a glacier in the Vyborg area of Southern Finland and Russia.
Examples:
Lithuania Puntukas, one of the national symbols of Lithuania, near Anykščiai city. Poland Trygław, Tychowo. Devil Stone, Kashubia. United Kingdom England The Crosby Erratic, Coronation Park, Liverpool, England; unearthed in a field nearby in 1898.
The Great Stone of Fourstones, at the county boundary between North Yorkshire and Lancashire, has fifteen steps carved in its side to enable it to be climbed. The Hitching Stone in North Yorkshire. It is the largest erratic block in the county.
The Merton Stone, Merton, Norfolk The Norber erratics in the Yorkshire Dales are one of England's finest sets of glacial erratics.
Soulbury Stone, located in Soulbury, Buckinghamshire. Scotland Jim Crow Rock, a glacial erratic in Hunters Quay, situated on the foreshore of the Firth of Clyde. The rock has been the subject of controversy because of an allegedly racist face painted on it.
Northern Ireland Cloughmore, near Rostrevor in County Down, Northern Ireland, is a glacial erratic found on the mountain high above the village. Located on the slopes of Slieve Martin, the stone overlooks Carlingford Lough and the Cooley Peninsula.
United States Bubble Rock, perched on the edge of a cliff near the summit of South Bubble mountain in Acadia National Park, Maine.
Doane Rock, the largest exposed boulder in Cape Cod, Massachusetts.
Fantastic Erratic, a fern-covered erratic the size of a two-car garage, is found on Cougar Mountain near Seattle.
Glen Rock, a boulder weighing 570 short tons (520 t) in Glen Rock, New Jersey, believed to have been carried to the site by a glacier that picked up the rock 15,000 years ago near Peekskill, New York.
Indian Rock in Montebello, New York is a large glacial erratic boulder of granite gneiss, formed in the Proterozoic (Precambrian) era, 1.2 billion to 800 million years ago. It is estimated to weigh ≈17,300 tons.
Madison Boulder, a 5,000-short-ton (4,500 t) glacial erratic the size of a large house in Madison, New Hampshire.
Olmsted Point in Yosemite National Park is noteworthy for having granite hills covered in numerous small glacial erratics.
Plymouth Rock, the site in Plymouth, Massachusetts on which the Mayflower Pilgrims landed in 1620. It is an important symbol in American history.
Rollstone Boulder, a 110-ton porphyritic granite boulder that was originally located at the summit of Rollstone Hill in Fitchburg, Massachusetts. It was carried by the last glaciation from central New Hampshire. Threatened by quarrying operations, it was moved to Litchfield Park in downtown Fitchburg in 1929–1930.
The northern portion of the town of Waterville, Washington has a large number of large basalt erratics, particularly along the moraine running east–west from McNeil Canyon.
Tripod Rock in Kinnelon, New Jersey is noteworthy for being perched on three smaller boulders.
Balance Rock in Princeton, Massachusetts is located at the base of Mount Wachusett, on the northwest side.
Examples:
Flood-borne erratics If glacial ice is "rafted" by a flood such as that created when the ice dam broke during the Missoula floods, then the erratics are deposited where the ice finally releases its debris load. One of the more unusual examples is found far from its origin in Idaho at Erratic Rock State Natural Site just outside McMinnville, Oregon. The park includes a 40-short-ton (36 t) specimen, the largest erratic found in the Willamette Valley. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Von Hippel–Lindau disease**
Von Hippel–Lindau disease:
Von Hippel–Lindau disease (VHL), also known as Von Hippel–Lindau syndrome, is a rare genetic disorder with multisystem involvement. It is characterized by visceral cysts and benign tumors with potential for subsequent malignant transformation. It is a type of phakomatosis that results from a mutation in the Von Hippel–Lindau tumor suppressor gene on chromosome 3p25.3.
Signs and symptoms:
Signs and symptoms associated with VHL disease include headaches, problems with balance and walking, dizziness, weakness of the limbs, vision problems, and high blood pressure.
Signs and symptoms:
Conditions associated with VHL disease include angiomatosis, hemangioblastomas, pheochromocytoma, renal cell carcinoma, pancreatic cysts (pancreatic serous cystadenoma), endolymphatic sac tumor, and bilateral papillary cystadenomas of the epididymis (men) or broad ligament of the uterus (women). Angiomatosis occurs in 37.2% of patients presenting with VHL disease and usually occurs in the retina. As a result, loss of vision is very common. However, other organs can be affected: strokes, heart attacks, and cardiovascular disease are common additional symptoms. Approximately 40% of VHL disease cases present with CNS hemangioblastomas, and these tumours are eventually present in around 60–80% of cases. Spinal hemangioblastomas are found in 13–59% of VHL disease cases and are relatively specific to the condition, as 80% of spinal hemangioblastomas occur in the context of VHL disease. Although all of these tumours are common in VHL disease, around half of cases present with only one tumour type.
Pathogenesis:
The disease is caused by mutations of the Von Hippel–Lindau tumor suppressor (VHL) gene on the short arm of chromosome 3 (3p25-26). There are over 1500 germline mutations and somatic mutations found in VHL disease.
Pathogenesis:
Every cell in the body has 2 copies of every gene (bar those found in the sex chromosomes, X and Y). In VHL disease, one copy of the VHL gene has a mutation and produces a faulty VHL protein (pVHL). However, the second copy still produces a functional protein. The condition is inherited in an autosomal dominant manner - one copy of the faulty gene is sufficient to increase the risk of developing tumours. Approximately 20% of cases of VHL disease are found in individuals without a family history, known as de novo mutations. An inherited mutation of the VHL gene is responsible for the remaining 80 percent of cases. 30-40% of mutations in the VHL gene consist of 50-250kb deletion mutations that remove either part of the gene or the whole gene and flanking regions of DNA. The remaining 60-70% of VHL disease is caused by the truncation of pVHL by nonsense mutations, indel mutations or splice site mutations.
Pathogenesis:
VHL protein The VHL protein (pVHL) is involved in the regulation of a protein known as hypoxia inducible factor 1α (HIF1α). This is a subunit of a heterodimeric transcription factor that at normal cellular oxygen levels is highly regulated. In normal physiological conditions, pVHL recognizes and binds to HIF1α only when oxygen is present due to the post-translational hydroxylation of 2 proline residues within the HIF1α protein. pVHL is an E3 ligase that ubiquitinates HIF1α and causes its degradation by the proteasome. In low oxygen conditions or in cases of VHL disease where the VHL gene is mutated, pVHL does not bind to HIF1α. This allows the subunit to dimerise with HIF1β and activate the transcription of a number of genes, including vascular endothelial growth factor, platelet-derived growth factor B, erythropoietin and genes involved in glucose uptake and metabolism. Novel missense mutations in the VHL gene (c.194C>T, c.239G>A, c.278G>A, c.319C>G and c.337C>G), which lead to the protein variants p.Ala65Val, p.Gly80Asp, p.Gly93Glu, p.Gln107Glu and p.Gln113Glu, have been reported to contribute to renal clear cell carcinoma.
Diagnosis:
The detection of tumours specific to VHL disease is important in the disease's diagnosis. In individuals with a family history of VHL disease, one hemangioblastoma, pheochromocytoma or renal cell carcinoma may be sufficient to make a diagnosis. As all the tumours associated with VHL disease can be found sporadically, at least two tumours must be identified to diagnose VHL disease in a person without a family history. Genetic diagnosis is also useful in VHL disease diagnosis. In hereditary VHL disease, techniques such as the Southern blot and gene sequencing can be used to analyse DNA and identify mutations. These tests can be used to screen family members of those afflicted with VHL disease; de novo cases that produce genetic mosaicism are more difficult to detect because mutations are not found in the white blood cells that are used for genetic analysis.
Diagnosis:
Classification VHL disease can be subdivided according to the clinical manifestations, although these groups often correlate with certain types of mutations present in the VHL gene.
Treatment:
Early recognition and treatment of specific manifestations of VHL can substantially decrease complications and improve quality of life. For this reason, individuals with VHL disease are usually screened routinely for retinal angiomas, CNS hemangioblastomas, clear-cell renal carcinomas and pheochromocytomas. CNS hemangioblastomas are usually surgically removed if they are symptomatic. Photocoagulation and cryotherapy are usually used for the treatment of symptomatic retinal angiomas, although anti-angiogenic treatments may also be an option. Renal tumours may be removed by a partial nephrectomy or other techniques such as radiofrequency ablation. Belzutifan is a drug under investigation for the treatment of von Hippel–Lindau disease-associated renal cell carcinoma.
Epidemiology:
VHL disease has an incidence of one in 36,000 births. There is over 90% penetrance by the age of 65. Age at diagnosis varies from infancy to age 60–70 years, with an average patient age at clinical diagnosis of 26 years.
History:
The German ophthalmologist Eugen von Hippel first described angiomas in the eye in 1904. Arvid Lindau described the angiomas of the cerebellum and spine in 1927. The term Von Hippel–Lindau disease was first used in 1936; however, its use became common only in the 1970s.
Notable cases:
Some descendants of the McCoy family (involved in the Hatfield-McCoy feud of Appalachia, USA) are presumed to have VHL. In an article appearing in the Associated Press, it has been speculated by a Vanderbilt University endocrinologist that the hostility underlying the Hatfield–McCoy feud may have been partly due to the consequences of Von Hippel–Lindau disease. The article suggests that the McCoy family was predisposed to bad tempers because many of them had a pheochromocytoma that produced excess adrenaline and a tendency toward explosive tempers. An update of the Associated Press article in 2023 carries more details.
Nomenclature:
Other uncommon names are: angiomatosis retinae, familial cerebello-retinal angiomatosis, cerebelloretinal hemangioblastomatosis, Hippel Disease, Hippel–Lindau syndrome, HLS, VHL, Lindau disease or retinocerebellar angiomatosis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Gypsum Springs Formation**
Gypsum Springs Formation:
The Gypsum Springs Formation is a stratigraphical unit of Middle Jurassic age in the Williston Basin.
It takes its name from Gypsum Springs in Wyoming, and was first described in outcrop in Fremont County by J.D. Love in 1939.
Lithology:
The Gypsum Springs Formation is composed of massive white gypsum in the lower part, and alternating gypsum, red shale, dolomite and limestone in the upper part.
Distribution:
The Gypsum Springs Formation reaches a maximum thickness of 76 metres (250 ft) in central Wyoming. It occurs from the Black Hills in South Dakota through Wyoming and into southern Saskatchewan.
Relationship to other units:
It is equivalent to the upper part of the Watrous Formation and the lower part of the Gravelbourg Formation in Saskatchewan. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**IBM SecureWay Directory**
IBM SecureWay Directory:
IBM SecureWay Directory was the first directory server offering from IBM. Its latest release is called IBM Tivoli Directory Server. The SecureWay Directory name was retained until Release 5.1, when the product became known as IBM Directory Server. In the next release of the product, i.e. Release 5.2, the name was changed again to align with the IBM Tivoli Framework family, and the product became known as IBM Tivoli Directory Server. The latest release offered (as of July 2007) is ITDS 6.1. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Grounding in communication**
Grounding in communication:
Grounding in communication is a concept proposed by Herbert H. Clark and Susan E. Brennan. It comprises the collection of "mutual knowledge, mutual beliefs, and mutual assumptions" that is essential for communication between two people. Successful grounding in communication requires parties "to coordinate both the content and process". The concept is also common in philosophy of language.
Elements of theory:
Grounding in conversation Grounding in communication theory has described conversation as a form of collaborative action. While grounding in communication theory has been applied to mediated communication, the theory primarily addresses face-to-face conversation. Groups working together will ground their conversations by coming up with common ground or mutual knowledge. The members will utilize this knowledge in order to contribute to a more efficient dialogue. Grounding criterion is the mutual belief between conversational partners that everyone involved has a clear enough understanding of the concept to move forward. Clark and Schaefer (1989) found that, to reach this grounding criterion, groups use three methods of establishing that they understand each other well enough to move forward.
Elements of theory:
New contribution: A partner moves forward with a new idea, and waits to see if their partner expresses confusion.
Assertion of acceptance: The partner receiving the information asserts that they understand by smiling, nodding, or verbally confirming the other partner. They may also assert their understanding by remaining silent.
Request for clarification: The partner receiving the information asks for clarification.
Elements of theory:
Phases in grounding The parties engaging in grounding exchange information over what they do or do not understand over the course of a communication, and they will continue to clarify concepts until they have agreed on the grounding criterion. There are generally two phases in grounding: the presenting utterance – the speaker presents an utterance to the addressee – and the accepting utterance – the addressee accepts the utterance by providing evidence of understanding. According to this theory, mere utterance qualifies as presentation in conversation, whereas contribution to conversation demands both utterance and assurance of mutual understanding. The presentation phase can become complex when meanings are embedded or repairs are made to utterances. An example of a repair is intending to ask "Do you and your husband have a car," but instead producing the messier "now, – um do you and your husband have a j-car". The acceptance phase often clarifies any ambiguities with grounding. For example: Presentation phase: Alan: Now, – um do you and your husband have a j-car Acceptance phase: Barbara: – have a car? Alan: Yeah Barbara: No. The acceptance phase is completed once Barbara indicates that the answer is "no" and Alan accepts it as a valid answer.
Elements of theory:
Evidence in conversation Grounding theory identifies three common types of evidence in conversation: acknowledgements, relevant next turn, and continued attention. Acknowledgements refer to back channel modes of communication that affirm and validate the messages being communicated. Some examples of these include "uh huh," "yeah," "really," and head nods that act as continuers. They are used to signal that a phrase has been understood and that the conversation can move on.
Elements of theory:
Relevant next turn refers to the initiation or invitation to respond between speakers, including verbal and nonverbal prompts for turn-taking in conversation. Questions and answers act as adjacency pairs: the first part of the conversation is relevant to the second part, meaning that a relevant utterance needs to be made in response to the question in order for it to be accepted. For example: Miss Dimple: Where can I get a hold of you? Chico: I don't know, lady. You see, I'm very ticklish.
Elements of theory:
Miss Dimple: I mean, where do you live? Chico: I live with my brother. Chico is revealing that he did not understand Miss Dimple's first question. She then corrects her phrase after realizing Chico's utterance wasn't an appropriate response, and they continue to communicate with adjacency pairs.
Elements of theory:
Continued attention is the "mutual belief that addressees have correctly identified a referent." Partners involved in a conversation usually demonstrate this through eye gaze. One can capture their partner's gaze and attention by beginning an utterance. Attention that is undisturbed and not interrupted is an example of positive evidence of understanding. However, if one of the partners turns away or looks puzzled, these are signs that indicate there is no longer continued attention.
Elements of theory:
More evidence for grounding comes from a study done in 2014, in which dialogue between humans and robots was studied. The complexity of human-robot dialogue arises from the difference between the human's idea of what the robot has internalized versus the robot's actual internal representation of the real world. By going through the grounding process, this study concluded that human-robot grounding can be strengthened by the robot providing information to its partner about how it has internalized the information it has received.
Elements of theory:
Anticipation of what a partner knows There are three main factors that allow speakers to anticipate what a partner knows.
Community co-membership: Members of a group with knowledge in a particular field could use technical jargon when communicating within the group, whereas communicating outside of the group would require them to use layman terms.
Linguistic co-presence: A party in a conversation can use a pronoun to refer to someone previously mentioned in the conversation.
Elements of theory:
Physical co-presence: If the other parties are also present physically, one could point to an object within their physical environment. Shared visual information also aids anticipation of what a partner knows. For example, when responding to an instruction, performing the correct action without any verbal communication provides an indication of understanding, while performing the wrong action, or even failing to act, can signal misunderstanding. Findings from the paper "Using Visual Information for Grounding and Awareness in Collaborative Tasks" support previous experiments and show evidence that collaborative pairs perform more quickly and accurately when they share a common view of a workspace. The results from the experiment showed that the pairs completed the task 30–40% faster when they were given shared visual information. The value of this information, however, depended on the features of the task. Its value increased when the task objects were linguistically complex and not part of the pairs' shared lexicon. However, even a small delay in the transmission of the visual information severely disrupted its value. Also, those accepting the instructions were seen to increase their spoken contribution when those giving the instructions did not have shared visual information. This increase in activity is due to the fact that it is easier for the former to produce the information than for the ones giving the instructions to continuously ask questions to gauge their partners' understanding. Such a phenomenon is predicted by grounding theory, which holds that since communication costs are distributed among the partners, the result should shift to the method that is most efficient for the pair.
Elements of theory:
Least collaborative effort The theory of least collaborative effort asserts that participants in a contribution try to minimize the total effort spent on that contribution – in both the presentation and acceptance phases. More precisely, every participant in a conversation tries to minimize the total effort spent in that interactional encounter. The ideal utterances are informative and brief. Participants in conversation refashion referring expressions and decrease conversation length. When interactants are trying to pick out difficult-to-describe shapes from a set of similar items, they produce and agree on an expression which is understood and accepted by both; this process is termed refashioning. The following is an example from Clark & Wilkes-Gibbs: A: Um, third one is the guy reading with, holding his book to the left B: Okay, kind of standing up? A: Yeah.
Elements of theory:
B: Okay. A offers a conceptualisation which is refashioned slightly by B before it is agreed on by both. In later repetitions of the task, the expression employed to re-use the agreed conceptualisation progressively became shorter. For example, "the next one looks like a person who's ice skating, except they're sticking out two arms in front" (trial 1) was gradually shortened to "The next one's the ice skater" (trial 4) and eventually became just "The ice skater" in trial 6. Clark & Wilkes-Gibbs argue that there are two indicators of least collaborative effort in the above example. First, the process of refashioning itself involves less work than A having to produce a 'perfect' referring expression the first time, because of the degree of effort which would be needed to achieve that. Second, the decrease in length of the referring expressions and the concomitant reduction in conversation length over the trials showed that the participants were exploiting their increased common ground to decrease the amount of talk needed, and thus their collaborative effort.
Elements of theory:
Time pressures: Parties will select more effortful means of communication when mutual understanding must occur within a fixed amount of time.
Errors: Parties will select more effortful means of communication when the chance for error is high or previous low effort communications have resulted in error.
Ignorance: Parties will engage in more effortful communication when a lack of shared knowledge is notable. Time pressures, errors, and ignorance are problems that are best remedied by mutual understanding, thus the theory of grounding in communication dispels the theory of least collaborative effort in instances where grounding is the solution to a communication problem.
Elements of theory:
Costs to grounding change The lack of one of these characteristics generally forces participants to use alternative grounding techniques, because the costs associated with grounding change. There is often a trade-off between the costs- one cost will increase as another decreases. There is also often a correlation between the costs. The following table highlights several of the costs that can change as the medium of communication changes.
Grounding in machine-mediated communication:
Choice of medium Clark and Brennan's theory acknowledges the impact of medium choice on successful grounding. According to the theory, computer mediated communication presents potential barriers to establishing mutual understanding. Grounding occurs by acknowledgement of understanding through verbal, nonverbal, formal, and informal acknowledgments, thus computer mediated communications reduce the number of channels through which parties can establish grounding.
Media constraints on grounding Clark and Brennan identify eight constraints mediated communication places on communicating parties.
Copresence: Otherwise known as colocation. Group members are in the same physical location. If group members are not able to share the same physical environment, they cannot use the ability to see and hear and interact with what their partner is interacting with, thus slowing down the grounding process.
Visibility: Group members can see each other. Though video-conferencing allows groups to see each other's faces, it does not allow groups to see what each other are doing like copresence does.
Grounding in machine-mediated communication:
Audibility: Groups can hear each other speaking. When groups are face-to-face they can take into account intonation and timing when coming to understandings or making decisions. Textual media like email and instant messages removes both of these aspects, and voice messages lack the timing aspect, thus making it difficult for the rest of the group to respond in a timely manner.
Grounding in machine-mediated communication:
Contemporality: Group members are receiving information as it is produced by other group members. If a message is only received by one partner after a delay, their reaction to the message is also delayed. This damages efficiency since the partner may either mistakenly move forward in the wrong way or not be able to move on at all until they receive the message.
Grounding in machine-mediated communication:
Simultaneity: Group members are receiving and producing information at the same time. In copresent groups, members can help the group come to grounding criterion by reacting when other members are speaking. For example, a member would make a statement and another would smile and nod while he spoke, thereby showing that an understanding has been made.
Sequentiality: Group members are receiving information in a consecutive sequence; one piece of a task at a time. In distributed groups messages are often few and far between. A member could receive a message via email and then might review several other messages before returning to the original task.
Reviewability: Group members can review information they previously received from other members. In face to face conversation group members might forget the details of what a teammate said, but when using technology like instant-messaging they can save and review what was said in a conversation at a later date.
Revisability: Group members can review their own messages before imparting information to their fellow group members. Using technology like email or instant messaging, group members can revise information to make it more clear before sending it to their fellow group members.
Related concepts:
Situation awareness Situation awareness theory holds that visual information helps pairs assess the current state of the task and plan future actions. For example, when a friend is solving a problem that you know the solution to, you can intervene and provide hints or instructions when you see that your friend is stuck and needs help. Similarly, grounding theory maintains that visual information can support conversations through evidence of common ground or mutual understanding. Using the same example, you could provide clearer instructions for the problem when you see that your friend is stuck. Therefore, an extension of both theories would mean that when groups have timely visual information, they would be able to monitor the situation and clarify instructions more efficiently.
Related concepts:
Common ground (communication technique) Common ground is a communication technique based on mutual knowledge as well as awareness of mutual knowledge. According to Barr, common ground and common knowledge are kinds of mutual knowledge. Common ground is negotiated to close the gap between differences in perspective and this in turn would enable different perspectives and knowledge to be shared. Psycholinguist Herbert H. Clark uses the example of a day at the beach with his son. They share their experiences at the beach and are aware of the mutual knowledge. If one were to propose the painting of a room a certain shade of pink, they could describe it by comparing it to a conch shell they saw at the beach. They can make the comparison because of their mutual knowledge of the pink on the shell as well as awareness of the mutual knowledge of the pink. This communication technique is often found in negotiation.
Historical examples:
Common ground in communication has been critical in mitigating misunderstandings and negotiations.
Historical examples:
For example, common ground can be seen during the first Moon landing between Apollo 11 and mission control since mission control had to provide assistance and instructions to the crew in Apollo 11, and the crew had to be able to provide their situation and context for mission control. That was particularly difficult given the strict conditions in which the radio system needed to function. The success of the mission was dependent on the ability to provide situation information and instructions clearly. The transcripts show how often both parties checked to ensure that the other party had clearly heard what they had to say. Both parties needed to provide verbal feedback after they had listened because of the constraints of their situation.
Consequences of a lack of common ground:
Actor-observer effect The difficulties of establishing common ground, especially in using telecommunications technology, can give rise to dispositional rather than situational attribution. This tendency is known as the "actor-observer effect". What this means is that people often attribute their own behavior to situational causes, while observers attribute the actor's behavior to the personality or disposition of the actor. For example, an actor's common reason to be late is due to the situational reason, traffic. Observers' lack of contextual knowledge about the traffic, i.e. common ground, leads to them attributing the lateness due to ignorance or laziness on the actor's part. This tendency towards dispositional attribution is especially magnified when the stakes are higher and the situation is more complex. When observers are relatively calm, the tendency towards dispositional attribution is less strong.
Consequences of a lack of common ground:
Disappointment Another consequence of a lack of mutual understanding is disappointment. When communicating partners fail to highlight the important points of their own message or to recognize the important points of the partner's message, neither party can satisfy the other's expectations. This lack of common ground damages interpersonal trust, especially when partners do not have the contextual information about why the other party behaved the way they did.
Consequences of a lack of common ground:
Multiple ignorances People base their decisions and contributions on their own point of view. When there is a lack of common ground between the points of view of individuals within a team, misunderstandings occur. Sometimes these misunderstandings remain undetected, which means that decisions will be made based on ignorant or misinformed points of view, which in turn lead to multiple ignorances. The team may not be able to find the right solution because it does not have a correct representation of the problem.
Criticisms:
Critiques of the approaches used to explore common ground suggest that the creation of a common set of mutual knowledge is an unobservable event which is hardly accessible to empirical research. It would require an omniscient point of view in order to look into the participants' heads. Modeling the common ground from one communication partner's perspective is one approach used to overcome this ambiguity. Even so, it is difficult to distinguish between the concepts of grounding and situation awareness. Distinguishing between situation awareness and grounding in communication can provide insights about how these concepts affect collaboration and further research in this area. Despite revealing evidence of how these theories exist independently, recognizing these concepts in conversation can prove to be difficult. Often both of these mechanisms are present in the same task. For example, a study in which helpers had a small field of view and were able to see pieces being manipulated demonstrates grounding in communication; however, situation awareness is also present because there is no shared view of the pieces. Another criticism of common ground is the inaccurate connotation of the term. The name appears to refer to a specific place where a record of things can be stored. However, it does not account for how those involved in conversation effortlessly understand the communication. There have been suggestions that the term common ground be revised to better reflect how people actually come to understand each other. Grounding in communication has also been described as a mechanistic style of dialogue which can be used to make many predictions about basic language processing. Pickering and Garrod conducted many studies that reveal that, when engaging in dialogue, production and comprehension become closely related. This process greatly simplifies language processing in communication. In Pickering and Garrod's paper Toward a Mechanistic Psychology of Dialogue, they discuss three points that exemplify the mechanistic quality of language processing: (1) by supporting a straightforward interactive inference mechanism; (2) by enabling interlocutors to develop and use routine expressions; and (3) by supporting a system for monitoring language processing. Another component that is essential to this criticism of grounding in communication is that successful dialogue is coupled with how well those engaged in the conversation adapt to different linguistic levels. This process allows for the development of communication routines that make the process of comprehending language more efficient. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**CER-203**
CER-203:
CER (Serbian: Цифарски Електронски Рачунар – Digital Electronic Computer) model 203 is an early digital computer developed by the Mihajlo Pupin Institute (Serbia) in 1971. It was designed to process data of medium-sized businesses: in banks, for managing and processing of accounts, bookkeeping, foreign-currency and interest calculations, amortization plans and statistics; in manufacturing, for production planning and management, market data processing and forecasting, inventory management, financial document management and process modelling; in utilities, to calculate water and electricity consumption, to produce various reports and lists, and for technical calculations and design; in the construction industry, for network planning method design, financial management and bookkeeping; and in trading companies, for payment processing, market analysis, inventory management, and customer and partner relationship management.
Specifications:
Central processing: type BMS-203; 32 instructions; performance: one 16-cycle instruction in 20 μs, one single-cycle instruction in 5 μs, addition and/or subtraction of two 15-digit numbers in 20 μs.
Primary memory: capacity 8 kilowords; speed (cycle time) 1 μs; complete, autonomous memory error checking; parity control.
Punched tape reader: dielectric-based reading; speed 500 to 1,000 characters per second; accepts 5-, 7- and 8-channel tapes.
Tape puncher: speed 75 characters per second.
Parallel Line Printer 667: "on the fly" printing; 128 characters per line; removable/replaceable printing cylinder; speed 500 lines per minute for a character set of 63 characters, 550 lines per minute for a character set of 50 characters; automatic paper feeder; two line-spacing settings; programmatic tape for discontinuous paper movement; maximum number of carbon copies: 6.
Independent Printer M 30: 132 characters per line; prints 25 alphanumeric characters per second or 33 numeric characters per second; tabulation speed 144 characters per second; blank printing speed 100 characters per second; maximum number of carbon copies: 6.
Magnetic cassettes 4096: capacity 600,000 characters; variable record length; transfer rate 857 characters per second; tape speed 10 inches per second.
Magnetic tape drives: data format 9-track ASCII with inter-record space of 0.6 inches (1.52 cm); data density 556/800 bits per inch; capacity per tape c. 10,000,000 characters; tape speed 24 inches/s, with 150 inches/s fast-forward and rewind; transfer rate 19.2 kHz; tape width 1⁄2 inch (1.27 cm); tape length 2,400 ft (731.52 m); working ambient temperature range 5 °C to 40 °C; relative humidity up to 80%; integrated circuit control logic; separate control panel for each drive; read/write capabilities: read and write forward, read forward, read reverse. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Taxis of Venezuela**
Taxis of Venezuela:
Taxicabs of Venezuela are a form of public transport in Venezuela. Unlike most taxicab services in the world, taxis in Venezuela have no taximeter, nor any other means of metering the fare. The fare is instead set per 'carrera' (trip), which varies between drivers. Because of this way of charging, it is customary to ask for, and often negotiate, the fare before getting inside the taxicab.
Taxis of Venezuela:
In Venezuela, there are three kinds of taxicabs: white taxis, which are the most common kind found around big cities and are usually compact and subcompact cars; black taxis, which mostly serve luxury hotels and some airports and are usually SUVs and full-size cars; and old multicolored taxis, which are most common in small towns and have cheaper fares.
History:
Before the mid-1990s, the taxis in Venezuela used to be old cars from the 1970s, 1960s, and sometimes even the 1950s, with no uniform color or pattern. The only way to differentiate a taxi from an ordinary car was by the yellow plates and small plastic signs on the top of the taxi. The exception was executive taxis (usually found at luxury hotels), which were plain black with tinted windows.
History:
In 1992, the Caracas government tried to persuade taxi owners to buy new, safer and more environmentally friendly cars by giving special credit incentives and offering a standardized taxi car. The first taxi fleet to be offered was the Fiat Premio in white, with a yellow, black and blue checker sticker covering the middle third of the doors. Those units were a complete failure because taxi drivers in the country were accustomed to driving heavy-duty American cars from the 1970s and early 1980s, like the Chevrolet Malibu, Dodge Dart and Ford Maverick, and they destroyed most of the units in just a couple of years.
History:
In 1997, the government successfully relaunched the program of renewing the taxi fleet in Caracas. This time the fleet was made up of the more powerful Fiat Tempra, specially assembled for taxi duties. These taxis were white with a vinyl yellow-and-black checker stripe on the doors. Congress also passed a law allowing taxis to be sold without sales tax on the condition that they keep the new color standard for taxis. With the passing of this law, other manufacturers created their own versions to compete in the emerging taxi market. A new fashion was also created, with most people feeling safer riding in the newer uniform cars than in the old multicolored ones. Little by little, the white taxis came to dominate the country, making the old ones rarer each day. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kirlian photography**
Kirlian photography:
Kirlian photography is a collection of photographic techniques used to capture the phenomenon of electrical coronal discharges. It is named after Semyon Kirlian, who, in 1939, accidentally discovered that if an object on a photographic plate is connected to a high-voltage source, an image is produced on the photographic plate.
The technique has been variously known as "electrography", "electrophotography", "corona discharge photography" (CDP), "bioelectrography", "gas discharge visualization (GDV)", "electrophotonic imaging (EPI)", and, in Russian literature, "Kirlianography".
Kirlian photography has been the subject of scientific research, parapsychology research, and art. Paranormal claims have been made about Kirlian photography, but these claims are rejected by the scientific community. To a large extent, it has been used in alternative medicine research.
History:
In 1889, Czech Bartoloměj Navrátil coined the word "electrography". Seven years later in 1896, a French experimenter, Hippolyte Baraduc, created electrographs of hands and leaves.
In 1898, Polish-Belarusian engineer Jakub Jodko-Narkiewicz demonstrated electrography at the fifth exhibition of the Russian Technical Society.
History:
In 1939, two Czechs, S. Pratt and J. Schlemmer published photographs showing a glow around leaves. The same year, Russian electrical engineer Semyon Kirlian and his wife Valentina developed Kirlian photography after observing a patient in Krasnodar Hospital who was receiving medical treatment from a high-frequency electrical generator. They had noticed that when the electrodes were brought near the patient's skin, there was a glow similar to that of a neon discharge tube. The Kirlians conducted experiments in which photographic film was placed on top of a conducting plate, and another conductor was attached to a hand, a leaf or other plant material. The conductors were energized by a high-frequency high-voltage power source, producing photographic images typically showing a silhouette of the object surrounded by an aura of light.
History:
In 1958, the Kirlians reported the results of their experiments for the first time. Their work was virtually unknown until 1970, when two Americans, Lynn Schroeder and Sheila Ostrander, published a book, Psychic Discoveries Behind the Iron Curtain. High-voltage electrophotography soon became known to the general public as Kirlian photography. Although little interest was generated among western scientists, Russians held a conference on the subject in 1972 at Kazakh State University. Kirlian photography was used in the former Eastern Bloc in the 1970s. The corona discharge glow at the surface of an object subjected to a high-voltage electrical field was referred to as a "Kirlian aura" in Russia and Eastern Europe. In 1975, Soviet scientist Victor Adamenko wrote a dissertation titled Research of the structure of High-frequency electric discharge (Kirlian effect) images. Scientific study of what the researchers called the Kirlian effect was conducted by Victor Inyushin at Kazakh State University. Early in the 1970s, Thelma Moss and Kendall Johnson at the Center for Health Sciences at UCLA conducted extensive research into Kirlian photography. Moss led an independent and unsupported parapsychology laboratory that was shut down by the university in 1979.
Overview:
Kirlian photography is a technique for creating contact print photographs using high voltage. The process entails placing sheet photographic film on top of a metal discharge plate. The object to be photographed is then placed directly on top of the film. High voltage current is momentarily applied to the object, thus creating an exposure. The corona discharge between the object and the plate due to high-voltage is captured by the film. The developed film results in a Kirlian photograph of the object.
Overview:
Color photographic film is calibrated to produce faithful colors when exposed to normal light. Corona discharges can interact with minute variations in the different layers of dye used in the film, resulting in a wide variety of colors depending on the local intensity of the discharge. Film and digital imaging techniques also record light produced by photons emitted during corona discharge (see Mechanism of corona discharge).
Overview:
Photographs of inanimate objects such as coins, keys and leaves can be made more effectively by grounding the object to the earth, a cold water pipe or to the opposite (polarity) side of the high-voltage source. Grounding the object creates a stronger corona discharge. Kirlian photography does not require the use of a camera or a lens because it is a contact print process. It is possible to use a transparent electrode in place of the high-voltage discharge plate, for capturing the resulting corona discharge with a standard photo or video camera. Visual artists such as Robert Buelteman, Ted Hiebert, and Dick Lane have used Kirlian photography to produce artistic images of a variety of subjects.
Research:
Kirlian photography has been a subject of scientific research, parapsychology research and pseudoscientific claims.
Scientific research Results of scientific experiments published in 1976 involving Kirlian photography of living tissue (human finger tips) showed that most of the variations in corona discharge streamer length, density, curvature, and color can be accounted for by the moisture content on the surface of and within the living tissue.
Research:
Konstantin Korotkov developed a technique similar to Kirlian photography called "gas discharge visualization" (GDV). Korotkov's GDV camera system consists of hardware and software to directly record, process and interpret GDV images with a computer. Korotkov promotes the device and research in a medical context. Izabela Ciesielska at the Institute of Architecture of Textiles in Poland used Korotkov's GDV camera to evaluate the effects of human contact with various textiles on biological factors such as heart rate and blood pressure, as well as corona discharge images. The experiments captured corona discharge images of subjects' fingertips while the subjects wore sleeves of various natural and synthetic materials on their forearms. The results failed to establish a relationship between human contact with the textiles and the corona discharge images and were considered inconclusive.
Research:
Parapsychology research In 1968, Thelma Moss, a psychology professor, headed the UCLA Neuropsychiatric Institute (NPI), which was later renamed the Semel Institute. The NPI had a laboratory dedicated to parapsychology research and staffed mostly with volunteers. The lab was unfunded, unsanctioned and eventually shut down by the university. Toward the end of her tenure at UCLA, Moss became interested in Kirlian photography, a technique that supposedly measured the "auras" of a living being. According to Kerry Gaynor, one of her former research assistants, "many felt Kirlian photography's effects were just a natural occurrence." Paranormal claims of Kirlian photography have not been observed or replicated in experiments by the scientific community. The physiologist Gordon Stein has written that Kirlian photography is a hoax that has "nothing to do with health, vitality, or mood of a subject photographed." Claims Kirlian believed that images created by Kirlian photography might depict a conjectural energy field, or aura, thought, by some, to surround living things. Kirlian and his wife were convinced that their images showed a life force or energy field that reflected the physical and emotional states of their living subjects. They thought that these images could be used to diagnose illnesses. In 1961, they published their first article on the subject in the Russian Journal of Scientific and Applied Photography. Kirlian's claims were embraced by energy treatment practitioners.
Research:
Torn leaf experiment A typical demonstration used as evidence for the existence of these energy fields involved taking Kirlian photographs of a picked leaf at set intervals. The gradual withering of the leaf was thought to correspond with a decline in the strength of the aura. In some experiments, if a section of a leaf was torn away after the first photograph, a faint image of the missing section sometimes remained when a second photograph was taken. However, if the imaging surface is cleaned of contaminants and residual moisture before the second image is taken, then no image of the missing section will appear. The living aura theory is at least partially repudiated by demonstrating that leaf moisture content has a pronounced effect on the electric discharge coronas; more moisture creates larger corona discharges. As the leaf dehydrates, the coronas will naturally decrease in variability and intensity. As a result, the changing water content of the leaf can affect the so-called Kirlian aura. Kirlian's experiments did not provide evidence for an energy field other than the electric fields produced by chemical processes and the streaming process of coronal discharges. The coronal discharges identified as Kirlian auras are the result of stochastic electric ionization processes and are greatly affected by many factors, including the voltage and frequency of the stimulus, the pressure with which a person or object touches the imaging surface, the local humidity around the object being imaged, how well grounded the person or object is, and other local factors affecting the conductivity of the person or object being imaged. Oils, sweat, bacteria, and other ionizing contaminants found on living tissues can also affect the resulting images.
Research:
Qi Scientists such as Beverly Rubik have explored the idea of a human biofield using Kirlian photography research, attempting to explain the Chinese discipline of Qigong. Qigong teaches that there is a vitalistic energy called qi (or chi) that permeates all living things.
Rubik's experiments relied on Konstantin Korotkov's GDV device to produce images, which were thought to visualize these qi biofields in chronically ill patients. Rubik acknowledges that the small sample size in her experiments "was too small to permit a meaningful statistical analysis". Claims that these energies can be captured by special photographic equipment are criticized by skeptics.
In popular culture:
Kirlian photography has appeared as a fictional element in numerous books, films, television series, and media productions, including the 1975 film The Kirlian Force, re-released under the more sensational title Psychic Killer. Kirlian photographs have been used as visual components in various media, such as the sleeve of George Harrison's 1973 album Living in the Material World, which features Kirlian photographs of his hand holding a Hindu medallion on the front sleeve and American coins on the back, shot at Thelma Moss's UCLA parapsychology laboratory. The artwork of David Bowie's 1997 album Earthling has reproductions of Kirlian photographs taken by Bowie. The photographs, which show a crucifix Bowie wore around his neck and the imprint of his "forefinger" tip, date to April 1975 when Bowie was living in Los Angeles and fascinated with the paranormal. The photographs were taken before consuming cocaine and 30 minutes afterwards. The after photograph apparently shows a substantial increase in the "aura" around the crucifix and forefinger.
In popular culture:
The Cluster novels by science fiction author Piers Anthony use the concept of the Kirlian aura as a way to transfer a person's personality into another body, even an alien body, across light years. The book The Anarchistic Colossus (1977) by A. E. van Vogt involves an anarchistic society controlled by ‘Kirlian computers’.
In popular culture:
The opening credits during the first seven seasons of the television series The X-Files shows a Kirlian image of a left human hand. The image appears as the 11th clip in the introductory video montage and is formed by a bluish coronal discharge as the primary outline, with only the proximal phalange of the index finger shown cryptically in red. A human silhouette, in white, seemingly falls towards the hand. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Lotka's law**
Lotka's law:
Lotka's law, named after Alfred J. Lotka, is one of a variety of special applications of Zipf's law. It describes the frequency of publication by authors in any given field. Let $X$ be the number of publications, $Y$ be the number of authors with $X$ publications, and $k$ be a constant depending on the specific field. Lotka's law states that $Y \propto X^{-k}$. In Lotka's original publication, he claimed $k = 2$. Subsequent research showed that $k$ varies depending on the discipline.
Lotka's law:
Equivalently, Lotka's law can be stated as $Y' \propto X^{-(k-1)}$, where $Y'$ is the number of authors with at least $X$ publications. Their equivalence can be proved by taking the derivative.
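A short sketch of why the two forms agree, under the simplifying assumptions (not stated in the article) that author counts can be treated as a smooth function of $X$ with proportionality constant $C$ and that $k > 1$:

```latex
% Continuous approximation: Y(x) = C x^{-k} authors publish exactly x papers.
% The cumulative count of authors with at least X publications is then
\[
  Y'(X) = \int_{X}^{\infty} C\,x^{-k}\,dx
        = \frac{C}{k-1}\,X^{-(k-1)}
        \propto X^{-(k-1)},
\]
% and differentiating this cumulative form with respect to X gives
\[
  \frac{d}{dX}\,Y'(X) = -\,C\,X^{-k} \propto X^{-k},
\]
% recovering Lotka's exactly-X form up to sign and constant.
```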
Example:
Assume that $k = 2$ in a discipline; then, as the number of articles published increases, authors producing that many publications become less frequent. There are 1/4 as many authors publishing two articles within a specified time period as there are single-publication authors, 1/9 as many publishing three articles, 1/16 as many publishing four articles, etc. If 100 authors wrote exactly one article each over a specific period in the discipline, and author counts for one to ten articles are rounded to whole authors, that would give a total of 294 articles by 155 writers, with an average of 1.9 articles for each writer; a small script reproducing this count follows.
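A minimal sketch of that arithmetic in Python (assuming, as the totals above imply, $k = 2$, 100 single-publication authors, publication counts from 1 to 10, and rounding to the nearest whole author):

```python
k = 2
single_authors = 100  # authors with exactly one publication

# Lotka's law with k = 2: authors with x publications ~ 100 / x^2
authors_per_count = {x: round(single_authors / x**k) for x in range(1, 11)}

total_authors = sum(authors_per_count.values())
total_articles = sum(x * n for x, n in authors_per_count.items())

print(total_authors, total_articles)             # 155 294
print(round(total_articles / total_authors, 1))  # 1.9 articles per writer
```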
Software:
Friedman, A. 2015. "The Power of Lotka’s Law Through the Eyes of R" The Romanian Statistical Review. Published by National Institute of Statistics. ISSN 1018-046X B Rousseau and R Rousseau (2000). "LOTKA: A program to fit a power law distribution to observed frequency data". Cybermetrics. 4. ISSN 1137-5019. - Software to fit a Lotka power law distribution to observed frequency data. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Computer programming**
Computer programming:
Computer programming is the process of performing particular computations (or more generally, accomplishing specific computing results), usually by designing and building executable computer programs. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms (usually in a particular programming language, commonly referred to as coding). The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit. To be executed, the source code must be translated into a form the machine can run. Compiling converts source code from a (typically high-level) programming language into machine code, or into an intermediate bytecode that is subsequently interpreted or further compiled. Transpiling, on the other hand, converts source code from one high-level programming language into another. The purpose of programming is to find a sequence of instructions that will automate the performance of a task (which can be as complex as an operating system) on a computer, often for solving a given problem. Proficient programming thus usually requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.
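As a small, hedged illustration of the source-to-bytecode step, the sketch below uses CPython's built-in compile() and the standard-library dis module (one concrete runtime, not mentioned in the article; the source fragment is purely illustrative):

```python
import dis

# A fragment of human-readable source code...
source = "total = price * quantity + tax"

# ...compiled into a CPython code object containing bytecode...
code = compile(source, "<example>", "exec")

# ...which the interpreter executes; dis prints the individual instructions.
dis.dis(code)
```

Running it lists the virtual-machine instructions (their exact names vary by Python version) that stand in for the raw machine code the CPU would otherwise need.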
Computer programming:
Tasks accompanying and related to programming include testing, debugging, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs. However, while these might be considered part of the programming process, often the term software development is more likely used for this larger overall process – whereas the terms programming, implementation, and coding tend to be focused on the actual writing of code. Relatedly, software engineering combines engineering techniques and principles with software development. Also, those involved with software development may at times engage in reverse engineering, which is the practice of seeking to understand an existing program so as to re-implement its function in some way.
History:
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
History:
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. However, Charles Babbage had already written his first program for the Analytical Engine in 1837.
History:
In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
History:
Machine language Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
History:
Compiler languages High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware. The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research.
History:
These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.
Source code entry: Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Modern programming:
Quality requirements: Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important: Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
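To make the reliability point concrete, here is a minimal C sketch (not taken from any source discussed in this article) of the kind of off-by-one logic error mentioned above, which in C also causes a buffer overflow, together with the corrected loop bound:

```c
#include <stdio.h>

#define N 5

int main(void) {
    int values[N];

    /* Off-by-one error: "i <= N" would touch values[N], one element past
       the end of the array, corrupting adjacent memory:
       for (int i = 0; i <= N; i++) values[i] = i;            // buggy    */

    /* Corrected bound: indices 0..N-1 stay inside the buffer. */
    for (int i = 0; i < N; i++) {
        values[i] = i;
    }

    for (int i = 0; i < N; i++) {
        printf("values[%d] = %d\n", i, values[i]);
    }
    return 0;
}
```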
Modern programming:
Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages.
Modern programming:
Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.
Modern programming:
Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code.
Modern programming:
Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
Modern programming:
Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper.
Modern programming:
Readability of source code: In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Modern programming:
Readability is important because programmers spend the majority of their time reading, trying to understand, reusing and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it. Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include different indent styles (whitespace), comments, decomposition, and naming conventions for objects (such as variables, classes, functions, and procedures). The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
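As a hypothetical before-and-after example (the function names, the constant, and the tax rate are invented for illustration), the following C sketch shows the kind of simple readability transformation described above: behavior is unchanged, but descriptive names, a named constant, and a comment replace opaque identifiers and a magic number.

```c
#include <stdio.h>

/* Before: terse names and a magic number obscure the intent. */
static double f(double a, int n) {
    return a * n * 0.0825;
}

/* After: descriptive names and a named constant make the purpose clear. */
#define SALES_TAX_RATE 0.0825   /* hypothetical rate, for illustration only */

static double sales_tax(double unit_price, int quantity) {
    /* Tax owed on an order of `quantity` items at `unit_price` each. */
    return unit_price * quantity * SALES_TAX_RATE;
}

int main(void) {
    /* Both versions compute the same value; only readability differs. */
    printf("%.2f %.2f\n", f(19.99, 3), sales_tax(19.99, 3));
    return 0;
}
```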
Modern programming:
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like Code refactoring can enhance readability.
Modern programming:
Algorithmic complexity: The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
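To illustrate what choosing between complexity classes can look like in practice, the following C sketch (illustrative only, not tied to any source in this article) contrasts a linear search, which takes O(n) comparisons in the worst case, with a binary search over a sorted array, which takes O(log n); a programmer would weigh the input size and whether the data is already sorted before choosing.

```c
#include <stdio.h>

/* Linear search: O(n) comparisons in the worst case; no ordering required. */
static int linear_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

/* Binary search: O(log n) comparisons, but the array must be sorted. */
static int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else              hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {2, 3, 5, 7, 11, 13, 17};
    int n = (int)(sizeof a / sizeof a[0]);
    printf("linear: %d, binary: %d\n",
           linear_search(a, n, 11), binary_search(a, n, 11));
    return 0;
}
```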
Modern programming:
Methodologies: The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There are many different approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis. Many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the software development process.
Modern programming:
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
Modern programming:
Measuring language usage: It is very difficult to determine which are the most popular modern programming languages. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Modern programming:
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers, often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added (for example, C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result loses efficiency and the ability for low-level manipulation).
Modern programming:
Debugging: Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.
Modern programming:
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler makes it crash when parsing some large source file, a simplified test case consisting of only a few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging the problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if the remaining actions are sufficient for the bug to appear. Scripting and breakpointing are also part of this process.
Modern programming:
Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.
Programming languages:
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Programming languages:
Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes: The details look different in different languages, but a few basic instructions appear in just about every language: Input: Gather data from the keyboard, a file, or some other device.
Output: Display data on the screen or send data to a file or other device.
Arithmetic: Perform basic arithmetical operations like addition and multiplication.
Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements.
Repetition: Perform some action repeatedly, usually with some variation. (A minimal sketch combining these basic instructions follows below.) Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
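As a minimal sketch (not from Downey's book; the program itself is invented for illustration), the following C program exercises each of the five basic instructions listed above: input, output, arithmetic, conditional execution, and repetition.

```c
#include <stdio.h>

int main(void) {
    int n;

    /* Input: gather data from the keyboard. */
    printf("Enter a non-negative integer: ");
    if (scanf("%d", &n) != 1 || n < 0) {
        /* Conditional execution: react to invalid input. */
        printf("Invalid input.\n");
        return 1;
    }

    /* Repetition with arithmetic: sum the integers 1..n. */
    long sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }

    /* Output: display the result. */
    printf("The sum of 1..%d is %ld\n", n, sum);
    return 0;
}
```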
Programmers:
Computer programmers are those who write computer software. Their jobs usually involve: Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Batten**
Batten:
A batten is most commonly a strip of solid material, historically wood but can also be of plastic, metal, or fiberglass. Battens are variously used in construction, sailing, and other fields.
In the lighting industry, battens refer to linear light fittings.
In the steel industry, battens used as furring may also be referred to as "top hats", in reference to the profile of the metal.
Roofing:
Roofing battens or battening, also called roofing lath, are used to provide the fixing point for roofing materials such as shingles or tiles. The spacing of the battens on the trusses or rafters depends on the type of roofing material; battens are applied horizontally, like purlins.
Battens are also used in metal roofing to secure the sheets, in what is called a batten-seam roof, and are covered with a batten roll joint. Some roofs may use a grid of battens in both directions, known as a counter-batten system, which improves ventilation.
Roofing battens are most commonly made of wood or metal, but can be made of other materials.
Wall battens:
Wall battens like roofing battens are used to fix siding materials such as tile or shingles. Rainscreen construction uses battens (furring) as part of a system which allows walls to dry out more quickly than normal.
Board-and-batten: Board-and-batten siding is an exterior treatment of vertical boards with battens covering the seams. Board-and-batten roofing is a type of board roof with battens covering the gaps between boards on a roof as the roofing material. Board-and-batten is also a synonym for single-wall construction, a method of building with vertical, structural boards, the seams sometimes covered with battens.
Spacers:
Battens may be used as spacers, sometimes called furring, to raise the surface of a material. In flooring the sometimes large battens support the finish flooring in a similar manner to a joist but with the batten resting on a solid sub-floor as a floating floor and sometimes cushioned.
Trim:
Batten trim or batten molding is a thin strip of trim typically with a rectangular cross-section similar to lath used in lattice, used to cover seams between panels of exterior siding or interior paneling.
Flooring:
In flooring a batten may be relatively large, up to 2.5 inches (6.4 cm) thick by 7 inches (18 cm) wide and more than 6 feet (1.8 m) long.
Batten doors:
In door construction battens may be used to strengthen panels made up of multiple boards, as in a batten door, or to cover joins.
Wall insulation:
Battens are used for solid wall insulation. Regularly spaced battens are fitted to the wall, the spaces between them filled with insulation, and plasterboard or drywall screwed to the battens. This method is no longer the most popular, as rigid insulation sheets give better insulation (with battens bridging the insulation) and take less time to fit.
Screed batten:
In concrete work a screed batten is fixed to the formwork to smoothly guide a screed smoothing tool.
Lighting:
In the lighting industry, battens refer to linear fittings, commonly LED strips or using fluorescent tubes. Batten luminaires are typically cheap and meant to be fixed directly to structural battens in loft spaces or to ceilings and soffits in back-of-house areas where aesthetic value is not required. Fluorescent fittings may include a low-specification diffuser cover, or simply have the fluorescent tube exposed.
Sailing:
In sailing, battens are long, narrow and flexible inserts used in sails, to improve their qualities as airfoils. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Core Foundation**
Core Foundation:
Core Foundation (also called CF) is a C application programming interface (API) written by Apple for its operating systems, and is a mix of low-level routines and wrapper functions. Most Core Foundation routines follow a naming convention tied to opaque object types: for example, functions whose names begin with CFDictionary operate on CFDictionaryRef objects, and these objects are often reference counted (manually) through CFRetain and CFRelease. Internally, Core Foundation forms the base of the types in the Objective-C standard library and the Carbon API. The most prevalent use of Core Foundation is for passing its own primitive types for data, including raw bytes, Unicode strings, numbers, calendar dates, and UUIDs, as well as collections such as arrays, sets, and dictionaries, to numerous macOS C routines, primarily those that are GUI-related. At the operating system level Core Foundation also provides standardized application preferences management through CFPropertyList, bundle handling, run loops, interprocess communication through CFMachPort and CFNotificationCenter, and a basic graphical user interface message dialog through CFUserNotification.
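As a brief, hedged illustration (assuming an Apple toolchain where the program is linked against the CoreFoundation framework, e.g. with -framework CoreFoundation; the key and value chosen are arbitrary), the following C sketch creates a CFString and a CFDictionary, manually retains and releases them, and prints the dictionary. The naming convention described above is visible in the CFString… and CFDictionary… function names operating on the CFStringRef and CFDictionaryRef opaque types.

```c
#include <CoreFoundation/CoreFoundation.h>

int main(void) {
    /* Objects returned by Create functions start with a retain count of 1
       and must eventually be balanced with CFRelease. */
    CFStringRef key   = CFSTR("language");   /* constant string, no release needed */
    CFStringRef value = CFStringCreateWithCString(kCFAllocatorDefault,
                                                  "C", kCFStringEncodingUTF8);

    const void *keys[]   = { key };
    const void *values[] = { value };
    CFDictionaryRef dict = CFDictionaryCreate(kCFAllocatorDefault,
                                              keys, values, 1,
                                              &kCFTypeDictionaryKeyCallBacks,
                                              &kCFTypeDictionaryValueCallBacks);

    CFRetain(dict);    /* manual reference counting: count is now 2 */
    CFShow(dict);      /* print a description of the dictionary */
    CFRelease(dict);   /* balance the CFRetain */
    CFRelease(dict);   /* balance CFDictionaryCreate */
    CFRelease(value);  /* balance CFStringCreateWithCString */
    return 0;
}
```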
Core Foundation:
Other parts of the API include utility routines and wrappers around existing APIs for ease of use. Utility routines perform such actions as file system and network I/O through CFReadStream, CFWriteStream, and CFURL, and endianness translation (Byte Order Utilities). An example of a wrapper is the CFSocket API, Core Foundation's wrapper around Unix sockets.
Core Foundation:
Some types in Core Foundation are "toll-free bridged", or interchangeable with a simple cast, with those of their Foundation Kit counterparts. For example, one could create a CFDictionaryRef Core Foundation type, and then later simply use a standard C cast to convert it to its Objective-C counterpart, NSDictionary *, and then use the desired Objective-C methods on that object as one normally would.
Core Foundation:
Core Foundation has a plug-in model (CFPlugin) that is based on the Microsoft Component Object Model.
Open source availability:
Apple used to release most of CF as an open-source project called CFLite that can be used to write cross-platform applications for macOS, Linux, and Windows. A third-party open-source implementation called OpenCFLite extends the Apple CFLite for building on 32-bit Windows and Linux environments. It is maintained by one of the WebKit developers, but development had stalled by 2015. The karaoke platform KJams maintains a fork since 2017. This version, by its programmer David M. Cotter, supports 64-bit systems and has a CFNetwork implementation with LibreSSL-based TLS. A fork of OpenCFLite was created by Grant Erickson (an original collaborator with Brent Fulgham on the SourceForge version) in 2021, with a companion port of the CFHost portion of CFNetwork, as OpenCFNetwork. The Swift Corelib Foundation, a fallback version of the Foundation Kit for the Swift programming language for non-Apple platforms, contains a near-full version of Core Foundation released under Apache License 2.0. GNUstep includes a version of the Core Foundation called "libs-corebase". | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Splanchnology**
Splanchnology:
Splanchnology is the study of the visceral organs, i.e. digestive, urinary, reproductive and respiratory systems. The term derives from the Neo-Latin splanchno-, from the Greek σπλάγχνα, meaning "viscera". More broadly, splanchnology includes all the components of the Neuro-Endo-Immune (NEI) Supersystem. An organ (or viscus) is a collection of tissues joined in a structural unit to serve a common function. In anatomy, a viscus is an internal organ, and viscera is the plural form. Organs consist of different tissues, one or more of which prevail and determine its specific structure and function. Functionally related organs often cooperate to form whole organ systems.
Splanchnology:
Viscera are the soft organs of the body. There are organs and systems of organs that differ in structure and development, but they are united in the performance of a common function. Such a functional collection of different organs forms an organ system. These organs are always made up of special cells that support their specific functions. The normal position and function of each visceral organ must be known before the abnormal can be ascertained.
Splanchnology:
Healthy organs all work together cohesively, and gaining a better understanding of how they do so helps to maintain a healthy lifestyle. Some functions cannot be accomplished by only one organ. That is why organs form complex systems. A system of organs is a collection of homogeneous organs, which have a common plan of structure, function, and development, and which are connected to each other anatomically and communicate through the NEI supersystem. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Nepcon**
Nepcon:
NEPCON is a trade event for the electronics manufacturing industry. It is held annually in several parts of the world. In the United States, for instance, the event called NEPCON West had a 37-year run and ended in 2002. This trade show has been described as the grandfather of all electronics manufacturing trade shows. The same is true of NEPCON UK, which is considered Britain's largest annual electronics exhibition. Nepcon China is an annual surface-mount technology (SMT) trade event in China that features a comprehensive range of SMT products and technology. The 18th edition of the event was held from 8 to 11 April 2008. The 2019 exhibit was scheduled at the Shanghai World Expo Center from April 24 to 26. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy**
Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy:
The Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy, referred to internationally as Pittcon, is a non-profit educational organization based in Pennsylvania that organizes an annual Conference and Exposition on laboratory science. It is sponsored by the Spectroscopy Society of Pittsburgh and the Society for Analytical Chemists of Pittsburgh. The Conference has traditionally been the most attended annual conference on analytical chemistry and applied spectroscopy in the world, with attendance of approximately 20,000 people in the period of 2005-2011. Pittcon presents several awards each year to individuals who have made outstanding contributions to the various fields in analytical chemistry. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Outflow channels**
Outflow channels:
Outflow channels are extremely long, wide swathes of scoured ground on Mars. They extend many hundreds of kilometers in length and are typically greater than one kilometer in width. They are thought to have been carved by huge outburst floods.
Outflow channels:
Crater counts indicate that most of the channels were cut since the early Hesperian, though the age of the features is variable between different regions of Mars. Some outflow channels in the Amazonis and Elysium Planitiae regions have yielded ages of only tens of millions of years, extremely young by the standards of Martian topographic features. The largest, Kasei Vallis, is around 3,500 km (2,200 mi) long, greater than 400 km (250 mi) wide and exceeds 2.5 km (1.6 mi) in depth cut into the surrounding plains.
Outflow channels:
The outflow channels contrast with the Martian channel features known as "valley networks", which much more closely resemble the dendritic planform more typical of terrestrial river drainage basins.
Outflow channels tend to be named after the names for Mars in various ancient world languages, or more rarely for major terrestrial rivers. The term outflow channels was introduced in planetology in 1975.
Formation:
On the basis of their geomorphology, locations and sources, the channels are today generally thought to have been carved by outburst floods (huge, rare, episodic floods of liquid water), although some authors have made the case for formation by the action of glaciers, lava, or debris flows. Calculations indicate that the volumes of water required to cut such channels at least equal and most likely exceed by several orders of magnitude the present discharges of the largest terrestrial rivers, and are probably comparable to the largest floods known to have ever occurred on Earth (e.g., those that cut the Channeled Scablands in North America or those released during the re-flooding of the Mediterranean basin at the end of the Messinian Salinity Crisis). Such exceptional flow rates and the implied associated volumes of water released could not be sourced by precipitation but rather demand the release of water from some long-term store, probably a subsurface aquifer sealed by ice and subsequently breached by meteorite impact or igneous activity.
List of outflow channels by region:
This is a partial list of named channel structures on Mars claimed as outflow channels in the literature, largely following The Surface of Mars by Carr. The channels tend to cluster in certain regions on the Martian surface, often associated with volcanic provinces, and the list reflects this. Originating structures at the head of the channels, if clear and named, are noted in parentheses and in italics after each entry.
List of outflow channels by region:
Circum-Chryse region: Chryse Planitia is a roughly circular volcanic plain east of the Tharsis bulge and its associated volcanic systems. This region contains the most prominent and numerous outflow channels on Mars. The channels flow east or north into the plain.
List of outflow channels by region:
Ares Vallis (Aram Chaos; Iani Chaos)
Kasei Vallis (Echus Chasma)
Maja Valles (Juventae Chasma)
Mawrth Vallis (no obvious source)
Ravi Vallis (Aromatum Chaos)
Shalbatana Vallis (chaos in Orson Welles crater; Ganges Chasma?)
Simud Valles (Hydraotes Chaos; Aureum Chaos?; Arsinoes Chaos?)
Tiu Valles (Aram Chaos?; Aureum Chaos?)
Tharsis region: In this region it is particularly difficult to distinguish outflow channels from lava channels, but the following features have been suggested as at least overprinted by outflow channel floods:
Parts of the Olympica Fossae
Valleys adjacent to the southeast margin of Olympus Mons (nameless graben)
Amazonis and Elysium Planitiae: Several channels flow either onto the plains of Amazonis and Elysium from the southern highlands, or originate at graben within the plains. This region contains some of the youngest channels. Some of these channels have rare tributaries, and they do not start at a chaos region. It has been suggested the formation mechanisms for these channels may be more variable than for those around Chryse Planitia, perhaps in some cases involving lake breaches at the surface.
List of outflow channels by region:
Al-Qahira Vallis
Athabasca Vallis (Cerberus Fossae)
Grjota Vallis (nameless graben)
Ma'adim Vallis (shallow depression in Highlands)
Mangala Valles (Mangala Fossa)
Marte Vallis (Cerberus Planitia)
Utopia Planitia: Several outflow channels rise in the region west of the Elysium volcanic province and flow northwestward to Utopia Planitia. As is common in the Amazonis and Elysium Planitiae regions, these channels tend to originate in graben. Some of these channels may be influenced by lahars, as indicated by their surface textures and by ridged, lobate deposits at their margins and termini. The valleys of Hephaestus Fossae and Hebrus Valles are of extremely unusual form, and although sometimes claimed as outflow channels, are of enigmatic origin.
List of outflow channels by region:
Granicus Vallis (graben radial to Elysium Mons)
Hrad Valles (graben radial to Elysium Mons)
Tinjar Vallis (graben radial to Elysium Mons)
Hebrus Valles (irregular depression; ends in discontinuous linear hollows)
Hephaestus Fossae (irregular depression; flows through angular segments; ends in discontinuous linear hollows)
Hellas region: Three valleys flow from east of its rim down onto the floor of the Hellas basin.
List of outflow channels by region:
Dao Vallis (box canyon near Hadriaca Patera)
Harmakhis Vallis (close to end of Reull Vallis)
Niger Vallis (indistinct depressions near Hadriaca Patera)
Argyre region: It has been argued that Uzboi, Ladon, Margaritifer and Ares Valles, although now separated by large craters, once comprised a single outflow channel flowing north into Chryse Planitia. The source of this outflow has been suggested as overflow from the Argyre crater, formerly filled to the brim as a lake by channels (Surius, Dzigai, and Palacopus Valles) draining down from the south pole. If real, the full length of this drainage system would be over 8000 km, the longest known drainage path in the solar system. Under this suggestion, the extant form of the outflow channel Ares Vallis would thus be a remolding of a pre-existing structure.
List of outflow channels by region:
Polar regions: The large troughs present in each polar cap, Chasma Boreale and Chasma Australe, have both been argued to have been formed by meltwater release from beneath polar ice, as in a terrestrial jökulhlaup. However, others have argued for an eolian origin, with the troughs carved by katabatic winds blowing down from the poles. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Sediment quality triad**
Sediment quality triad:
In aquatic toxicology, the sediment quality triad (SQT) approach has been used as an assessment tool to evaluate the extent of sediment degradation resulting from contaminants released due to human activity present in aquatic environments (Chapman, 1990). This evaluation focuses on three main components: 1.) sediment chemistry, 2.) sediment toxicity tests using aquatic organisms, and 3.) the field effects on the benthic organisms (Chapman, 1990). Often used in risk assessment, the combination of three lines of evidence can lead to a comprehensive understanding of the possible effects to the aquatic community (Chapman, 1997). Although the SQT approach does not provide a cause-and-effect relationship linking concentrations of individual chemicals to adverse biological effects, it does provide an assessment of sediment quality commonly used to explain sediment characteristics quantitatively. The information provided by each portion of the SQT is unique and complementary, and the combination of these portions is necessary because no single characteristic provides comprehensive information regarding a specific site (Chapman, 1997)
Components:
Sediment chemistry: Sediment chemistry provides information on contamination; however, it does not provide information on biological effects (Chapman, 1990). Sediment chemistry is used as a screening tool to determine the contaminants that are most likely to be destructive to organisms present in the benthic community at a specific site. During analysis, sediment chemistry data do not depend strictly on comparisons to sediment quality guidelines when utilizing the triad approach. Rather, sediment chemistry data, once collected for the specific site, are compared to the most relevant guideline values, based on site characteristics, to assess which chemicals are of the greatest concern. This technique is used because no one set of data is adequate for all situations. This allows identification of the chemicals of concern, namely those that most frequently exceed effects-based guidelines. Once the chemical composition of the sediment is determined and the most concerning contaminants have been identified, toxicity tests are conducted to link environmental concentrations to potential adverse effects.
Components:
Sediment toxicity: Sediment toxicity is evaluated based on bioassay analysis. Standard bioassay toxicity tests are utilized and are not restricted to particular organisms (Chapman, 1997). Differences in mechanisms of exposure and organism physiology must be taken into account when selecting test organisms, and the use of each organism must be adequately justified. These bioassay tests evaluate effects based on different toxicological endpoints. The toxicity tests are conducted with respect to the chemicals of concern, at environmentally relevant concentrations, identified by the sediment chemistry portion of the triad approach. Chapman (1990) lists typically used endpoints, which include lethal endpoints such as mortality, and sublethal endpoints such as growth, behavior, reproduction, cytotoxicity and optionally bioaccumulation. Often pilot studies are utilized to assist in the selection of the appropriate test organism and endpoints. Multiple endpoints are recommended and each of the selected endpoints must adequately complement the others (Chapman, 1997). Effects are evaluated using statistical methods that allow for the distinction between responses that are significantly different from negative controls. If sufficient data are generated, minimum significant differences (MSDs) are calculated using power analyses and applied to toxicity tests to distinguish between statistical difference and ecological relevance.
Components:
The function of the toxicity portion of the triad approach is to allow you to estimate the effects in the field. While laboratory based experiments simplify a complex and dynamic environment, toxicity results allow the potential for field extrapolation. This creates a link of exposure and effect and allows the determination of an exposure-response relationship. When combined with the other two components of the Sediment Quality Triad it allows for a holistic understanding between cause and effect.
Components:
Field effects on benthic organisms: The analysis of field effects on benthic organisms functions to assess the potential for community-based effects resulting from the identified contaminants. This is done because benthic organisms are sessile and location specific, allowing them to be used as accurate markers of contaminant effect (Chapman, 1990). This is done through conducting field-based tests, which analyze changes in benthic community structures, focusing on changes in number of species, abundance, and percentage of major taxonomic groups (Chapman, 1997). Changes in benthic communities are typically quantified using a principal component analysis and classification (Chapman, 1997). There is no one specifically defined method for conducting these field assessments; however, the different multivariate analyses typically produce results identifying relationships between variables when a robust correlation exists.
Components:
Knowledge of the site-specific ecosystem and the ecological roles of dominant species within that ecosystem are critical to producing biological evidence of alteration in benthic community resultant of contaminant exposure. When possible, it is recommended to observe changes in community structure that directly relate to the test species used during the sediment toxicity portion of the triad approach in order to produce the most reliable evidence.
Components:
Bioaccumulation: Bioaccumulation should be considered during the utilization of the triad approach depending on the study goals. In preparation for measuring bioaccumulation, it must be specified whether the test will serve to assess secondary poisoning or biomagnification (Chapman, 1997). Bioaccumulation analysis should be conducted appropriately based on the contaminants of concern (for example, metals do not biomagnify). This can be done with field-collected, caged, or laboratory-exposed organisms (Chapman, 1997). While the bioaccumulation portion is recommended, it is not required. However, it serves an important role with the purpose of quantifying effects due to trophic transfer of contaminants through consumption of contaminated prey.
Components:
Pollution-induced degradation: Site-specific pollution-induced degradation is measured through the combination of the three portions of the sediment quality triad. The sediment chemistry, sediment toxicity, and the field effects on benthic organisms are compared quantitatively. Data are most useful when they have been normalized to reference-site values by converting them to ratio-to-reference values (Chapman et al. 1986; Chapman 1989). The reference site is chosen to be the site with the least contamination with respect to the other sites sampled. Once normalized, data between portions of the triad are able to be compared even when large differences in measurements or units exist (Chapman, 1990). From the combination of the results from each portion of the triad, a multivariate figure is developed and used to determine the level of degradation.
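As an illustrative sketch only (the site values, the reference value, and the variable names are invented, and the calculation is a simple reading of the ratio-to-reference idea described above rather than a reproduction of Chapman's procedure), the following C snippet normalizes measurements from several test sites against a reference-site value so that quantities with different units can be compared on a common scale.

```c
#include <stdio.h>

#define NUM_SITES 3

int main(void) {
    /* Hypothetical measurements of one triad component (for example, a
       contaminant concentration) at three test sites. */
    double site_values[NUM_SITES] = { 12.4, 30.1, 7.8 };

    /* Hypothetical value of the same quantity at the reference site. */
    double reference_value = 6.2;

    /* Ratio-to-reference: divide each site value by the reference value, so
       1.0 means "same as the reference" and larger values indicate elevation. */
    for (int i = 0; i < NUM_SITES; i++) {
        double rtr = site_values[i] / reference_value;
        printf("Site %d: ratio-to-reference = %.2f\n", i + 1, rtr);
    }
    return 0;
}
```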
Methods and interpretation:
No single method can assess impact of contamination-induced degradation of sediment across aquatic communities. Methods of each component of the triad should be selected for efficacy and relevance in lab and field tests. Application of the SQT is typically location-specific and can be used to compare differences in sediment quality temporally or across regions (Chapman, 1997).
Multiple lines of evidence: The SQT incorporates three lines of evidence (LOE) to provide direct assessment of sediment quality. The chemistry, toxicity, and benthic components of the triad each provide a LOE, which is then integrated into a weight of evidence.
Methods and interpretation:
Criteria: In order to qualify for SQT assessment, chemistry, toxicity, and in situ measurements must be collected synoptically using standardized methods of sediment quality. A control sample is necessary to evaluate the impact of contaminated sites. An appropriate reference is a whole sediment sample (particles and associated pore water) collected near the area of concern that is representative of background conditions in the absence of contaminants. Evidence of contaminant exposure and biological effect is required in order to assign a site as chemically impacted.
Methods and interpretation:
Framework: The chemistry component incorporates both bioavailability and potential effects on the benthic community. The potential of sediment toxicity for a given site is based on a linear regression model (LRM). A chemical score index (CSI) of the contaminant describes the magnitude of exposure relative to benthic community disturbance. An optimal set of index-specific thresholds is selected for the chemistry component by statistically comparing several candidates to evaluate which set exhibits the greatest overall agreement (Bay and Weisberg, 2012). The magnitude of sediment toxicity is determined by multiple toxicity tests conducted in the lab to complement the chemistry component. The toxicity LOE is determined by the mean of the toxicity category scores from all relevant tests. Development of the LOE for the benthic component is based on community metrics and abundance. Several indices, such as the benthic response index (BRI), benthic biotic integrity (IBI), and relative biotic index (RBI), are utilized to assess the biological response of the benthic community. The median score of all individual indices establishes the benthic LOE.
Methods and interpretation:
Each component of the triad is assigned a response category: minimal, low, moderate, or high disturbance relative to background conditions. Individual LOEs are ranked into categories by comparing test results of each component to established thresholds (Bay and Weisberg, 2012). Integration of benthos and toxicity LOE classify the severity and effects of contamination. LOE of chemistry and toxicity are combined to assign the potential of chemically-mediated effects. A site is assigned an impact category by integrating the severity of effect and the potential of chemically mediated effects. The conditions of individual sites of concern are assigned an impact category between 1 and 5 (with 1 being unimpacted and 5 being clearly impacted by contamination). The SQT triad can also classify impact as inconclusive in cases when LOE between components are in disagreement or additional information is required (Bay and Weisberg, 2012).
Methods and interpretation:
Triaxial graphs: SQT measurements are scaled proportionately by relative impact and visually represented on triaxial graphs. Evaluation of sediment integrity and interrelationships between components can be determined by the size and morphology of the triangle. The magnitude of the triangle is indicative of the relative impact of contamination. Equilateral triangles imply agreement among components. (USEPA, 1994)
Evaluation:
Advantages of the triad approach: The SQT approach has been praised for a variety of reasons as a technique for characterizing sediment conditions. Relative to the depth of information it provides, and its inclusive nature, it is very cost effective. It can be applied to all sediment classifications, and even adapted to soil and water column assessments (Chapman and McDonald 2005). A decision matrix can be employed such that all three measures are analyzed simultaneously and a deduction of possible ecological impacts is made (USEPA 1994). Other advantages of the SQT include information on the potential bioaccumulation and biomagnification effects of contaminants, and its flexibility in application, resulting from its design as a framework rather than a formula or standard method. By using multiple lines of evidence, there are a host of ways to manipulate and interpret SQT data (Bay and Weisberg 2012). It has been accepted on an international scale as the most comprehensive approach to assessing sediment (Chapman and McDonald 2005). The SQT approach to sediment testing has been used in North America, Europe, Australia, South America, and the Antarctic.
Evaluation:
Application to sediment management standards: Stemming from the National Pollutant Discharge Elimination System (NPDES) EPA permitting guidelines, point and nonpoint discharges may adversely affect sediment quality. As per state regulatory criteria, information on point and nonpoint source contamination, and its effects on sediment quality, may be required for assessment of compliance. For example, Washington State Sediment Management Standards, Part IV, mandates sediment control standards which allow for establishment of discharge sediment monitoring requirements, and criteria for creation and maintenance of sediment impact zones (WADOE 2013). In this instance, the SQT could be particularly useful, encompassing multiple relevant analyses simultaneously.
Evaluation:
Limitations and criticisms: Although there are numerous benefits in using the SQT approach, drawbacks in its use have been identified. The major limitations include: lack of statistical criteria development within the framework, large database requirements, difficulties in application to chemical mixtures, and laboratory-intensive data interpretation (Chapman 1989). The SQT does not evidently consider the bioavailability of complexed or sediment-associated contaminants (FDEP 1994). Lastly, it is difficult to translate laboratory toxicity results to biological effects seen in the field (Kamlet 1989). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mother Earth Mother Board**
Mother Earth Mother Board:
"Mother Earth Mother Board" is an essay by Neal Stephenson that appeared in Wired Magazine in December, 1996, on the subject of the history of undersea communication cables and a modern-day effort to lay the Fibre-optic Link Around the Globe. It was later reprinted in Some Remarks. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Alvis–Curtis duality**
Alvis–Curtis duality:
In mathematics, the Alvis–Curtis duality is a duality operation on the characters of a reductive group over a finite field, introduced by Charles W. Curtis (1980) and studied by his student Dean Alvis (1979). Kawanaka (1981, 1982) introduced a similar duality operation for Lie algebras.
Alvis–Curtis duality has order 2 and is an isometry on generalized characters.
Carter (1985, 8.2) discusses Alvis–Curtis duality in detail.
Definition:
The dual ζ* of a character ζ of a finite group G with a split BN-pair is defined to be ζ* = Σ_{J ⊆ R} (−1)^{|J|} (ζ_{P_J})^G. Here the sum is over all subsets J of the set R of simple roots of the Coxeter system of G. The character ζ_{P_J} is the truncation of ζ to the parabolic subgroup P_J of the subset J, given by restricting ζ to P_J and then taking the space of invariants of the unipotent radical of P_J, and (ζ_{P_J})^G is the representation of G induced from ζ_{P_J}. (The operation of truncation is the adjoint functor of parabolic induction.)
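For readability, the same definition written as a displayed LaTeX formula (this restates the definition above and adds nothing new):

```latex
\zeta^{*} \;=\; \sum_{J \subseteq R} (-1)^{|J|} \,\bigl(\zeta_{P_J}\bigr)^{G}
```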
Examples:
The dual of the trivial character 1 is the Steinberg character.
Deligne & Lusztig (1983) showed that the dual of a Deligne–Lusztig character R_T^θ is ε_G ε_T R_T^θ.
The dual of a cuspidal character χ is (−1)^{|Δ|} χ, where Δ is the set of simple roots.
The dual of the Gelfand–Graev character is the character taking the value |Z^F| q^l on the regular unipotent elements and vanishing elsewhere. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**BlueTEC**
BlueTEC:
BlueTEC is Mercedes-Benz Group's marketing name for engines equipped with advanced NOx reducing technology for vehicle emissions control in diesel-powered vehicles. The technology in BlueTec vehicles includes a selective catalytic reduction (SCR) system that uses diesel exhaust fluid, and a system of NOx adsorbers the automaker calls DeNOx, which uses an oxidizing catalytic converter and diesel particulate filter combined with other NOx reducing systems.
BlueTEC:
The BlueTEC was on the Ward's 10 Best Engines list for 2007 and 2008. In February 2016, Mercedes-Benz, Daimler AG, Bosch LLC and Bosch GmbH were sued by private plaintiffs alleging BlueTec violates standards in a manner similar to the Volkswagen emissions scandal. On December 6, 2016, U.S. District Judge Jose L. Linares dismissed the lawsuit without prejudice, finding the plaintiffs had alleged no standing.
BlueTEC:
The case was reinstated after Plaintiffs amended the complaint, and the litigation is ongoing. On July 12, 2021, the court granted final approval to the proposed class action settlement, which includes cash payments to previous and current owners, free retrofits to the cars' emissions systems, and extended emissions systems warranties for the affected models. A similar settlement was reached in Canada on February 2, 2022.
Uses:
Daimler introduced BlueTEC in the Mercedes E-Class (using the DeNOx system) and GL-Class (using SCR) at the 2006 North American International Auto Show. At that time, these BlueTEC vehicles were 45- and 50-state legal, respectively, in the United States (a 45-state vehicle does not meet the more stringent California emission standards that have also been adopted by four other states).
Uses:
Daimler AG has entered into an agreement with Volkswagen and Audi to share BlueTEC technology with them in order to increase the diesel passenger-vehicle market in the United States. VW introduced the Jetta Clean TDI, the Tiguan concept, and the Touareg BlueTDI as part of the BlueTec licensing program. The Jetta and the Tiguan use NOx adsorbers, while the Touareg uses a Selective Catalytic Reduction catalytic converter. In August 2007 VW Group announced that cooperation on BlueTEC with Daimler AG would end. The reason for this change was the recognition of the VW TDI branding: VW did not want to use a competitor's branding for a product it would introduce into the market. VW developed its own system, but it failed, and VW re-programmed the engine control to show false values during pollution tests. By 2010 a BlueTEC version of the Mercedes Sprinter was released. The BlueTEC systems allowed the elimination of much of the EGR in that vehicle's engine, which as a result gives 188 horsepower (140 kilowatts) compared to the non-BlueTec engine's 154 horsepower (115 kilowatts).
Rationale:
The BlueTEC system was created because diesel engines, while more fuel efficient than gasoline engines, operate at lean air-fuel ratios, preventing them from implementing the highly-efficient three-way catalysts employed for NOx conversion in gasoline engines, which operate at stoichiometric air-fuel ratios. Limiting NOx by use of engine controls alone is possible, but requires a significant penalty to fuel economy. Tier 2 regulations in the US are 0.07 grams per mile of NOx, which is ⅛ of the 0.40 limit in the European Union.
Process:
The emissions system works in a series of steps: A diesel oxidation catalyst reduces the amounts of carbon monoxide (CO) and hydrocarbons (HC) released from the exhaust.
A DeNOx catalytic converter begins a preliminary removal of oxides of nitrogen.
A particulate filter traps and stores soot particles, burning them off when the filter gets full.
Process:
If the above are not sufficient to meet the prevailing emissions regulations, a Selective Catalytic Reduction (SCR) catalytic converter will convert the remaining nitrogen oxides to nitrogen and water; so-called diesel exhaust fluid (solution of urea and water) is injected into the exhaust gas stream to enable the conversion. In order to prevent vehicles from breaking emissions regulations, the engine may go into a limp-home-mode if the DEF tank is depleted; drivers are instructed to keep the tank refilled as necessary. Some commercial vehicles are equipped with a request or inhibit switch which allows the DEF injection to be "postponed" as it can reduce power output and increase temperatures temporarily; if the vehicle is climbing a grade, for example, it may be necessary to delay the cycle.
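For context, a commonly cited reaction set for urea-based SCR (standard textbook chemistry rather than material drawn from this article) is shown below in LaTeX notation: the injected urea solution first decomposes to ammonia, which then reduces NOx over the SCR catalyst.

```latex
\mathrm{(NH_2)_2CO + H_2O \;\longrightarrow\; 2\,NH_3 + CO_2}
\qquad
\mathrm{4\,NO + 4\,NH_3 + O_2 \;\longrightarrow\; 4\,N_2 + 6\,H_2O}
```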
Emissions defeat device allegations:
The Netherlands' official automobile inspector TNO, on behalf of the Dutch Minister of the Environment, conducted an on-road test of a C-Class Mercedes C220 CDi BlueTec diesel and determined it emitted more than 40 times the amount of cancer-causing NOx measured in the lab test. The tests were done at temperatures below 10 degrees Celsius (50 °F). Mercedes says it is permissible for the BlueTec engine to emit 40 times more NOx when the temperature is less than 10 °C (50 °F). As of April 22, 2016, Mercedes-Benz USA disclosed it is under investigation by the Department of Justice for potential discrepancies over its diesel emissions certifications, according to a Daimler statement. The DOJ effectively told MBUSA to begin an internal investigation "to review its certification and admissions process related to exhaust emissions in the United States," Daimler said. The company "has agreed to cooperate fully with the DOJ." In February 2018, German newspaper Bild am Sonntag reported that US authorities investigating Mercedes had discovered that its vehicles are equipped with illegal software to help them pass the United States' stringent emission tests. The claimed defeat devices include a "Bit 15" mode to switch off emissions controls after 16 miles of driving (the length of an official U.S. emissions test), and "Slipguard", which tries to directly determine if the car is being tested based on speed and acceleration profiles. Bild am Sonntag said it found emails from Daimler engineers questioning whether those functions were legal. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Balance of system**
Balance of system:
The balance of system (BOS) encompasses all components of a photovoltaic system other than the photovoltaic panels. This includes wiring, switches, a mounting system, one or many solar inverters, a battery bank and battery charger.
Other optional components include renewable energy credit revenue-grade meter, maximum power point tracker (MPPT), GPS solar tracker, Energy management software, solar concentrators, solar irradiance sensors, anemometer, or task-specific accessories designed to meet specialized requirements for a system owner. In addition, concentrated photovoltaics systems require optical lenses or mirrors and sometimes a cooling system.
In addition, ground-mounted, large photovoltaic power station require equipment and facilities, such as grid connections, office facilities, and concrete. Land is sometimes included as part of the BOS as well.
Balance of Plant:
A similar term to Balance of System is “Balance of plant (BOP)” which is generally used in the context of power engineering and applies to all the supporting components and systems of the power plant which are needed to produce the energy. These may include suitable transformers, inverters, cabling, switching and control equipment, protection equipment, power conditioners, support structures, etc., depending on the type of plant.
Cost of BOS:
The cost of the balance of system includes the cost of the hardware (and software, if applicable), labor, permitting, interconnection and inspection (PII) fees, and any other fees that may apply. For large commercial solar systems, the cost of BOS may also include the cost of land and buildings. The cost of BOS can be about two thirds of the total system cost.
Downward Trend:
While the cost of solar panels is coming down appreciably, the cost of BOS is not showing the same rate of decline. This is understandable because most of the development effort has gone into solar cell technology, which is still evolving and improving, so its costs are being reduced quickly. The balance of system consists mostly of items that are not specific to solar technology. For example, the mounting structures are conventional and the technology may already be mature, benefiting little from further innovation and research. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Spatial gradient**
Spatial gradient:
A spatial gradient is a gradient whose components are spatial derivatives, i.e., rate of change of a given scalar physical quantity with respect to the position coordinates. Homogeneous regions have spatial gradient vector norm equal to zero.
When evaluated over vertical position (altitude or depth), it is called vertical gradient; the remainder is called horizontal gradient, the vector projection of the full gradient onto the horizontal plane.
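Written out in LaTeX (assuming Cartesian coordinates with x and y spanning the horizontal plane and z the vertical, which is a notational choice rather than part of the original text), the decomposition of the gradient of a scalar field φ into horizontal and vertical parts is:

```latex
\nabla \varphi = \left( \frac{\partial \varphi}{\partial x},\ \frac{\partial \varphi}{\partial y},\ \frac{\partial \varphi}{\partial z} \right),
\qquad
\nabla_{H}\varphi = \left( \frac{\partial \varphi}{\partial x},\ \frac{\partial \varphi}{\partial y},\ 0 \right),
\qquad
\frac{\partial \varphi}{\partial z}\,\hat{\mathbf{z}} = \nabla\varphi - \nabla_{H}\varphi .
```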
Examples: In biology, the concentration gradient (the ratio of solute concentration between two adjoining regions) and the potential gradient (the difference in electric charge between two adjoining regions); in fluid dynamics and earth science, the density gradient, pressure gradient, temperature gradient, geothermal gradient, sound speed gradient, wind gradient, and lapse rate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**MVA85A**
MVA85A:
MVA85A (modified vaccinia Ankara 85A) is a vaccine against tuberculosis developed by researchers led by Professor Helen McShane at Oxford University. This vaccine produces higher levels of long-lasting cellular immunity when used together with the older TB vaccine BCG. Phase I clinical trials were completed and then phase II clinical trials took place in South Africa. Efficacy trials ran in parallel from 2009 to 2019. Results released in February 2013 were described as "disappointing", showing only a statistically insignificant prevention rate in infants. Results published in 2015 cast doubt on the efficacy of the vaccine. In 2018, a BMJ investigation raised concerns about the ethics of an efficacy trial in South African infants, particularly because of results from earlier animal trials such as a study with macaques at Porton Down. One response argued that 14 prior human trials showed a safety signal, that regulators were aware of the primate trial and decided to continue, and that three subsequent investigations found no evidence of wrong-doing. Another response by Ian Orme questioned the critique of animal models. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Moai (software)**
Moai (software):
Moai is a development and deployment platform designed for the creation of mobile games on iOS and Android smartphones. The Moai platform consists of Moai SDK, an open source game engine, and Moai Cloud, a cloud platform as a service (PaaS) for the hosting and deployment of game services. Moai developers use Lua, C++ and OpenGL, to build mobile games that span smartphones and cloud. Several commercial games have been built with Moai, including Crimson: Steam Pirates, Invisible, Inc., and Broken Age. Moai integrates third-party game analytics and monetization services such as Apsalar and Tapjoy.
History:
A public beta of Moai was launched in July 2011. The first Moai game to ship was Crimson: Steam Pirates, developed by Jordan Weisman and published by Bungie Aerospace in September 2011. The 1.0 release of Moai was announced in March 2012. As of 2017, the platform is no longer supported.
Notable games:
The following games use Moai.
Crimson: Steam Pirates
Broken Age (Double Fine Adventure)
Spacebase DF-9
The Moron Test 2
Wolf Toss
Lost in Paradise
Invisible, Inc.
The Franz Kafka Videogame
Eastward | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Dust mask**
Dust mask:
A dust mask is a flexible paper pad held over the nose and mouth by elastic or rubber straps for personal comfort against non-toxic nuisance dusts. They are not intended to provide protection from toxic airborne hazards. The European FFP1 mask, the lowest-grade mechanical filter respirator available in the jurisdiction, is also used as a dust mask.
Dust mask:
Dust masks are used in environments with dusts encountered during construction or cleaning activities, such as dusts from drywall, brick, wood, fiberglass, silica (from ceramic or glass production), or sweeping. A dust mask can also be worn in environments with allergens such as tree and grass pollen. A dust mask is also used to prevent the wearer from inhaling dust or sand in a dust storm.
Description:
A dust mask is worn in the same fashion as a filtering facepiece respirator or surgical mask, but it is dangerous to confuse them because each protects against specific airborne dangers. Using the wrong mask for a job can present a significant and possibly deadly danger, because masks with widely varied levels of protection may look similar, including masks that do not protect against dust at all. Poorly fitting masks are also a danger, as they allow material to bypass the mask entirely; a correct fit may not be as critical in masks that are intended to protect against splattering liquids or mists. Dust masks do not protect against chemicals such as vapors and mists, so it is dangerous to confuse dust masks with respirators used as paint masks.
Description:
Dust masks are a cheaper, lighter, and possibly more comfortable alternative to respirators, but do not provide certified respiratory protection, and may be more susceptible to misuse or poor fit. Dust masks and respirators usually do not contact the mouth, and therefore interfere less with speech than cloth masks that do contact the mouth.
Description:
Some dust masks include improvements such as having two straps behind the head (one upper and one lower), having a strip of aluminum on the outside across the bridge of the nose that can be bent for a custom fit, and having a strip of foam rubber on the inside across the bridge of the nose to ensure a better seal even if the aluminum on the outside does not fit.
Description:
Any mask that consistently covers the nose and mouth will reduce the transmission of contagious respiratory diseases. Snugly fitting dust masks generally provide more protection than loose cloth masks, but less protection than respirators.
Regulation:
Some Asian countries have regulations for dust-grade masks intended for everyday civilian use as opposed to occupational use. These include China's GB/T 32610:2016 standard for masks for daily protection. Dust masks have been certified by the United States Bureau of Mines since the 1930s. Since 1970, the Occupational Safety and Health Administration has approved dust masks, called a "filtering facepiece" in NIOSH jargon. A filtering facepiece is considered a type of respirator, and an N95 mask is one example of a filtering facepiece. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**C. F. Palmer, Ltd**
C. F. Palmer, Ltd:
C. F. Palmer, Ltd was an independent manufacturer of scientific instruments, mostly in the field of physiology. Since 1987 it has been a subsidiary of Harvard Apparatus. The company was founded in London in 1891 by the English mechanical engineer and bicycle maker Charles Fielding Palmer (1864-1938). It described itself as making "Research and Students' Apparatus for Physiology, Pharmacology, Psychology, Bacteriology, Phonetics, Botany, etc." It specialized, however, in equipment for the relatively young science of physiology. As a result of good workmanship and excellent contacts with scientists, the company became an important supplier of physiology research equipment in the British Empire until ca. 1950. Palmer manufactured instruments like the kymograph, invented by the German physiologist Carl Ludwig in 1847, the Stromuhr (another design by Ludwig) for measuring the rate of blood flow, and a 'dotting machine', designed by William McDougall to measure and record levels of fatigue. From the 1930s onward, the company catalogue also mentioned equipment for research in psychometrics. At some time (its records were lost) the company became a "Ltd". In the 1960s and 1970s it stuck to mostly electromechanical devices in an increasingly electronic age and it lost some of its importance as an instrument maker. It was renamed PalmerBioscience and in 1987 it was acquired by Harvard Apparatus. Both the Museum of the History of Science in Oxford and the Science Museum in London own instruments by Palmer. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Halimadienyl-diphosphate synthase**
Halimadienyl-diphosphate synthase:
Halimadienyl-diphosphate synthase (EC 5.5.1.16, Rv3377c, halimadienyl diphosphate synthase, tuberculosinol diphosphate synthase, halima-5(6),13-dien-15-yl-diphosphate lyase (cyclizing)) is an enzyme with systematic name halima-5,13-dien-15-yl-diphosphate lyase (decyclizing). This enzyme catalyses the following chemical reaction: geranylgeranyl diphosphate ⇌ tuberculosinyl diphosphate. This enzyme requires Mg2+ for activity. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Germacrene A alcohol dehydrogenase**
Germacrene A alcohol dehydrogenase:
Germacrene A alcohol dehydrogenase (EC 1.1.1.314) is an enzyme with systematic name germacra-1(10),4,11(13)-trien-12-ol:NADP+ oxidoreductase. This enzyme catalyses the following chemical reactions:
(overall reaction) germacra-1(10),4,11(13)-trien-12-ol + 2 NADP+ + H2O ⇌ germacra-1(10),4,11(13)-trien-12-oate + 2 NADPH + 3 H+
(1a) germacra-1(10),4,11(13)-trien-12-ol + NADP+ ⇌ germacra-1(10),4,11(13)-trien-12-al + NADPH + H+
(1b) germacra-1(10),4,11(13)-trien-12-al + NADP+ + H2O ⇌ germacra-1(10),4,11(13)-trien-12-oate + NADPH + 2 H+
In Lactuca sativa this enzyme is a multifunctional enzyme with EC 1.14.13.123. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Oculocerebrocutaneous syndrome**
Oculocerebrocutaneous syndrome:
Oculocerebrocutaneous syndrome is a condition characterized by orbital cysts, microphthalmia, porencephaly, agenesis of the corpus callosum, and facial skin tags.
Presentation:
These include skin lesions (hypoplastic or aplastic skin defects; pedunculated, hamartomatous or nodular skin appendages), eye lesions (cystic microphthalmia), and brain lesions, comprising forebrain anomalies (agenesis of the corpus callosum, enlarged lateral ventricles, interhemispheric cysts, hydrocephalus, polymicrogyria, periventricular nodular heterotopia) and mid-hindbrain malformations (giant dysplastic tectum, absent cerebellar vermis, small cerebellar hemispheres, large posterior fossa fluid collections).
Genetics:
The genetic basis is not understood, but it is suspected that the gene(s) responsible may lie on the X chromosome.
Diagnosis:
The differential diagnosis includes Aicardi syndrome, encephalocraniocutaneous lipomatosis, focal dermal hypoplasia, and the oculo-auriculo-vertebral spectrum.
Epidemiology:
This is a rare condition with only 26 cases diagnosed by 2005. There is a marked male preponderance. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Wetted area**
Wetted area:
In fluid dynamics, the wetted area is the surface area that interacts with the working fluid or gas.
In maritime use, the wetted area is the area of the watercraft's hull that is immersed in water. This has a direct relationship to the overall hydrodynamic drag of the ship or submarine.
In aeronautics, the wetted area is the area that is in contact with the external airflow. This has a direct relationship to the overall aerodynamic drag of the aircraft. See also: Wetted aspect ratio.
In motorsport, such as Formula One, the term wetted surfaces is used to refer to the bodywork, wings and the radiator, which are in direct contact with the airflow, similarly to the term's use in aeronautics. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
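As a rough numerical illustration of why wetted area matters for drag (my own sketch; the coefficient and flow conditions below are assumptions, not values from the article), skin-friction drag is commonly estimated as the product of an average skin-friction coefficient, the dynamic pressure, and the wetted area:

```java
// Minimal sketch (assumed values): skin-friction drag D_f = C_f * q * S_wet,
// where q = 0.5 * rho * V^2 is the dynamic pressure and S_wet the wetted area.
public class WettedAreaDragSketch {
    public static void main(String[] args) {
        double rho = 1.225;     // air density, kg/m^3 (sea level, assumed)
        double v = 60.0;        // airspeed, m/s (assumed)
        double cf = 0.003;      // average skin-friction coefficient (assumed)
        double sWet = 40.0;     // wetted area, m^2 (assumed)

        double q = 0.5 * rho * v * v;           // dynamic pressure, Pa
        double frictionDrag = cf * q * sWet;    // drag force, N
        System.out.printf("Skin-friction drag ~ %.1f N%n", frictionDrag);
    }
}
```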
**Assistive cane**
Assistive cane:
An assistive cane is a walking stick used as a crutch or mobility aid. A cane can help redistribute weight from a lower leg that is weak or painful, improve stability by increasing the base of support, and provide tactile information about the ground to improve balance. In the US, ten percent of adults older than 65 years use a cane, and 4.6 percent use walkers. In contrast to crutches, canes are generally lighter, but, because they transfer the load through the user's unsupported wrist, are unable to offload equal loads from the legs.
Assistive cane:
Another type of crutch is the walker, a frame held in front of the user and which the user leans on during movement. Walkers are more stable due to their increased area of ground contact, but are larger and less wieldy and, like canes, pass the full load through the user's wrists in most cases.
Parts of medical canes:
The basic cane has four parts. These parts vary depending on the design of the cane and the needs of the user.
Parts of medical canes:
Handle. The handle of a cane is extremely important to the user. Many different styles exist; the most common traditional designs are the Tourist (or crook) handle, the Fritz handle, and the Derby handle. Ergonomically shaped handles have become increasingly common for canes intended for medical use, both increasing the comfort of the grip for the user (particularly important for users with disabilities that also affect their hands or wrists) and better transmitting the load from the user's hand and arm into the shaft.
Parts of medical canes:
Collar. The collar of a cane may be only a decorative addition made for stylistic reasons, or may form the structural interface between shaft and handle.
Shaft. The shaft of the cane transmits the load from the handle to the ferrule and may be constructed from carbon fiber polymer, metal, composites, or traditional wood.
Parts of medical canes:
Ferrule. The tip of a cane provides traction and added support when the cane is used at an angle. Many kinds of ferrules exist, but most common is a simple, ridged rubber stopper. Users can easily replace a ferrule with one that better suits their individual needs. Modern canes may differ from the traditional fixed structure. For instance, a quad cane has a base attached to the shaft that provides increased stability by having four ferrules, and an adjustable cane may have two shaft segments telescoping one inside the other to allow adjustment for multiple sizes.
Parts of medical canes:
All cane users who need a walking cane for medical reasons should consult a medical professional before choosing the style that best suits them. It is particularly important that the cane be the proper height for the individual user.
Types of canes:
White cane: specifically designed for assisting the visually impaired, these are longer and thinner and allow the user to "feel" the path ahead. They also alert others, such as motorists, that the user is blind and should be regarded with caution. In the UK, red banding on a white cane indicates a deaf-blind user.
Folding cane: has several joints, generally linked by an internal elastic cord, enabling them to be folded into a shorter length when not in use.
Forearm cane: a regular or offset cane with additional forearm support, enabling increased stability and load shifted from the wrist to the forearm.
Quad cane: has four ferrules at the base, enabling them to stand freely, and offering a more firm base for standing.
Tripod cane: opens in a tripod fashion. Often available with an attached seat.
Adjustable cane: features two or more shaft pieces for a telescoping effect that allows the user to lengthen or shorten their walking cane to fit to size. This feature can be combined with other variations.
Shillelagh: a cane made of blackthorn wood, originating in Ireland and still a recognized symbol thereof.
Accessories:
The most common accessory is a hand strap, which prevents loss of the stick should the hand release its grip. These straps are usually threaded through a hole drilled into the stick rather than tied around it.
A clip-on frame or similar device can be used to hold a stick against the top of a table.
In cold climates, a metallic cleat may be added to the foot of the cane. This dramatically increases traction on ice. The device is usually designed so it can easily rotate to the side to prevent damage to indoor flooring.
Different handles are available to better match the size of the user's hands and their medical needs.
Rubber ferrules give extra traction on most surfaces.
Handedness:
Canes are generally held in the hand opposite to the side of the injury or weakness. This allows the cane to be used for stability in a way that lets the user shift much of their weight away from their weaker side and onto the cane, and it prevents the person's center of balance from swaying from side to side as they walk. It also allows for fluid movement that better matches walking, as the hand on the side opposite a given leg generally sways forward in normal human locomotion. Personal preference, or a need to hold the cane in the dominant hand, means some cane users choose to hold the cane on the same side as the affected leg. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Advanced Digital Recording**
Advanced Digital Recording:
Advanced Digital Recording (ADR) is a magnetic tape data storage format developed by OnStream from 1998 to 2003. Since the demise of OnStream, the format has been orphaned. ADR is an 8-track, linear tape format.
Compatibility:
The drive models for ADR 120 GB tapes can use both the ADR 60 GB and the ADR 120 GB tapes, while the 50 GB drives can use both ADR 30 GB and ADR 50 GB tapes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mora (linguistics)**
Mora (linguistics):
A mora (plural morae or moras; often symbolized μ) is a basic timing unit in the phonology of some spoken languages, equal to or shorter than a syllable. For example, a short syllable such as ba consists of one mora (monomoraic), while a long syllable such as baa consists of two (bimoraic); extra-long syllables with three moras (trimoraic) are relatively rare. Such metrics are also referred to as syllable weight.
Mora (linguistics):
The term comes from the Latin word for 'linger, delay', which was also used to translate the Greek word χρόνος : chrónos ('time') in its metrical sense.
Formation:
The general principles for assigning moras to segments are as follows (see Hayes 1989 and Hyman 1985 for detailed discussion): A syllable onset (the first consonant or consonants of the syllable) does not represent any mora.
The syllable nucleus represents one mora in the case of a short vowel, and two morae in the case of a long vowel or diphthong. Consonants serving as syllable nuclei also represent one mora if short and two if long. Slovak is an example of a language that has both long and short consonantal nuclei.
Formation:
In some languages (for example, Latin and Japanese), the coda represents one mora, and in others (for example, Irish) it does not. In English, the codas of stressed syllables represent a mora (thus, the word cat is bimoraic), but for unstressed syllables it is not clear whether this is true (the second syllable of the word rabbit might be monomoraic).
Formation:
In some languages, a syllable with a long vowel or diphthong in the nucleus and one or more consonants in the coda is said to be trimoraic (see pluti). In general, monomoraic syllables are called "light syllables", bimoraic syllables are called "heavy syllables", and trimoraic syllables (in languages that have them) are called "superheavy syllables". Some languages, such as Old English and present-day English, can have syllables with up to four morae. A prosodic stress system in which moraically heavy syllables are assigned stress is said to have the property of quantity sensitivity.
Languages:
Ancient Greek: For the purpose of determining accent in Ancient Greek, short vowels have one mora, and long vowels and diphthongs have two morae. Thus long ē (eta: η) can be understood as a sequence of two short vowels: ee. Ancient Greek pitch accent is placed on only one mora in a word. An acute (έ, ή) represents high pitch on the only mora of a short vowel or the last mora of a long vowel (é, eé). A circumflex (ῆ) represents high pitch on the first mora of a long vowel (ée).
Languages:
English: In Old English, short diphthongs and monophthongs were monomoraic, long diphthongs and monophthongs were bimoraic, consonants ending a syllable were each one mora, and geminate consonants added a mora to the preceding syllable. In Modern English, the rules are similar, except that all diphthongs are bimoraic. In English, and probably also in Old English, syllables cannot have more than four morae, with loss of sounds occurring if a syllable would have more than 4 otherwise. From the Old English period through to today, all content words must be at least two morae long.
Languages:
Gilbertese: Gilbertese, an Austronesian language spoken mainly in Kiribati, is a trimoraic language. The typical foot in Gilbertese contains three morae. These trimoraic constituents are units of stress in Gilbertese. These "ternary metrical constituents of the sort found in Gilbertese are quite rare cross-linguistically, and as far as we know, Gilbertese is the only language in the world reported to have a ternary constraint on prosodic word size." Hawaiian: In Hawaiian, both syllables and morae are important. Stress falls on the penultimate mora, though in words long enough to have two stresses, only the final stress is predictable. However, although a diphthong, such as oi, consists of two morae, stress may fall only on the first, a restriction not found with other vowel sequences such as io. That is, there is a distinction between oi, a bimoraic syllable, and io, which is two syllables.
Languages:
Japanese: Most dialects of Japanese, including the standard, use morae, known in Japanese as haku (拍) or mōra (モーラ), rather than syllables, as the basis of the sound system. Writing Japanese in kana (hiragana and katakana) is said by those scholars who use the term mora to demonstrate a moraic system of writing. For example, in the two-syllable word mōra, the ō is a long vowel and counts as two morae. The word is written in three symbols, モーラ, corresponding here to mo-o-ra, each containing one mora. Therefore, scholars argue that the 5/7/5 pattern of the haiku in modern Japanese is of morae rather than syllables.
Languages:
The Japanese syllable-final n is also said to be moraic, as is the first part of a geminate consonant. For example, the Japanese name for Japan, 日本, has two different pronunciations, one with three morae (Nihon) and one with four (Nippon). In the hiragana spelling, the three morae of Ni-ho-n are represented by three characters (にほん), and the four morae of Ni-p-po-n need four characters to be written out as にっぽん. The latter can also be analysed as Ni-Q-po-n, with the Q representing a full mora of silence. In this analysis, っ (the sokuon) indicates a one-mora period of silence.
Languages:
Similarly, the names Tōkyō (To-u-kyo-u, とうきょう), Ōsaka (O-o-sa-ka, おおさか), and Nagasaki (Na-ga-sa-ki, ながさき) all have four morae, even though, on this analysis, they can be said to have two, three and four syllables, respectively. The number of morae in a word is not always equal to the number of graphemes when written in kana; for example, even though it has four morae, the Japanese name for Tōkyō (とうきょう) is written with five graphemes, because one of these graphemes (ょ) represents a yōon, a feature of the Japanese writing system that indicates that the preceding consonant is palatalized.
Languages:
The "contracted sound" (拗音) is represented by the three small kana for ya (ゃ), yu (ゅ), yo (ょ). These do not represent a mora by themselves and attach to other kana; all the rest of the graphemes represent a mōra on their own.
Languages:
There is a unique set of morae known as "special mora" (特殊拍) which cannot be pronounced by themselves but still count as one mora whenever present. These consist of the "nasal sound" (撥音) represented by the kana for n (ん), the "geminate consonant" (促音) represented by the small tsu (っ), the "long sound" (長音) represented by the long vowel symbol (ー) or a single vowel which extends the sound of the previous mora (びょ「う」いん), and the "diphthong" (二重母音) represented by the second vowel of two consecutive vowels (ばあ「い」). This set also has the peculiarity that the drop in pitch of a word (the so-called "downstep") cannot fall on any of these special morae under any conditions, which is especially useful for learners of the language trying to learn the accent of words.
Languages:
The above rule does not apply to ん (the nasal n), which for the Japanese does not qualify as special. The drop in pitch can fall on ん, for example in the word 日本 (にほん / nihon), where に starts low, the pitch raises and peaks at ほ, then drops at ん and continues low through the following particle if it is present.
Languages:
Luganda: In Luganda, a short vowel constitutes one mora while a long vowel constitutes two morae. A simple consonant has no morae, and a doubled or prenasalised consonant has one. No syllable may contain more than three morae. The tone system in Luganda is based on morae. See Luganda tones and Luganda grammar.
Languages:
Sanskrit: In Sanskrit, the mora is expressed as the mātrā. For example, the short vowel a (pronounced like a schwa) is assigned a value of one mātrā, the long vowel ā is assigned a value of two mātrās, and the compound vowel (diphthong) ai (which has either two simple short vowels, a+i, or one long and one short vowel, ā+i) is assigned a value of two mātrās. In addition, there is plutham (trimoraic) and dīrgha plutham ('long plutham' = quadrimoraic).
Languages:
Sanskrit prosody and metrics have a deep history of taking into account moraic weight, as it were, rather than straight syllables, divided into laghu (लघु, 'light') and dīrgha/guru (दीर्घ/गुरु, 'heavy') feet based on how many morae can be isolated in each word. Thus, for example, the word kartṛ (कर्तृ), meaning 'agent' or 'doer', does not contain simply two syllabic units, but contains rather, in order, a dīrgha/guru foot and a laghu foot. The reason is that the conjoined consonants rt render the normally light ka syllable heavy. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OneGeology**
OneGeology:
OneGeology is an international collaborative project in the field of geology supported by 118 countries, UNESCO, and major global geoscience bodies. It is an International Year of Planet Earth flagship initiative that aims to enable online access to a dynamic digital geological map of the world for everyone. The project uses the GeoSciML markup language and initially targets a scale of approximately 1:1 million. Downstream uses could be to identify areas suitable for mining or oil and gas exploration, or areas at risk from landslides or earthquakes, to help understanding of formations which store groundwater for drinking or irrigation, and to help locate porous rocks suitable for burying emissions of greenhouse gases. The project portal was launched on August 6, 2008 at the 33rd International Geological Congress (IGC) in Oslo, Norway. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Internet metaphors**
Internet metaphors:
Internet metaphors provide users and researchers of the Internet a structure for understanding and communicating its various functions, uses, and experiences. An advantage of employing metaphors is that they permit individuals to visualize an abstract concept or phenomenon with which they have limited experience by comparing it with a concrete, well-understood concept such as physical movement through space. Metaphors to describe the Internet have been utilized since its creation and developed out of the need for the Internet to be understood by everyone when the goals and parameters of the Internet were still unclear. Metaphors helped to overcome the problems of the invisibility and intangibility of the Internet's infrastructure and to fill linguistic gaps where no literal expressions existed.
Internet metaphors:
"Highways, webs, clouds, matrices, frontiers, railroads, tidal waves, libraries, shopping malls, and village squares are all examples of metaphors that have been used in discussions of the Internet." Over time these metaphors have become embedded in cultural communications, subconsciously shaping the cognitive frameworks and perceptions of users who guide the Internet's future development. Popular metaphors may also reflect the intentions of Internet designers or the views of government officials. Internet researchers tend to agree that popular metaphors should be re-examined often to determine if they accurately reflect the realities of the Internet, but many disagree on which metaphors are worth keeping and which ones should be left behind.
Overview:
Internet metaphors guide future action and perception of the Internet's capabilities on an individual and societal level. Internet metaphors are contestable and sometimes may present political, educational, and cognitive issues. Tensions between producer and user, commercial and non-commercial interests, and uncertainty regarding privacy all influence the shape these metaphors take. Common Internet metaphors such as the information superhighway are often criticized for failing to adequately reflect the reality of the Internet, as they emphasize the speed of information transmission over the communal and relationship-building aspects of the Internet. Internet researchers from a variety of disciplines are engaged in the analysis of metaphors across many domains in order to reveal their impact on user perception and determine which metaphors are best suited for conceptualizing the Internet. Results of this research have become the focus of a popular debate on which metaphors should be applied in political, educational, and commercial settings, as well as which aspects of the Internet remain unaccounted for with current metaphors, limiting the scope of users' understanding. Metaphors of the Internet often reveal the intentions of designers and industry spokespeople. "For instance, those who use metaphors of consumption and shopping malls will devote resources to developing secure exchange mechanisms. Broadcasting metaphors carry with them assumptions about the nature of interactions between audiences and content providers that are more passive than those suggested by interactive game metaphors and applications. Computer security experts deploy metaphors that invoke fear, anxiety, and apocalyptic threat" (Wyatt, 2004, p. 244). The extent to which the Internet is understood across individuals and groups determines their ability to navigate and build Web sites and social networks, attend online school, send e-mail, and perform a variety of other functions. Internet metaphors provide a comprehensive picture of the Internet as a whole as well as describe and explain the various tools, purposes, and protocols that regulate the use of these communication technologies. Without the use of metaphors the concept of the Internet is abstract and its infrastructure difficult to comprehend. When it was introduced, the Internet created a linguistic gap as no literal expressions existed to define its functions and properties. Internet metaphors arose out of this predicament so that it could be adequately described and explained to the public. Essentially all language now used to communicate about the Internet is of a metaphorical nature, although users are often unaware of this reality because it is embedded in a cultural context that is widely accepted. There are several types of metaphors that serve various purposes and can range from describing the nature of online relationships, to modeling the Internet visually, to the specific functions of the Internet as a tool. Each metaphor has implications for the experience and understanding of the Internet by its users and tends to emphasize some aspects of the Internet over others. Some metaphors emphasize space (Matlock, Castro, Fleming, Gann, & Maglio, 2014).
Popular culture:
Common recurring themes regarding the Internet appear in popular media and reflect pervasive cultural attitudes and perceptions. Although other models and constructed metaphors of the Internet found in scholarly research and theoretical frameworks may be more accurate sources on the effects of the Internet, mass media messages in popular culture are more likely to influence how people think about and interact with the Internet. The very first metaphor to describe the Internet was the World Wide Web, proposed in 1989. However, uncertainty surrounding the structure and properties of the Internet was apparent in the newspapers of the 1990s, which presented a vast array of contradicting visual models to explain the Internet. Spatial constructs were utilized to make the Internet appear as a tangible entity placed within a familiar geographical context. A popular metaphor adopted around the same time was cyberspace, coined by William Gibson in his novel Neuromancer to describe the world of computers and the society that gathers around them. Howard Rheingold, an Internet enthusiast of the 1990s, propagated the metaphor of virtual communities and offered a vivid description of the Internet as "...a place for conversation or publication, like a giant coffee-house with a thousand rooms; it is also a world-wide digital version of the Speaker's Corner in London's Hyde Park, an unedited collection of letters to the editor, a floating flea market, a huge vanity publisher, and a collection of every odd-special interest group in the world" (Rheingold 1993, p. 130). In 1991, Al Gore's choice to use the information superhighway as a metaphor shifted perceptions of the Internet from a communal enterprise to an economic model that emphasized the speed of information transmission. While this metaphor can still be found in popular culture, it has generally been dropped in favor of other metaphors due to its limited interpretation of other aspects of the Internet such as social networks. The most common types of metaphors in usage today relate to either social or functional aspects of the Internet or representations of its infrastructure through visual metaphors and models.
Popular culture:
Social metaphors: Internet metaphors frequently arise from social exchanges and processes that occur online and incorporate common terms that describe offline social activities and realities. These metaphors often point to the fundamental elements that make up social interactions, even though online interactions differ in significant ways from face-to-face communication. Therefore, social metaphors tend to communicate more about the values of society than about the technology of the Internet itself. Metaphors such as the electronic neighborhood and virtual community point to ways in which individuals connect to others and build relationships by joining a social network. Global village is another metaphor that evokes the imagery of closeness and interconnectedness that might be found in a small village, but is applied to the worldwide community of Internet users. However, the global village metaphor has been criticized for suggesting that the entire world is connected by the Internet, as the continued existence of social divides prevents many individuals from accessing the Internet. The electronic frontier metaphor conceptualizes the Internet as a vast unexplored territory, a source of new resources, and a place to forge new social and business connections. Similar to the American ideology of the Western Frontier, the electronic frontier invokes the image of a better future to come through new opportunities afforded by the Internet. The Electronic Frontier Foundation is a non-profit digital rights group that adopted the use of this metaphor to denote its dedication to the protection of personal freedoms and fair use within the digital landscape. Social metaphors and their pervasive influence indicate the increasing importance placed on social interaction on the Internet.
Popular culture:
Functional metaphors: Functional metaphors of the Internet shape our understanding of the medium itself and give us clues as to how we should actually use the Internet and interpret its infrastructure for design and policy making. These exist at the level of the Internet as a whole, at the level of a website, and at the level of individual pages. The majority of these types of metaphors are based on the concept of various spaces and physical places; therefore, most are considered spatial metaphors. However, this aspect should not be considered the only defining feature of a functional metaphor, as social metaphors are often spatial in nature.
Popular culture:
Cyberspace is the most widely used spatial metaphor of the Internet and the implications of its use can be seen in the Oxford English Dictionary definition, which denotes cyberspace as a space within whose boundaries digital communications take place. The implications of this spatial metaphor in discourse on law can be seen in instances where the application of traditional laws governing real property are applied to Internet spaces. However, arguments against this type of ruling have claimed that the Internet is a borderless space, which should not be subject to the laws applied to places. Others have argued that the Internet is in fact a real space not sealed from the real world and can be zoned, trespassed upon, or divided up into holdings like real property.
Popular culture:
Other functional metaphors are based on travel within space, such as surfing the Net, which suggests that the Internet is similar to an ocean. Mark McCahill coined 'surfing the internet' in an analogy with browsing a library shelf as an information space. Websites indicate components of a space, which are static and fixed, whereas webpages suggest pages of a book. Similarly, focal points of the Internet structure are called nodes. Home pages, chat rooms, windows, and the idea that one can jump from one page to the next also invoke spatial imagery that guides the functions that users perform on the Internet. Other metaphors refer to the Internet as another dimension beyond typical spaces, such as portals and gateways, which refer to access and communication functions. Firewalls invoke the image of physically blocking incoming information such as viruses and pop-up ads. Designers of computer systems often use spatial metaphors as a way of controlling the complexity of interfaces. Designers create actions, procedures, and concepts of systems based on similar actions, procedures, and concepts of other domains, such as physical spaces, so that they will be familiar to users. In designing hypertext, a system that links topics on a screen to related information, navigational metaphors such as landmarks, routes, and way-finding have often been implemented for users' ease of understanding how hypertext functions.
Popular culture:
Visual metaphors: Visual metaphors are popular in conceptualizing the Internet and are often deployed in commercial promotions through visual media and imagery. The most common visual metaphor is a network of wires with nodes and route lines plotted on a geographically based map. However, maps of Internet infrastructure produced for network marketing are rarely based on actual pathways of wires and cable on the ground, but are instead based on circuit diagrams similar to those seen on subway maps. The globe, or the Earth viewed from space, with network arcs of data flow wrapped around it, is another dominant metaphor for the Internet in Western contexts and is connected with the metaphor of the global village. Many abstract visual metaphors based on organic structures and patterns are found in literature on the Internet's infrastructure. Often, these metaphors are used as a visual shorthand in explanations, as they allow one to refer to the Internet as a definite object without having to explain the intricate details of its functioning. Clouds are the most common of the abstract metaphors employed for this purpose, as in cloud computing, and have been used since the creation of the Internet. Other abstract metaphors of the Internet draw on the fractal branching of trees and leaves and the lattices of coral and webs, while others are based on the aesthetics of astronomy, such as gas nebulas and star clusters. Technical methods such as algorithms are often used to create huge, complex graphs or maps of raw data from networks and the topology of connections. The typical result of this process is a set of visual representations of the Internet that are elaborate and visually striking, resembling organic structures. These artistic, abstract representations of the Internet have been featured in art galleries, sold as wall posters, used on book covers, and claimed by many fans to be a picture of the whole Internet. However, there are no instructions on how these images may be interpreted. The main function of these representations has sometimes been explained as a metaphor for the complexity of the Internet. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Metizolam**
Metizolam:
Metizolam (also known as desmethyletizolam) is a thienotriazolodiazepine that is the demethylated analogue of the closely related etizolam.
Legal status:
Following its sale as a designer drug, metizolam was classified as a controlled substance in Sweden on 26 January 2016. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Interview (journalism)**
Interview (journalism):
A journalistic interview takes the form of a conversation between two or more people: interviewer(s) ask questions to elicit facts or statements from interviewee(s). Interviews are a standard part of journalism and media reporting. In journalism, interviews are one of the most important methods used to collect information, and present views to readers, listeners, or viewers.
History:
Although the question-and-answer interview in journalism dates back to the 1850s, the first known interview that fits the matrix of interview-as-genre has been claimed to be a 1756 interview by Archbishop Timothy Gabashvili (1704–1764), a prominent Georgian religious figure, diplomat, writer and traveler, who interviewed Eugenios Voulgaris (1716–1806), a renowned Greek theologian and Rector of the Orthodox School of Mount Athos.
Publications:
Several publications give prominence to interviews, including: interviews with novelists conducted since 1940 by The Paris Review; interviews with celebrities conducted by Interview magazine, co-founded by Andy Warhol in 1969; and The Rolling Stone Interview, featured in Rolling Stone magazine.
Famous interviews:
1957–1960: The Mike Wallace Interview – 30-minute television program interviews conducted by Mike Wallace
1968: Interviews with Phil Ochs – an interview of folk singer Phil Ochs conducted by Broadside Magazine
1974: Michael Parkinson/Muhammad Ali – television interview of Ali in his prime
1977: The Nixon Interviews – 1977 television interviews by British journalist David Frost of former United States President Richard Nixon
early 1980s: Soviet Interview Project – conducted with Soviet emigrants to the United States
1992: Fellini: I'm a Born Liar – Federico Fellini's last filmed interviews, conducted in 1992 for a 2002 feature documentary
1992: Nevermind It's an Interview – interviews with the band Nirvana recorded in 1992 on the night they appeared on Saturday Night Live
1993: Michael Jackson talks to Oprah Winfrey – this became the fourth most watched event in American television history as well as the most watched interview ever, with an audience of one hundred million.
Famous interviews:
1993: Birthday cake interview – an interview of Dr. John Hewson that contributed to the defeat of his party in the 1993 Australian federal election
2002–2003: Living with Michael Jackson – a 2002–2003 interview with Michael Jackson, later turned into a documentary
2003: February 2003 Saddam Hussein interview – Dan Rather interviewing Saddam Hussein days before the 2003 invasion of Iraq
2008: Sarah Palin interviews with Katie Couric – Katie Couric interviewing Sarah Palin
2020: Chris Dailey interviews Shaquille O'Neal – Chris Dailey interviewing Shaquille O'Neal | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Subterrene**
Subterrene:
A subterrene (Latin: subterrina, Russian: Подземная лодка) is a vehicle that travels underground (through solid rock or soil) much as a submarine travels underwater, either by mechanical drilling, or by melting its way forward. Subterrenes existed first in fiction as mechanical drillers, with real-world thermal designs and examples following in the second half of the 20th century.
Subterrene:
Fictional subterrenes are often depicted as cylindrical in shape with conical drill heads at one or both ends, sometimes with some kind of tank tread for propulsion, and described either as leaving an empty tunnel behind them or as filling the space behind them with mining debris, as with the Thunderbirds Mole. The plausibility of such machines has declined with the advent of real-world tunnel boring machines, which demonstrate the reality of the boring task. Tunnel boring machines themselves are not usually considered to be subterrenes, possibly because they lack the secondary attributes – mobility and independence – that are normally applied to vehicles.
Subterrene:
A real-world, mobile subterrene must work thermally, using very high temperature and immense pressure to melt and push through rock. The front of the machine is equipped with a stationary drill tip which is kept at 700–930 °C (1,300–1,700 °F). The molten rock is pushed around the edges as the vehicle is forced forward, and cools to a glass-like lining of the tunnel. Massive amounts of energy are required to heat the drill head, supplied via nuclear power or electricity. Patents issued in the 1970s indicate that U.S. scientists had planned to use nuclear power to liquefy lithium metal and circulate it to the front of the machine (the drill). An onboard nuclear reactor can permit a truly independent subterrene, but cooling the reactor is a difficult problem. The Soviet Union is purported to have built such a "battle mole", which operated until its onboard reactor failed. Online information presents research that was funded by the United States Government at the Los Alamos Scientific Laboratory (University of California) in Los Alamos, New Mexico, for a project Camelot under the heading Systems and Cost Analysis for a Nuclear Subterrene Tunneling Machine. A patent was subsequently issued under number 3,693,731 on 26 September 1972. The design concepts resemble current technology of the nuclear submarine fleet and existing tunnel boring technology, as used in the Channel Tunnel between England and France, but with the added feature of melting rock for the tunnel wall lining.
Advantages:
In theory, tunnels can be built much more cheaply and quickly because of their reduced complexity, equipment costs and operational overhead.
A smooth glass-lined tunnel wall is created as a result of the process. This can further reduce costs and provides an insulating barrier and basic support structure.
Disadvantages:
These machines inherently use a very large amount of energy.
Unknown safety and performance records. The soil still has to be removed from the tunnel. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**OR10T2**
OR10T2:
Olfactory receptor 10T2 is a protein that in humans is encoded by the OR10T2 gene. Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Acoustic plaster**
Acoustic plaster:
Acoustic plaster is plaster which contains fibres or aggregate so that it absorbs sound. Early plasters contained asbestos, but newer ones consist of a base layer of absorptive substrate panels, which are typically mineral wool, or a non-combustible inorganic blow-glass granulate. A first finishing layer is then applied on top of the substrate panels, and sometimes a second finishing layer is added for greater sound attenuation. Pre-made acoustic panels are more commonly used, but acoustic plaster provides a smooth and seamless appearance, and greater flexibility for readjustment. The drawback is the greater level of skill required in application. Proprietary types of acoustic plaster developed in the 1920s included Macoustic Plaster, Sabinite, Kalite, Wyodak, Old Newark and Sprayo-Flake produced by companies such as US Gypsum.
Basic composition:
Acoustic plasters are aesthetically favorable because they allow for seamless, smooth application, compared to panels that show visible joints. Some acoustic plasters contain aggregate, but better systems incorporate fiber. Acoustic plasters are generally applied at a thickness between 1/16” and 1.5”. Acoustic plasters consist of a base layer of absorptive substrate panels, which are typically mineral wool, or a non-combustible inorganic blow-glass granulate. A first finishing layer is then applied on top of the substrate panels and, when dried, produces a first layer of sound attenuation. A second finishing layer may also be added to create a second system of sound attenuation. If the density of the second layer is less than or equal to that of the first layer, the sound attenuation of the second layer will be greater than that of the first, and vice versa. This allows the flexibility of changing the acoustic properties of the space.
Acoustic properties:
Acoustic plasters can significantly reduce the sound reverberation within a space. Most acoustic plasters have a Noise Reduction Coefficient between 0.5 and 1.00. The Noise Reduction Coefficient (NRC) quantifies the ability of a material to reflect or absorb sound. It is a number between 0 and 1, with 0 being perfectly reflective and 1 being perfectly absorptive. The application of acoustic plasters helps to increase the intelligibility of voice, music, and other sounds in the space. In addition, acoustic plasters are fireproof and LEED rated. However, they can be more fragile, being affected by physical stress and humidity.
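To give a feel for how such absorption shortens reverberation, here is a minimal sketch (my own example; the room figures and coefficients are assumptions) using Sabine's formula RT60 = 0.161 * V / A, where V is the room volume and A the total absorption, and treating the plaster's NRC as an approximate broadband absorption coefficient:

```java
// Minimal sketch (assumed room): comparing reverberation time with a hard ceiling
// versus a ceiling finished with a highly absorptive acoustic plaster.
public class ReverberationSketch {
    // Sabine's formula in metric units: RT60 = 0.161 * V / A.
    static double rt60(double volumeM3, double absorptionM2Sabins) {
        return 0.161 * volumeM3 / absorptionM2Sabins;
    }

    public static void main(String[] args) {
        double volume = 1000.0;         // room volume, m^3 (assumed)
        double ceilingArea = 200.0;     // ceiling area, m^2 (assumed)
        double otherAbsorption = 30.0;  // walls, floor, furnishings, m^2 sabins (assumed)

        double bare    = rt60(volume, otherAbsorption + ceilingArea * 0.05); // hard plaster, alpha ~ 0.05
        double treated = rt60(volume, otherAbsorption + ceilingArea * 0.80); // acoustic plaster, NRC ~ 0.80

        System.out.printf("RT60, bare ceiling:     %.2f s%n", bare);    // ~4.0 s
        System.out.printf("RT60, acoustic plaster: %.2f s%n", treated); // ~0.85 s
    }
}
```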
Advantage:
Compared to acoustic plaster, acoustic panels are the most commonly used material for controlling the sound environment of a space. Acoustic panels are often made of a mineral wool composition that is very absorbent of sound. Although acoustic panels are common in basements or recreational areas, they are seldom used in living spaces for aesthetic reasons. Instead, conventional plaster or drywall systems are more frequently used in homes and other environments where interior aesthetics is a more important consideration, but these are not ideal in sound absorption. Limitations of acoustic panels or conventional drywall systems also affect the flexibility of room configuration and uses. Altering the acoustic properties to address changing room functions would require changing the entire acoustic system, which is costly and time-consuming. In contrast, acoustic plasters provide a smooth and seamless appearance and allow greater flexibility for readjustment.
Application:
Despite its advantages, acoustic plaster requires a high level of artisan skill in order to obtain the rated sound absorption coefficients. The proportions and recommended mixing time of the plaster must be followed strictly in order to achieve the desired result. To ensure a seamless surface, it is recommended to start with a ceiling that is perfectly level. The absorptive substrate panels are then attached, with their seams filled and sanded smooth. Layers of plaster coating are then applied to achieve a seamless, smooth surface. Acoustic plasters may be worked to produce different surface textures, but this must be done promptly after application. Different mounting styles for acoustic plasters can also affect the acoustic performance of the system. These mounting types include direct to substrate, suspended or direct to framing, or a plaster-only system that can be sprayed directly onto the substrate without the placement of any acoustical boards. Control joints may also be built into the system to prevent cracks within the plaster. Acoustic plaster is used in the construction of rooms which require good acoustic qualities, such as auditoriums and libraries. Proprietary types of acoustic plaster developed in the 1920s included Macoustic Plaster, Sabinite, Kalite, Wyodak, Old Newark and Sprayo-Flake, produced by companies such as US Gypsum. These superseded felts and quilts as a common preference of architects, but were difficult to apply and so were superseded in turn by acoustic tiles.
Examples:
Institute for Contemporary Art at Virginia Commonwealth University: The Institute for Contemporary Art is a non-collecting contemporary art institution designed by Steven Holl Architects and located on the Virginia Commonwealth University campus in Richmond. The design by Steven Holl Architects emphasizes the fluidity of interior and exterior space and fosters the connection between technology and natural resources. The design, however, creates reverberant sounds that disturb the experience within the museum. Acoustic plaster was used as a remedy to address the sound environment without compromising the design. The application of acoustic plaster significantly reduced the sound reverberation, especially in the 33-foot tall central forum, where echoing would otherwise occur due to the high ceiling.
Examples:
Auburn University Jule Collins Smith Museum of Fine Art: The Jule Collins Smith Museum of Fine Art is located at Auburn University and features a collection of 20,000 pieces of art. In the museum, originally constructed in 2003, long reverberation times made conversations within the space nearly unintelligible, so the building's acoustics needed significant upgrades. Acoustic plaster was introduced to solve the acoustic issues. It significantly reduced the reverberation time, added a tranquil quality, and brought a more comforting experience to the space.
Historic problem:
Starting in the 1920s, asbestos became a prevailing material to replace animal hair in plaster mixtures. Due to its sound-absorptive and lightweight qualities, asbestos was also commonly used in the composition of acoustic plasters. The application of this type of acoustic plaster to a ceiling is often known as a "popcorn ceiling" due to its texture. However, asbestos introduced health hazards, both for the users of the space and especially for the workers installing the plaster. This became a major problem of early acoustic plasters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Field (computer science)**
Field (computer science):
In computer science, data that has several parts, known as a record, can be divided into fields (data fields). Relational databases arrange data as sets of database records, so-called rows. Each record consists of several fields; the fields of all records form the columns.
Field (computer science):
Examples of fields: name, gender, hair colour. In object-oriented programming, a field (also called data member or member variable) is a particular piece of data encapsulated within a class or object. In the case of a regular field (also called instance variable), for each instance of the object there is an instance variable: for example, an Employee class has a Name field and there is one distinct name per employee. A static field (also called class variable) is one variable, which is shared by all instances. Fields are abstracted by properties, which allow them to be read and written as if they were fields, but these can be translated to getter and setter method calls.
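A minimal sketch of these distinctions (my own example, loosely following the Employee illustration above) with an instance field, a static field shared by all instances, and a getter/setter pair standing in for a property:

```java
// Sketch of instance fields, static fields, and property-style accessors.
public class Employee {
    private String name;               // instance field: one distinct name per employee
    private static int employeeCount;  // static (class) field: a single value shared by all instances

    public Employee(String name) {
        this.name = name;
        employeeCount++;
    }

    // The field is abstracted by accessor methods, so callers read and write it
    // as if it were a property.
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static int getEmployeeCount() { return employeeCount; }
}
```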
Fixed length:
Fields that contain a fixed number of bits are known as fixed length fields. A four byte field for example may contain a 31 bit binary integer plus a sign bit (32 bits in all). A 30 byte name field may contain a person's name typically padded with blanks at the end.
The disadvantage of using fixed length fields is that some part of the field may be wasted, yet space is still required for the maximum length case. Also, where fields are omitted, padding for the missing fields is still required to maintain fixed start positions within a record.
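A minimal sketch of such a fixed-length layout (my own example; the 4-byte integer and 30-byte blank-padded name come from the text above, everything else is assumed):

```java
// Sketch of a fixed-length record: a 4-byte integer followed by a 30-byte
// blank-padded name, so every record is exactly 34 bytes and fields start at
// fixed offsets (0 and 4).
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FixedLengthRecord {
    static byte[] encode(int id, String name) {
        ByteBuffer buf = ByteBuffer.allocate(34);
        buf.putInt(id);                                   // 31-bit magnitude plus sign bit
        byte[] padded = String.format("%-30s", name)      // pad with blanks to 30 characters
                              .getBytes(StandardCharsets.US_ASCII);
        buf.put(padded, 0, 30);                           // truncate if the name is longer
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] record = encode(42, "Ada Lovelace");
        System.out.println(record.length);                // always 34, regardless of name length
    }
}
```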
Variable length:
A variable length field is not always the same physical size. Such fields are nearly always used for text fields that can be large, or fields that vary greatly in length. For example, a bibliographical database like PubMed has many small fields such as publication date and author name, but also has abstracts, which vary greatly in length. Reserving a fixed-length field of some length would be inefficient because it would enforce a maximum length on abstracts, and because space would be wasted in most records (particularly if many articles lacked abstracts entirely).
Variable length:
Database implementations commonly store varying-length fields in special ways, in order to make all the records of a given type have a uniform small size. Doing so can help performance.
On the other hand, data in serialized forms such as stored in typical file systems, transmitted across networks, and so on usually uses quite different performance strategies.
The choice depends on factors such as the total size of records, performance characteristics of the storage medium, and the expected patterns of access.
Database implementations typically store variable length fields in ways such as a sequence of characters or bytes, followed by an end-marker that is prohibited within the string itself. This makes it slower to access later fields in the same record because the later fields are not always at the same physical distance from the start of the record.
a pointer to data in some other location, such as a URI, a file offset (and perhaps length), or a key identifying a record in some special place. This typically speeds up processes that do not need the contents of the variable length fields, but slows processes that do.
Variable length:
a length prefix followed by the specified number of characters or bytes. This avoids searches for an end-marker as in the first method, and avoids the loss of locality of reference as in the second method. On the other hand, it imposes a maximum length: the biggest number that can be represented using the (generally fixed length) prefix. In addition, records still vary in length, and must be traversed in order to reach later fields. If a varying-length field is often empty, additional optimizations come into play.
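A minimal sketch of the length-prefix scheme described above (my own example, using a 2-byte prefix as an assumption, which limits a field to 65535 bytes):

```java
// Sketch of a length-prefixed variable-length field: a 2-byte length followed by
// that many UTF-8 bytes. No end-marker scan is needed, and no pointer to another
// location, but the prefix size caps the maximum field length.
import java.io.*;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedField {
    static void writeField(DataOutputStream out, String value) throws IOException {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        out.writeShort(bytes.length);   // length prefix (maximum 65535 bytes)
        out.write(bytes);
    }

    static String readField(DataInputStream in) throws IOException {
        int length = in.readUnsignedShort();
        byte[] bytes = new byte[length];
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        writeField(new DataOutputStream(buffer), "A variable-length abstract of an article");
        String back = readField(new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())));
        System.out.println(back);
    }
}
```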
Example:
This Person Java class has 3 fields: firstName, lastName, and heightInCentimeters. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
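A minimal sketch of a class matching that description (the field names come from the text; the access modifiers and types are assumptions):

```java
// Sketch of the Person class described above, with three instance fields.
public class Person {
    private String firstName;
    private String lastName;
    private int heightInCentimeters;
}
```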
**Overlap–add method**
Overlap–add method:
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal x[n] with a finite impulse response (FIR) filter h[n]:
y[n] = x[n] ∗ h[n] ≜ ∑_{m=1}^{M} h[m] · x[n − m],   (Eq.1)
where h[m] = 0 for m outside the region [1, M].
This article uses common abstract notations, such as y(t) = x(t) ∗ h(t), or y(t) = H{x(t)}, in which it is understood that the functions should be thought of in their totality, rather than at specific instants t (see Convolution#Notation).
Overlap–add method:
The concept is to divide the problem into multiple convolutions of h[n] with short segments of x[n]:
x_k[n] ≜ x[n + kL] for n = 1, 2, ..., L, and 0 otherwise,
where L is an arbitrary segment length. Then:
x[n] = ∑_k x_k[n − kL],
and y[n] can be written as a sum of short convolutions:
y[n] = (∑_k x_k[n − kL]) ∗ h[n] = ∑_k (x_k[n − kL] ∗ h[n]) = ∑_k y_k[n − kL],
where the linear convolution y_k[n] ≜ x_k[n] ∗ h[n] is zero outside the region [1, L + M − 1]. And for any parameter N ≥ L + M − 1, it is equivalent to the N-point circular convolution of x_k[n] with h[n] in the region [1, N]. The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem:
y_k[n] = IDFT_N( DFT_N(x_k[n]) · DFT_N(h[n]) ),   (Eq.2)
where: DFT_N and IDFT_N refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and L is customarily chosen such that N = L + M − 1 is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency.
Pseudocode:
The following is a pseudocode of the algorithm:

(Overlap-add algorithm for linear convolution)
h = FIR_filter
M = length(h)
Nx = length(x)
N = 8 × 2^ceiling( log2(M) )    (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.)
step_size = N - (M-1)    (L in the text above)
H = DFT(h, N)
position = 0
y(1 : Nx + M-1) = 0
while position + step_size ≤ Nx do
    y(position+(1:N)) = y(position+(1:N)) + IDFT(DFT(x(position+(1:step_size)), N) × H)
    position = position + step_size
end
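For readers who prefer something runnable, here is a small illustrative Java sketch of the same decomposition. To stay self-contained it convolves each length-L segment directly instead of using N-point FFTs, so it demonstrates the segmentation and overlapped addition but not the FFT speedup; the class and method names are ours, not from the article.

```java
import java.util.Arrays;

/** Overlap-add decomposition: split x into length-L blocks, convolve each
 *  block with h, and add the partial results back at their original offsets.
 *  (Each block is convolved directly here; a real implementation would use
 *  N-point FFTs as in the pseudocode above.) */
public class OverlapAddDemo {

    static double[] overlapAdd(double[] x, double[] h, int L) {
        int M = h.length;
        double[] y = new double[x.length + M - 1];
        for (int start = 0; start < x.length; start += L) {
            int len = Math.min(L, x.length - start);          // current segment x_k
            for (int i = 0; i < len; i++) {                   // y_k = x_k * h (linear convolution)
                for (int j = 0; j < M; j++) {
                    y[start + i + j] += x[start + i] * h[j];  // add y_k at offset kL
                }
            }
        }
        return y;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4, 5, 6, 7, 8};
        double[] h = {1, 0.5, 0.25};
        System.out.println(Arrays.toString(overlapAdd(x, h, 4)));  // equals the full linear convolution
    }
}
```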
Efficiency considerations:
When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N (log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT. Each iteration produces N − M + 1 output samples, so the number of complex multiplications per output sample is about: N (log2(N) + 1) / (N − M + 1).   (Eq.3) For example, when M = 201 and N = 1024, Eq.3 equals 13.67, whereas direct evaluation of Eq.1 would require up to 201 complex multiplications per output sample, the worst case being when both x and h are complex-valued. Also note that for any given M, Eq.3 has a minimum with respect to N. Figure 2 is a graph of the values of N that minimize Eq.3 for a range of filter lengths (M).
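As a quick check of those numbers, the following snippet (an illustrative calculation of our own) evaluates Eq.3 for M = 201 at N = 1024 and searches power-of-two block sizes for the minimum, corresponding to a single point on the curve shown in Figure 2.

```java
public class OverlapAddCost {
    // Complex multiplications per output sample (Eq.3): N*(log2(N)+1)/(N-M+1).
    static double costPerSample(int N, int M) {
        return N * (Math.log(N) / Math.log(2) + 1) / (N - M + 1);
    }

    public static void main(String[] args) {
        int M = 201;
        System.out.printf("N=1024: %.2f multiplications per output sample%n",
                costPerSample(1024, M));        // prints about 13.67
        int bestN = 0;
        double best = Double.MAX_VALUE;
        for (int exp = 8; exp <= 20; exp++) {   // search power-of-two FFT sizes
            int N = 1 << exp;
            double c = costPerSample(N, M);
            if (c < best) { best = c; bestN = N; }
        }
        System.out.printf("Minimum cost %.2f at N=%d%n", best, bestN);
    }
}
```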
Efficiency considerations:
Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length Nx samples. The total number of complex multiplications would be: Nx · (log2(Nx) + 1).
Comparatively, the number of complex multiplications required by the pseudocode algorithm is: Nx · (log2(N) + 1) · N / (N − M + 1).
Efficiency considerations:
Hence the cost of the overlap–add method scales almost as O(Nx log2 N), while the cost of a single, large circular convolution is almost O(Nx log2 Nx). The two methods are also compared in Figure 3, created by Matlab simulation. The contours are lines of constant ratio of the times it takes to perform both methods. When the overlap-add method is faster, the ratio exceeds 1, and ratios as high as 3 are seen. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ski rental problem**
Ski rental problem:
In computer science, the ski rental problem is a name given to a class of problems in which there is a choice between continuing to pay a repeating cost or paying a one-time cost which eliminates or reduces the repeating cost.
The problem:
Many online problems have a sub-problem called the rent/buy problem. We need to decide whether to stay in the current state and pay a certain amount of cost per time unit, or switch to another state and pay some fixed large cost with no further payment. Ski rental is one example where the rent/buy is the entire problem. Its basic version is: A person is going skiing for an unknown number of days. Renting skis costs $1 per day and buying skis costs $10. Every day, the person must decide whether to continue renting skis for one more day or buy a pair of skis. If the person knows in advance how many days she will go skiing, she can decide her minimum cost. If she will be skiing for more than 10 days it will be cheaper to buy skis but if she will be skiing for fewer than 10 days it will be cheaper to rent. What should she do when she does not know in advance how many days she will ski?Formally, the problem can be set up as follows. There is a number of days d (unknown) that the person will ski. The goal is to find an algorithm that minimizes the ratio between what the person would pay when d is not known in advance and what the person would pay optimally if the person knew d in advance. The problem is generally analyzed in the worst case, where the algorithm is fixed and then we look at the worst-case performance of the algorithm over all possible d. In particular, no assumptions are made regarding the distribution of d (and it is easy to see that, with knowledge of the distribution of d, a different analysis as well as different solutions would be preferred).
The break-even algorithm:
The break-even algorithm instructs one to rent for 9 days and buy skis on the morning of day 10 if one is still up for skiing. If one stops skiing during the first 9 days, it costs the same as what one would pay if one had known the number of days one would go skiing. If one skis for 10 or more days, one's cost is $19, which is 90% more than what one would pay if one had known the number of days one would go skiing in advance. This is the worst case for the break-even algorithm.
The break-even algorithm:
The break-even algorithm is known to be the best deterministic algorithm for this problem.
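To make the worst-case analysis concrete, the following sketch (our own illustration; the names are not from any reference) computes the cost of the break-even strategy for every possible number of skiing days and reports the worst ratio to the optimal offline cost.

```java
public class BreakEvenSkiRental {
    // Cost of renting until day b-1 and buying on the morning of day b, if skiing d days.
    static int breakEvenCost(int d, int b) {
        return (d < b) ? d : (b - 1) + b;
    }

    public static void main(String[] args) {
        int b = 10;                               // buying costs $10, renting $1 per day
        double worst = 0;
        for (int d = 1; d <= 100; d++) {
            int opt = Math.min(d, b);             // offline optimum: rent every day or buy once
            worst = Math.max(worst, (double) breakEvenCost(d, b) / opt);
        }
        System.out.println("Worst-case competitive ratio: " + worst);  // 1.9 = $19 / $10
    }
}
```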
The randomized algorithm:
A person can flip a coin. If it comes up heads, she buys skis on day 8; otherwise, she buys skis on day 10. This is an instance of a randomized algorithm. The expected cost is at most 80% more than what the person would pay if she had known the number of days she would go skiing, regardless of how many days she skis. In particular, if the person skis for 10 days, her expected cost is 1/2 · (7 + 10) + 1/2 · (9 + 10) = 18 dollars, only 80% excess instead of 90%.
The randomized algorithm:
A randomized algorithm can be understood as a composition of different algorithms, each of which occurs with a given probability. We define the expected competitive ratio on a given instance i as E_i = Σ_j P(ALG_j) · ALG_j(i), where ALG_j(i) is the competitive ratio for instance i, given ALG_j. Consequently, the competitive ratio of a randomized algorithm is given by the worst value of E_i over all given instances. In the case of the coin-flipping ski rental, the randomized algorithm has 2 possible branches: if the coin comes up heads, we buy on day 8; otherwise, we buy on day 10. We may call the branches ALG_heads and ALG_tails, respectively. For i < 8, E_i = P(ALG_heads) · ALG_heads(i) + P(ALG_tails) · ALG_tails(i) = 1/2 · 1 + 1/2 · 1 = 1. For i = 8, E_i = 1/2 · (17/8) + 1/2 · 1 = 1.5625; for i = 9, E_i = 1/2 · (17/9) + 1/2 · 1 ≈ 1.4444; and for i ≥ 10, E_i = 1/2 · (17/10) + 1/2 · (19/10) = 1.8. Therefore, the competitive ratio of the randomized ski-rental coin flipping algorithm is 1.8.
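The same bookkeeping can be done in a few lines of code. The sketch below (ours, not from any reference) enumerates the two branches of the coin-flip strategy and prints the expected competitive ratio E_i for representative instance sizes, reproducing the values above.

```java
public class CoinFlipSkiRental {
    // Cost of renting until day buyDay-1 and buying that morning, when skiing d days.
    static int cost(int d, int buyDay, int b) {
        return (d < buyDay) ? d : (buyDay - 1) + b;
    }

    public static void main(String[] args) {
        int b = 10;
        for (int d : new int[]{7, 8, 9, 10, 20}) {
            int opt = Math.min(d, b);
            double heads = (double) cost(d, 8, b) / opt;   // heads: buy on day 8
            double tails = (double) cost(d, 10, b) / opt;  // tails: buy on day 10
            double expected = 0.5 * heads + 0.5 * tails;
            System.out.printf("d=%d  E_i=%.4f%n", d, expected);  // 1, 1.5625, 1.4444, 1.8, 1.8
        }
    }
}
```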
The randomized algorithm:
The best randomized algorithm against an oblivious adversary is to choose some day i at random according to the following distribution p, rent for i − 1 days and buy skis on the morning of day i if one is still up for skiing. Karlin et al. first presented this algorithm with the distribution p_i = ((b − 1)/b)^(b − i) / (b (1 − (1 − 1/b)^b)) for i ≤ b, and p_i = 0 for i > b, where buying skis costs $b and renting costs $1. Its expected cost is at most e/(e − 1) ≈ 1.58 times what one would pay if one had known the number of days one would go skiing. No randomized algorithm can do better.
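As a sanity check on this distribution (again a sketch of our own, not Karlin et al.'s code), the following computes p_i for b = 10, confirms that the probabilities sum to 1, and evaluates the expected competitive ratio over a range of skiing durations; for finite b the worst value stays below e/(e − 1) and approaches it as b grows.

```java
public class RandomizedSkiRental {
    public static void main(String[] args) {
        int b = 10;                                   // buying costs $b, renting $1 per day
        double[] p = new double[b + 1];               // p[i] = probability of buying on day i
        double norm = b * (1 - Math.pow(1 - 1.0 / b, b));
        double sum = 0;
        for (int i = 1; i <= b; i++) {
            p[i] = Math.pow((b - 1.0) / b, b - i) / norm;
            sum += p[i];
        }
        System.out.println("Sum of p_i: " + sum);     // should be 1.0 up to rounding

        double worst = 0;
        for (int d = 1; d <= 5 * b; d++) {            // adversary lets the skier ski d days
            double expectedCost = 0;
            for (int i = 1; i <= b; i++) {
                // buying on day i costs (i-1)+b if d >= i, otherwise we only rent d days
                expectedCost += p[i] * ((d >= i) ? (i - 1) + b : d);
            }
            worst = Math.max(worst, expectedCost / Math.min(d, b));
        }
        System.out.printf("Worst expected ratio for b=%d: %.4f (e/(e-1) is about 1.582)%n", b, worst);
    }
}
```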
Applications:
Snoopy caching: several caches share the same memory space that is partitioned into blocks. When a cache writes to a block, caches that share the block spend 1 bus cycle to get updated. These caches can invalidate the block to avoid the cost of updating. But there is a penalty of p bus cycles for invalidating a block from a cache that shortly thereafter needs access to it. We can break the write request sequences for several caches into request sequences for two caches. One cache performs a sequence of write operations to the block. The other cache needs to decide whether to get updated by paying 1 bus cycle per operation or invalidate the block by paying p bus cycles for a future read request of its own. The two-cache, one-block snoopy caching problem is just the ski rental problem.
TCP acknowledgment: A stream of packets arrive at a destination and are required by the TCP protocol to be acknowledged upon arrival. However, we can use a single acknowledgment packet to simultaneously acknowledge multiple outstanding packets, thereby reducing the overhead of the acknowledgments. On the other hand, delaying acknowledgments too much can interfere with TCP's congestion control mechanisms, and thus we should not allow the latency between a packet's arrival time and the time at which the acknowledgment is sent to increase too much. Karlin et al. described a one-parameter family of inputs, called the basis inputs, and showed that when restricted to these basis inputs, the TCP acknowledgment problem behaves the same as the ski rental problem.
Total completion time scheduling: We wish to schedule jobs with fixed processing times on m identical machines. The processing time of job j is pj. Each job becomes known to the scheduler on its release time rj. The goal is to minimize the sum of completion times over all jobs. A simplified problem is one single machine with the following input: at time 0, a job with processing time 1 arrives; k jobs with processing time 0 arrive at some unknown time. We need to choose a start time for the first job. Waiting incurs a cost of 1 per time unit, yet starting the first job before the later k jobs may incur an extra cost of k in the worst case. This simplified problem may be viewed as a continuous version of the ski rental problem.
Refactoring versus working with a poor design: In software development, engineers have to choose between the friction and risk of errors of working with an overly complex design and reducing the complexity of the design before making a change. The extra cost of each change with the old design is the "rental" cost; the cost of refactoring is the "buy" cost. "How many times does one work with a poor design before cleaning it up?" is a ski rental problem. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Climate of Europe**
Climate of Europe:
Europe is generally characterized by a temperate climate. Most of Western Europe has an Oceanic climate, in the Köppen climate classification, featuring cool to warm summers and cool winters with frequent overcast skies. Southern Europe has a distinctively Mediterranean climate, which features warm to hot, dry summers and cool to mild winters and frequent sunny skies. Central-eastern Europe is classified as having a humid continental climate, which features warm to hot summers and cold winters.
Climate of Europe:
Parts of the central European plains have a hybrid oceanic/continental climate. Four seasons occur in most of Europe away from the Mediterranean. The coastal lowlands of the Mediterranean Basin have more of a wet winter and dry summer season pattern, the winter season extends from October to February while the summer season is mainly noticeable in the dry months where precipitation can, in some years, become extremely scarce. A very small area in the continent features the desert climate, which exists in the south-eastern coasts of Spain making them the only places in Europe that have an arid climate.
Gulf Stream:
The climate of western Europe is strongly conditioned by the Gulf Stream, which keeps mild air (for the latitude) over Northwestern Europe in the winter months, especially in Ireland, the United Kingdom and coastal Norway. In terms of monthly sunshine averages, much of temperate Europe sees considerably less than the northern United States and eastern Asia. The climate of Western Europe is milder in comparison to other areas of the same latitude around the globe due to the influence of the Gulf Stream. Western Europe is at the same latitude as parts of Canada and Russia, so solar insolation is weak for much of the year. The Mediterranean Sea is not as deep as the large oceans, allowing it to act as a heat store that tempers winters along its coastlines, but because the Atlantic Ocean is strongly influenced by the Gulf Stream, this effect is smaller than the warming the Gulf Stream gives the Atlantic waters. The Gulf Stream is nicknamed "Europe's central heating", because it makes Europe's climate warmer and wetter than it would otherwise be.
Gulf Stream:
Compared to areas located in the higher middle latitudes, parts of western Europe have mild winters and higher annual temperatures (though summers are cooler than in locations at the same latitude). Berlin, Germany; Calgary, Canada; and Irkutsk, in the Asian part of Russia, lie at around the same latitude; January temperatures in Berlin average around 8 °C (15 °F) higher than those in Calgary (although Calgary sits about 1,200 m higher in altitude), and they are almost 22 °C (40 °F) higher than average temperatures in Irkutsk. This difference is even larger in the northern part of the continent. The January average in Brønnøysund, Norway, is almost 15 °C warmer than the January average in Nome, Alaska (both towns are situated upwind on the west coast of their continents at 65°N), and as much as 42 °C warmer than the January average in Yakutsk, which is actually slightly further south. Further south, the oceanic climate of Europe compares thermally to North America: at around 48°N, Rennes, France has about the same average annual temperature as Seattle, Washington, although Seattle has drier summers and much wetter winters. Within mainland Spain, the arid climate appears predominantly in Almería. The desert climate extends to the Andarax and Almanzora river valleys, Alicante and the Cabo de Gata-Níjar Natural Park, which are known for having a hot desert climate (Köppen: BWh), with a precipitation amount of 156 mm (6.1 in) per year, reportedly the driest place in Europe.
Temperature:
Most of Europe sees seasonal temperatures consistent with temperate climates in other parts of the world, though summers north of the Mediterranean Sea are cooler than in most other temperate climates (for example, summers in the temperate sector of the northern United States are much hotter than in Europe). Among the cities with a population over 100,000 people in Europe, the coldest winters are mostly found in Russia, with daily highs in winter averaging 0 °C (32 °F), while the mildest winters in the continent are in southern Portugal, southern Spain, in Sicily (Italy) and southern Greek islands such as Crete, Rhodes, Karpathos and Kasos.
Temperature:
Average annual temperatures vary from around −5 °C (23 °F) in Vorkuta, Russia up to almost 22 °C (72 °F) in Lindos, Greece. The hottest summers of the continent occur in cities and towns of the hinterland of southern Spain, southern Greece and southern Italy. July average highs in Spain are 36.9 °C (98.4 °F) in Cordoba and 36.0 °C (96.8 °F) in Seville. July and August highs in Greece average around 36.0 °C (96.8 °F) in Sparta. The highest extreme temperatures have been recorded in Syracuse, Italy, with 48.8 °C (119.8 °F), Athens and Elefsina, Greece, with 48.0 °C (118.4 °F) and inside the southern valleys of the Iberian Peninsula, with towns such as La Rambla, Cordoba (Spain) and Amareleja (Portugal) recording temperatures of 47.6 °C (117.7 °F) and 47.4 °C (117.3 °F) respectively.
Tornadoes:
The Netherlands has the highest average number of recorded tornadoes per area of any country in the world (more than 20, or 0.0005 per km2, annually), followed by the UK (around 33, or 0.0001 per km2, per year), but most are small and cause minor damage. In absolute number of events, ignoring area, the UK experiences more tornadoes than any other European country, excluding waterspouts. Europe uses its own tornado scale, known as the TORRO scale, which ranges from T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sighting in**
Sighting in:
In ranged weapons such as firearms and artillery pieces, sighting in or sight-in is a preparatory or corrective calibration of the sights with the goal that the projectile (e.g. bullet or shell) may be placed at a predictable impact position within the sight picture. The principle of sighting-in is to shift the line of aim until it intersects the parabolic projectile trajectory at a designated point of reference, so that when the gun is fired in the future (provided there is reliable precision) it will repeatably hit where it aims at the distance of that designated reference point.
Sighting in:
Because, when using a telescopic sight, the crosshair lines geometrically resemble the X- and Y-axes of the Cartesian coordinate system, with the reticle center analogous to the origin point (i.e. coordinate [0,0]), the designated sighting-in point is known as a zero, and the act of sighting-in is therefore also called zeroing. A gunsight that remains true to its designated zero after repeated use is said to "hold zero", while one that fails to do so is said to "lose zero".
Sighting in:
The iterative procedure involves firing a group of shots from a cool gun barrel, then determining the geometric center of the shot pattern, adjusting the sights to move the point of aim to that group center, and repeating the process until further groups consistently center on the point of aim.
Grouping:
Bullets discharged from a firearm immobilized in a device such as a Mann rest may not always land in precisely the same spot. Some of that variation may be caused by wind conditions or ammunition differences, but individual firearms may have differing abilities to place bullets consistently. Bullet impact positions at a measured distance from the firearm muzzle are evaluated as shot groupings or groups. Each group consists of a given number of shots with increasing numbers of shots providing greater statistical confidence. Each group is described by the minimum diameter circle perpendicular to the axis of bullet movement including the impact point of all bullets in that group. A firearm consistently placing bullets within a 1 inch (25 mm) diameter circle on a target 100 yards (91 m) from the muzzle might be described as capable of 1-inch groups at 100 yards. Groups may alternatively be described by the angle of dispersion. A one-inch group at 100 yards is approximately equivalent to one minute of angle, indicating that firearm would be expected to place bullets within a two-inch group at 200 yards, or within a three-inch group at 300 yards.
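The rule of thumb that a one-inch group at 100 yards is roughly one minute of angle can be checked with simple trigonometry; the short sketch below (purely illustrative, not part of the original article) computes the linear size subtended by one MOA at several ranges.

```java
public class MinuteOfAngle {
    // Linear size (in inches) subtended by one minute of angle at the given range in yards.
    static double inchesPerMoa(double rangeYards) {
        double rangeInches = rangeYards * 36;                 // 36 inches per yard
        return rangeInches * Math.tan(Math.toRadians(1.0 / 60.0));
    }

    public static void main(String[] args) {
        for (int yards : new int[]{100, 200, 300}) {
            System.out.printf("%d yd: %.3f in per MOA%n", yards, inchesPerMoa(yards));
        }
        // Prints roughly 1.047, 2.094 and 3.141 inches, close to the 1-2-3 inch rule of thumb.
    }
}
```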
Grouping:
Terminology may be confusing. Groups should not be confused with the patterns traditionally used to describe the positioning of a specified percentage of the multiple pellets from an individual shotgun shell.
Reasons for sighting in:
Firearms carried by individuals may be positioned differently from one shot to the next. Most firearms have sights to assist the shooter in positioning the firearm so bullets will strike the desired location. Precision machining used in manufacture of modern firearms and testing prior to distribution have improved the probability these sights will be correctly positioned; but various factors may cause bullet placement to be different from expected: Sights may have been loosened or moved from their intended positions since the last test firing.
Reasons for sighting in:
Optional telescopic sights may have replaced original iron sights.
The firearm may have been sighted in for a different target distance.
The shooter may be using different ammunition than used for previous testing.
The shooter may involuntarily move the firearm while pulling the trigger.
The shooter may hold the firearm in a way allowing unanticipated movement during recoil.
The shooter may have vision abnormalities producing an unanticipated sight picture.
Targets:
Sighting in a firearm is an important test of the ability of the firearm user to hit anticipated targets with available ammunition. Pictures or silhouettes of intended targets are less suitable for sighting in than high contrast shapes compatible with the type of sights on the firearm. Contrasting circles are commonly used as sighting in targets. Some targets include a faint grid for easier measurement of horizontal and vertical distance from point of aim. These circle targets are especially suitable for peep sights, aperture sights, dot reticles, and bead front sights; and are most useful when the apparent diameter of that sight feature matches the apparent diameter of the contrasting circle at the selected distance to target. Firearms with blade front sights and notch rear sights may reduce vertical dispersion by using a sight picture visually balancing the target's contrasting circle on a horizontal sight surface like the top of the blade or horizontal notched surface.
Procedure:
The diameter of the group for a single sight setting is irrelevant to the sighting in procedure so long as all bullet positions can be measured to determine average point of impact in comparison to point of aim. Larger diameter groups indicate reduced hit probability on smaller targets at that range and suggest groups with a larger number of shots may provide better estimates of required adjustments. Sighting in is most effective from a stable shooting position allowing the shooter to relax while the firearm is supported on a bench rest or on a sandbag or similar padding supported by a rock, log, or tree branch. Other stable shooting positions include sitting on the ground while leaning against a tree or structure and resting the firearm on an arm supported by the knees. The sights are examined prior to firing to be certain they are firmly fastened to the firearm and are not loose or moving between shots. Boresighting or firing single shots at a close range target may be required if shots do not hit the target at the desired distance.
Procedure:
After sights have been adjusted to reliably place bullets on target at the desired range, several shots are fired to form a group for measurement of averaged bullet placement. Each bullet position is measured horizontally and vertically from the point of aim, and sights are adjusted to compensate for the mean horizontal distance and mean vertical distance from the point of aim.
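A minimal sketch of that bookkeeping (our own illustration, with made-up shot offsets) averages the horizontal and vertical distances of a group from the point of aim and converts them into a correction in minutes of angle.

```java
public class GroupCenter {
    public static void main(String[] args) {
        // Horizontal and vertical offsets (inches) of each shot from the point of aim.
        double[][] shots = { {1.2, -0.8}, {0.9, -1.1}, {1.5, -0.6}, {1.0, -0.9} };
        double rangeYards = 100;

        double meanX = 0, meanY = 0;
        for (double[] s : shots) { meanX += s[0]; meanY += s[1]; }
        meanX /= shots.length;                       // mean horizontal offset of the group
        meanY /= shots.length;                       // mean vertical offset of the group

        // One MOA subtends about 1.047 inches at 100 yards, scaling linearly with range.
        double inchesPerMoa = 1.047 * rangeYards / 100.0;
        System.out.printf("Correction: %.2f MOA horizontal, %.2f MOA vertical (negative = left/down)%n",
                -meanX / inchesPerMoa, -meanY / inchesPerMoa);
    }
}
```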
Procedure:
After the sights have been adjusted, more shots may be fired from a cool barrel forming another group to verify that sight adjustment moved the average bullet placement onto the point of aim. Sighting in has been completed when the group is centered on the point of aim. Bullets may then be fired at targets at different distances to determine trajectory differences from point of aim at those distances. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Musical theatre**
Musical theatre:
Musical theatre is a form of theatrical performance that combines songs, spoken dialogue, acting and dance. The story and emotional content of a musical – humor, pathos, love, anger – are communicated through words, music, movement and technical aspects of the entertainment as an integrated whole. Although musical theatre overlaps with other theatrical forms like opera and dance, it may be distinguished by the equal importance given to the music as compared with the dialogue, movement and other elements. Since the early 20th century, musical theatre stage works have generally been called, simply, musicals.
Musical theatre:
Although music has been a part of dramatic presentations since ancient times, modern Western musical theatre emerged during the 19th century, with many structural elements established by the works of Gilbert and Sullivan in Britain and those of Harrigan and Hart in America. These were followed by Edwardian musical comedies, which emerged in Britain, and the musical theatre works of American creators like George M. Cohan at the turn of the 20th century. The Princess Theatre musicals (1915–1918) were artistic steps forward beyond the revues and other frothy entertainments of the early 20th century and led to such groundbreaking works as Show Boat (1927), Of Thee I Sing (1931) and Oklahoma! (1943). Some of the most famous musicals through the decades that followed include My Fair Lady (1956), The Fantasticks (1960), Hair (1967), A Chorus Line (1975), Les Misérables (1985), The Phantom of the Opera (1986), Rent (1996), Wicked (2003) and Hamilton (2015).
Musical theatre:
Musicals are performed around the world. They may be presented in large venues, such as big-budget Broadway or West End productions in New York City or London. Alternatively, musicals may be staged in smaller venues, such as off-Broadway, off-off-Broadway, regional theatre, fringe theatre, or community theatre productions, or on tour. Musicals are often presented by amateur and school groups in churches, schools and other performance spaces. In addition to the United States and Britain, there are vibrant musical theatre scenes in continental Europe, Asia, Australasia, Canada and Latin America.
Definitions and scope:
Book musicals Since the 20th century, the "book musical" has been defined as a musical play where songs and dances are fully integrated into a well-made story with serious dramatic goals and which is able to evoke genuine emotions other than laughter. The three main components of a book musical are its music, lyrics and book. The book or script of a musical refers to the story, character development and dramatic structure, including the spoken dialogue and stage directions, but it can also refer to the dialogue and lyrics together, which are sometimes referred to as the libretto (Italian for "little book"). The music and lyrics together form the score of a musical and include songs, incidental music and musical scenes, which are "theatrical sequence[s] set to music, often combining song with spoken dialogue." The interpretation of a musical is the responsibility of its creative team, which includes a director, a musical director, usually a choreographer and sometimes an orchestrator. A musical's production is also creatively characterized by technical aspects, such as set design, costumes, stage properties (props), lighting and sound. The creative team, designs and interpretations generally change from the original production to succeeding productions. Some production elements, however, may be retained from the original production, for example, Bob Fosse's choreography in Chicago.
Definitions and scope:
There is no fixed length for a musical. While it can range from a short one-act entertainment to several acts and several hours in length (or even a multi-evening presentation), most musicals range from one and a half to three hours. Musicals are usually presented in two acts, with one short intermission, and the first act is frequently longer than the second. The first act generally introduces nearly all of the characters and most of the music and often ends with the introduction of a dramatic conflict or plot complication while the second act may introduce a few new songs but usually contains reprises of important musical themes and resolves the conflict or complication. A book musical is usually built around four to six main theme tunes that are reprised later in the show, although it sometimes consists of a series of songs not directly musically related. Spoken dialogue is generally interspersed between musical numbers, although "sung dialogue" or recitative may be used, especially in so-called "sung-through" musicals such as Jesus Christ Superstar, Falsettos, Les Misérables, Evita and Hamilton. Several shorter musicals on Broadway and in the West End in the 21st century have been presented in one act.
Definitions and scope:
Moments of greatest dramatic intensity in a book musical are often performed in song. Proverbially, "when the emotion becomes too strong for speech, you sing; when it becomes too strong for song, you dance." In a book musical, a song is ideally crafted to suit the character (or characters) and their situation within the story; although there have been times in the history of the musical (e.g. from the 1890s to the 1920s) when this integration between music and story has been tenuous. As The New York Times critic Ben Brantley described the ideal of song in theatre when reviewing the 2008 revival of Gypsy: "There is no separation at all between song and character, which is what happens in those uncommon moments when musicals reach upward to achieve their ideal reasons to be." Typically, many fewer words are sung in a five-minute song than are spoken in a five-minute block of dialogue. Therefore, there is less time to develop drama in a musical than in a straight play of equivalent length, since a musical usually devotes more time to music than to dialogue. Within the compressed nature of a musical, the writers must develop the characters and the plot.
Definitions and scope:
The material presented in a musical may be original, or it may be adapted from novels (Wicked and Man of La Mancha), plays (Hello, Dolly! and Carousel), classic legends (Camelot), historical events (Evita and Hamilton) or films (The Producers and Billy Elliot). On the other hand, many successful musical theatre works have been adapted for musical films, such as West Side Story, My Fair Lady, The Sound of Music, Oliver! and Chicago.
Definitions and scope:
Comparisons with opera Musical theatre is closely related to the theatrical form of opera, but the two are usually distinguished by weighing a number of factors. First, musicals generally have a greater focus on spoken dialogue. Some musicals, however, are entirely accompanied and sung-through, while some operas, such as Die Zauberflöte, and most operettas, have some unaccompanied dialogue. Second, musicals usually include more dancing as an essential part of the storytelling, particularly by the principal performers as well as the chorus. Third, musicals often use various genres of popular music or at least popular singing and musical styles.Finally, musicals usually avoid certain operatic conventions. In particular, a musical is almost always performed in the language of its audience. Musicals produced on Broadway or in the West End, for instance, are invariably sung in English, even if they were originally written in another language. While an opera singer is primarily a singer and only secondarily an actor (and rarely needs to dance), a musical theatre performer is often an actor first but must also be a singer and dancer. Someone who is equally accomplished at all three is referred to as a "triple threat". Composers of music for musicals often consider the vocal demands of roles with musical theatre performers in mind. Today, large theatres that stage musicals generally use microphones and amplification of the actors' singing voices in a way that would generally be disapproved of in an operatic context.Some works, including those by George Gershwin, Leonard Bernstein and Stephen Sondheim, have been made into both musical theatre and operatic productions. Similarly, some older operettas or light operas (such as The Pirates of Penzance by Gilbert and Sullivan) have been produced in modern adaptations that treat them as musicals. For some works, production styles are almost as important as the work's musical or dramatic content in defining into which art form the piece falls. Sondheim said, "I really think that when something plays Broadway it's a musical, and when it plays in an opera house it's opera. That's it. It's the terrain, the countryside, the expectations of the audience that make it one thing or another." There remains an overlap in form between lighter operatic forms and more musically complex or ambitious musicals. In practice, it is often difficult to distinguish among the various kinds of musical theatre, including "musical play", "musical comedy", "operetta" and "light opera".Like opera, the singing in musical theatre is generally accompanied by an instrumental ensemble called a pit orchestra, located in a lowered area in front of the stage. While opera typically uses a conventional symphony orchestra, musicals are generally orchestrated for ensembles ranging from 27 players down to only a few players. Rock musicals usually employ a small group of mostly rock instruments, and some musicals may call for only a piano or two instruments. The music in musicals uses a range of "styles and influences including operetta, classical techniques, folk music, jazz [and] local or historical styles [that] are appropriate to the setting." Musicals may begin with an overture played by the orchestra that "weav[es] together excerpts of the score's famous melodies." 
Eastern traditions and other forms
There are various Eastern traditions of theatre that include music, such as Chinese opera, Taiwanese opera, Japanese Noh and Indian musical theatre, including Sanskrit drama, Indian classical dance, Parsi theatre and Yakshagana. India has, since the 20th century, produced numerous musical films, referred to as "Bollywood" musicals, and in Japan a series of 2.5D musicals based on popular anime and manga comics has developed in recent decades.
Definitions and scope:
Shorter or simplified "junior" versions of many musicals are available for schools and youth groups, and very short works created or adapted for performance by children are sometimes called minimusicals.
History:
Early antecedents The antecedents of musical theatre in Europe can be traced back to the theatre of ancient Greece, where music and dance were included in stage comedies and tragedies during the 5th century BCE. The music from the ancient forms is lost, however, and they had little influence on later development of musical theatre. In the 12th and 13th centuries, religious dramas taught the liturgy. Groups of actors would use outdoor Pageant wagons (stages on wheels) to tell each part of the story. Poetic forms sometimes alternated with the prose dialogues, and liturgical chants gave way to new melodies.The European Renaissance saw older forms evolve into two antecedents of musical theatre: commedia dell'arte, where raucous clowns improvised familiar stories, and later, opera buffa. In England, Elizabethan and Jacobean plays frequently included music, and short musical plays began to be included in an evenings' dramatic entertainments. Court masques developed during the Tudor period that involved music, dancing, singing and acting, often with expensive costumes and a complex stage design. These developed into sung plays that are recognizable as English operas, the first usually being thought of as The Siege of Rhodes (1656). In France, meanwhile, Molière turned several of his farcical comedies into musical entertainments with songs (music provided by Jean-Baptiste Lully) and dance in the late 17th century. These influenced a brief period of English opera by composers such as John Blow and Henry Purcell.From the 18th century, the most popular forms of musical theatre in Britain were ballad operas, like John Gay's The Beggar's Opera, that included lyrics written to the tunes of popular songs of the day (often spoofing opera), and later pantomime, which developed from commedia dell'arte, and comic opera with mostly romantic plot lines, like Michael Balfe's The Bohemian Girl (1845). Meanwhile, on the continent, singspiel, comédie en vaudeville, opéra comique, zarzuela and other forms of light musical entertainment were emerging. The Beggar's Opera was the first recorded long-running play of any kind, running for 62 successive performances in 1728. It would take almost a century afterwards before any play broke 100 performances, but the record soon reached 150 in the late 1820s. Other musical theatre forms developed in England by the 19th century, such as music hall, melodrama and burletta, which were popularized partly because most London theatres were licensed only as music halls and not allowed to present plays without music.
History:
Colonial America did not have a significant theatre presence until 1752, when London entrepreneur William Hallam sent a company of actors to the colonies managed by his brother Lewis. In New York in the summer of 1753, they performed ballad-operas, such as The Beggar's Opera, and ballad-farces. By the 1840s, P. T. Barnum was operating an entertainment complex in lower Manhattan. Other early musical theatre in America consisted of British forms, such as burletta and pantomime, but what a piece was called did not necessarily define what it was. The 1852 Broadway extravaganza The Magic Deer advertised itself as "A Serio Comico Tragico Operatical Historical Extravaganzical Burletical Tale of Enchantment." Theatre in New York moved from downtown gradually to midtown from around 1850 and did not arrive in the Times Square area until the 1920s and 1930s. New York runs lagged far behind those in London, but Laura Keene's "musical burletta" Seven Sisters (1860) shattered previous New York musical theatre record, with a run of 253 performances.
History:
1850s to 1880s Around 1850, the French composer Hervé was experimenting with a form of comic musical theatre he called opérette. The best known composers of operetta were Jacques Offenbach from the 1850s to the 1870s and Johann Strauss II in the 1870s and 1880s. Offenbach's fertile melodies, combined with his librettists' witty satire, formed a model for the musical theatre that followed. Adaptations of the French operettas (played in mostly bad, risqué translations), musical burlesques, music hall, pantomime and burletta dominated the London musical stage into the 1870s.In America, mid-19th century musical theatre entertainments included crude variety revue, which eventually developed into vaudeville, minstrel shows, which soon crossed the Atlantic to Britain, and Victorian burlesque, first popularized in the US by British troupes. A hugely successful musical that premiered in New York in 1866, The Black Crook, was an original musical theatre piece that conformed to many of the modern definitions of a musical, including dance and original music that helped to tell the story. The spectacular production, famous for its skimpy costumes, ran for a record-breaking 474 performances. The same year, The Black Domino/Between You, Me and the Post was the first show to call itself a "musical comedy." Comedians Edward Harrigan and Tony Hart produced and starred in musicals on Broadway between 1878 (The Mulligan Guard Picnic) and 1885. These musical comedies featured characters and situations taken from the everyday life of New York's lower classes and represented a significant step forward towards a more legitimate theatrical form. They starred high quality singers (Lillian Russell, Vivienne Segal and Fay Templeton) instead of the ladies of questionable repute who had starred in earlier musical forms.
History:
As transportation improved, poverty in London and New York diminished, and street lighting made for safer travel at night, the number of patrons for the growing number of theatres increased enormously. Plays ran longer, leading to better profits and improved production values, and men began to bring their families to the theatre. The first musical theatre piece to exceed 500 consecutive performances was the French operetta The Chimes of Normandy in 1878 (705 performances). English comic opera adopted many of the successful ideas of European operetta, none more successfully than the series of more than a dozen long-running Gilbert and Sullivan comic operas, including H.M.S. Pinafore (1878) and The Mikado (1885). These were sensations on both sides of the Atlantic and in Australia and helped to raise the standard for what was considered a successful show. These shows were designed for family audiences, a marked contrast from the risqué burlesques, bawdy music hall shows and French operettas that sometimes drew a crowd seeking less wholesome entertainment. Only a few 19th-century musical pieces exceeded the run of The Mikado, such as Dorothy, which opened in 1886 and set a new record with a run of 931 performances. Gilbert and Sullivan's influence on later musical theatre was profound, creating examples of how to "integrate" musicals so that the lyrics and dialogue advanced a coherent story. Their works were admired and copied by early authors and composers of musicals in Britain and America.
History:
1890s to the new century A Trip to Chinatown (1891) was Broadway's long-run champion (until Irene in 1919), running for 657 performances, but New York runs continued to be relatively short, with a few exceptions, compared with London runs, until the 1920s. Gilbert and Sullivan were widely pirated and also were imitated in New York by productions such as Reginald De Koven's Robin Hood (1891) and John Philip Sousa's El Capitan (1896). A Trip to Coontown (1898) was the first musical comedy entirely produced and performed by African Americans on Broadway (largely inspired by the routines of the minstrel shows), followed by ragtime-tinged shows. Hundreds of musical comedies were staged on Broadway in the 1890s and early 20th century, composed of songs written in New York's Tin Pan Alley, including those by George M. Cohan, who worked to create an American style distinct from the Gilbert and Sullivan works. The most successful New York shows were often followed by extensive national tours.Meanwhile, musicals took over the London stage in the Gay Nineties, led by producer George Edwardes, who perceived that audiences wanted a new alternative to the Savoy-style comic operas and their intellectual, political, absurdist satire. He experimented with a modern-dress, family-friendly musical theatre style, with breezy, popular songs, snappy, romantic banter, and stylish spectacle at the Gaiety and his other theatres. These drew on the traditions of comic opera and used elements of burlesque and of the Harrigan and Hart pieces. He replaced the bawdy women of burlesque with his "respectable" corps of Gaiety Girls to complete the musical and visual fun. The success of the first of these, In Town (1892) and A Gaiety Girl (1893) set the style for the next three decades. The plots were generally light, romantic "poor maiden loves aristocrat and wins him against all odds" shows, with music by Ivan Caryll, Sidney Jones and Lionel Monckton. These shows were immediately widely copied in America, and Edwardian musical comedy swept away the earlier musical forms of comic opera and operetta. The Geisha (1896) was one of the most successful in the 1890s, running for more than two years and achieving great international success.
History:
The Belle of New York (1898) became the first American musical to run for over a year in London. The British musical comedy Florodora (1899) was a popular success on both sides of the Atlantic, as was A Chinese Honeymoon (1901), which ran for a record-setting 1,074 performances in London and 376 in New York. After the turn of the 20th century, Seymour Hicks joined forces with Edwardes and American producer Charles Frohman to create another decade of popular shows. Other enduring Edwardian musical comedy hits included The Arcadians (1909) and The Quaker Girl (1910).
History:
Early 20th century
Virtually eliminated from the English-speaking stage by competition from the ubiquitous Edwardian musical comedies, operettas returned to London and Broadway in 1907 with The Merry Widow, and adaptations of continental operettas became direct competitors with musicals. Franz Lehár and Oscar Straus composed new operettas that were popular in English until World War I. In America, Victor Herbert produced a string of enduring operettas including The Fortune Teller (1898), Babes in Toyland (1903), Mlle. Modiste (1905), The Red Mill (1906) and Naughty Marietta (1910).
History:
In the 1910s, the team of P. G. Wodehouse, Guy Bolton and Jerome Kern, following in the footsteps of Gilbert and Sullivan, created the "Princess Theatre shows" and paved the way for Kern's later work by showing that a musical could combine light, popular entertainment with continuity between its story and songs. Historian Gerald Bordman wrote: These shows built and polished the mold from which almost all later major musical comedies evolved. ... The characters and situations were, within the limitations of musical comedy license, believable and the humor came from the situations or the nature of the characters. Kern's exquisitely flowing melodies were employed to further the action or develop characterization. ... [Edwardian] musical comedy was often guilty of inserting songs in a hit-or-miss fashion. The Princess Theatre musicals brought about a change in approach. P. G. Wodehouse, the most observant, literate and witty lyricist of his day, and the team of Bolton, Wodehouse and Kern had an influence felt to this day.
History:
The theatre-going public needed escapist entertainment during the dark times of World War I, and they flocked to the theatre. The 1919 hit musical Irene ran for 670 performances, a Broadway record that held until 1938. The British theatre public supported far longer runs like that of The Maid of the Mountains (1,352 performances) and especially Chu Chin Chow. Its run of 2,238 performances was more than twice as long as any previous musical, setting a record that stood for nearly forty years. Even a revival of The Beggar's Opera held the stage for 1,463 performances. Revues like The Bing Boys Are Here in Britain, and those of Florenz Ziegfeld and his imitators in America, were also extraordinarily popular.
History:
The musicals of the Roaring Twenties, borrowing from vaudeville, music hall and other light entertainments, tended to emphasize big dance routines and popular songs at the expense of plot. Typical of the decade were lighthearted productions like Sally; Lady, Be Good; No, No, Nanette; Oh, Kay!; and Funny Face. Despite forgettable stories, these musicals featured stars such as Marilyn Miller and Fred Astaire and produced dozens of enduring popular songs by Kern, George and Ira Gershwin, Irving Berlin, Cole Porter and Rodgers and Hart. Popular music was dominated by musical theatre standards, such as "Fascinating Rhythm", "Tea for Two" and "Someone to Watch Over Me". Many shows were revues, series of sketches and songs with little or no connection between them. The best-known of these were the annual Ziegfeld Follies, spectacular song-and-dance revues on Broadway featuring extravagant sets, elaborate costumes and beautiful chorus girls. These spectacles also raised production values, and mounting a musical generally became more expensive. Shuffle Along (1921), an all-African American show, was a hit on Broadway. A new generation of composers of operettas also emerged in the 1920s, such as Rudolf Friml and Sigmund Romberg, to create a series of popular Broadway hits. In London, writer-stars such as Ivor Novello and Noël Coward became popular, but the primacy of British musical theatre from the 19th century through 1920 was gradually replaced by American innovation, especially after World War I, as Kern and other Tin Pan Alley composers began to bring new musical styles such as ragtime and jazz to the theatres, and the Shubert Brothers took control of the Broadway theatres. Musical theatre writer Andrew Lamb notes, "The operatic and theatrical styles of nineteenth-century social structures were replaced by a musical style more aptly suited to twentieth-century society and its vernacular idiom. It was from America that the more direct style emerged, and in America that it was able to flourish in a developing society less hidebound by nineteenth-century tradition." In France, comédie musicale was written in the early decades of the century for such stars as Yvonne Printemps.
History:
Show Boat and the Great Depression Progressing far beyond the comparatively frivolous musicals and sentimental operettas of the decade, Broadway's Show Boat (1927), represented an even more complete integration of book and score than the Princess Theatre musicals, with dramatic themes told through the music, dialogue, setting and movement. This was accomplished by combining the lyricism of Kern's music with the skillful libretto of Oscar Hammerstein II. One historian wrote, "Here we come to a completely new genre – the musical play as distinguished from musical comedy. Now ... everything else was subservient to that play. Now ... came complete integration of song, humor and production numbers into a single and inextricable artistic entity." As the Great Depression set in during the post-Broadway national tour of Show Boat, the public turned back to mostly light, escapist song-and-dance entertainment. Audiences on both sides of the Atlantic had little money to spend on entertainment, and only a few stage shows anywhere exceeded a run of 500 performances during the decade. The revue The Band Wagon (1931) starred dancing partners Fred Astaire and his sister Adele, while Porter's Anything Goes (1934) confirmed Ethel Merman's position as the First Lady of musical theatre, a title she maintained for many years. Coward and Novello continued to deliver old fashioned, sentimental musicals, such as The Dancing Years, while Rodgers and Hart returned from Hollywood to create a series of successful Broadway shows, including On Your Toes (1936, with Ray Bolger, the first Broadway musical to make dramatic use of classical dance), Babes in Arms (1937) and The Boys from Syracuse (1938). Porter added Du Barry Was a Lady (1939). The longest-running piece of musical theatre of the 1930s in the US was Hellzapoppin (1938), a revue with audience participation, which played for 1,404 performances, setting a new Broadway record. In Britain, Me and My Girl ran for 1,646 performances.Still, a few creative teams began to build on Show Boat's innovations. Of Thee I Sing (1931), a political satire by the Gershwins, was the first musical awarded the Pulitzer Prize. As Thousands Cheer (1933), a revue by Irving Berlin and Moss Hart in which each song or sketch was based on a newspaper headline, marked the first Broadway show in which an African-American, Ethel Waters, starred alongside white actors. Waters' numbers included "Supper Time", a woman's lament for her husband who has been lynched. The Gershwins' Porgy and Bess (1935) featured an all African-American cast and blended operatic, folk and jazz idioms. The Cradle Will Rock (1937), directed by Orson Welles, was a highly political pro-union piece that, despite the controversy surrounding it, ran for 108 performances. Rodgers and Hart's I'd Rather Be Right (1937) was a political satire with George M. Cohan as President Franklin D. Roosevelt, and Kurt Weill's Knickerbocker Holiday depicted New York City's early history while good-naturedly satirizing Roosevelt's good intentions.
History:
The motion picture mounted a challenge to the stage. Silent films had presented only limited competition, but by the end of the 1920s, films like The Jazz Singer could be presented with synchronized sound. "Talkie" films at low prices effectively killed off vaudeville by the early 1930s. Despite the economic woes of the 1930s and the competition from film, the musical survived. In fact, it continued to evolve thematically beyond the gags and showgirls musicals of the Gay Nineties and Roaring Twenties and the sentimental romance of operetta, adding technical expertise and the fast-paced staging and naturalistic dialogue style led by director George Abbott.
History:
The Golden Age (1940s to 1960s)
1940s
The 1940s would begin with more hits from Porter, Irving Berlin, Rodgers and Hart, Weill and Gershwin, some with runs over 500 performances as the economy rebounded, but artistic change was in the air.
History:
Rodgers and Hammerstein's Oklahoma! (1943) completed the revolution begun by Show Boat, by tightly integrating all the aspects of musical theatre, with a cohesive plot, songs that furthered the action of the story, and featured dream ballets and other dances that advanced the plot and developed the characters, rather than using dance as an excuse to parade scantily clad women across the stage. Rodgers and Hammerstein hired ballet choreographer Agnes de Mille, who used everyday motions to help the characters express their ideas. It defied musical conventions by raising its first act curtain not on a bevy of chorus girls, but rather on a woman churning butter, with an off-stage voice singing the opening lines of Oh, What a Beautiful Mornin' unaccompanied. It drew rave reviews, set off a box-office frenzy and received a Pulitzer Prize. Brooks Atkinson wrote in The New York Times that the show's opening number changed the history of musical theater: "After a verse like that, sung to a buoyant melody, the banalities of the old musical stage became intolerable." It was the first "blockbuster" Broadway show, running a total of 2,212 performances, and was made into a hit film. It remains one of the most frequently produced of the team's projects. William A. Everett and Paul R. Laird wrote that this was a "show, that, like Show Boat, became a milestone, so that later historians writing about important moments in twentieth-century theatre would begin to identify eras according to their relationship to Oklahoma!".
History:
"After Oklahoma!, Rodgers and Hammerstein were the most important contributors to the musical-play form... The examples they set in creating vital plays, often rich with social thought, provided the necessary encouragement for other gifted writers to create musical plays of their own". The two collaborators created an extraordinary collection of some of musical theatre's best loved and most enduring classics, including Carousel (1945), South Pacific (1949), The King and I (1951) and The Sound of Music (1959). Some of these musicals treat more serious subject matter than most earlier shows: the villain in Oklahoma! is a suspected murderer and psychopath; Carousel deals with spousal abuse, thievery, suicide and the afterlife; South Pacific explores miscegenation even more thoroughly than Show Boat; the hero of The King and I dies onstage; and the backdrop of The Sound of Music is the annexation of Austria by Nazi Germany in 1938.
History:
The show's creativity stimulated Rodgers and Hammerstein's contemporaries and ushered in the "Golden Age" of American musical theatre. Americana was displayed on Broadway during the "Golden Age", as the wartime cycle of shows began to arrive. An example of this is On the Town (1944), written by Betty Comden and Adolph Green, composed by Leonard Bernstein and choreographed by Jerome Robbins. The story is set during wartime and concerns three sailors who are on a 24-hour shore leave in New York City, during which each falls in love. The show also gives the impression of a country with an uncertain future, as the sailors and their women also have. Irving Berlin used sharpshooter Annie Oakley's career as a basis for his Annie Get Your Gun (1946, 1,147 performances); Burton Lane, E. Y. Harburg and Fred Saidy combined political satire with Irish whimsy for their fantasy Finian's Rainbow (1947, 725 performances); and Cole Porter found inspiration in William Shakespeare's The Taming of the Shrew for Kiss Me, Kate (1948, 1,077 performances). The American musicals overwhelmed the old-fashioned British Coward/Novello-style shows, one of the last big successes of which was Novello's Perchance to Dream (1945, 1,021 performances). The formula for the Golden Age musicals reflected one or more of four widely held perceptions of the "American dream": That stability and worth derives from a love relationship sanctioned and restricted by Protestant ideals of marriage; that a married couple should make a moral home with children away from the city in a suburb or small town; that the woman's function was as homemaker and mother; and that Americans incorporate an independent and pioneering spirit or that their success is self-made.
History:
1950s
The 1950s were crucial to the development of the American musical. Damon Runyon's eclectic characters were at the core of Frank Loesser's and Abe Burrows' Guys and Dolls (1950, 1,200 performances); and the Gold Rush was the setting for Alan Jay Lerner and Frederick Loewe's Paint Your Wagon (1951). The relatively brief seven-month run of that show didn't discourage Lerner and Loewe from collaborating again, this time on My Fair Lady (1956), an adaptation of George Bernard Shaw's Pygmalion starring Rex Harrison and Julie Andrews, which at 2,717 performances held the long-run record for many years. Popular Hollywood films were made of all of these musicals. Two hits by British creators in this decade were The Boy Friend (1954), which ran for 2,078 performances in London and marked Andrews' American debut, and Salad Days (1954), which broke the British long-run record with a run of 2,283 performances. Another record was set by The Threepenny Opera, which ran for 2,707 performances, becoming the longest-running off-Broadway musical until The Fantasticks. The production also broke ground by showing that musicals could be profitable off-Broadway in a small-scale, small orchestra format. This was confirmed in 1959 when a revival of Jerome Kern and P. G. Wodehouse's Leave It to Jane ran for more than two years. The 1959–1960 off-Broadway season included a dozen musicals and revues including Little Mary Sunshine, The Fantasticks and Ernest in Love, a musical adaptation of Oscar Wilde's 1895 hit The Importance of Being Earnest.
History:
West Side Story (1957) transported Romeo and Juliet to modern day New York City and converted the feuding Montague and Capulet families into opposing ethnic gangs, the Jets and the Sharks. The book was adapted by Arthur Laurents, with music by Leonard Bernstein and lyrics by newcomer Stephen Sondheim. It was embraced by the critics, but failed to be a popular choice for the "blue-haired matinee ladies", who preferred the small town River City, Iowa of Meredith Willson's The Music Man (1957) to the alleys of Manhattan's Upper West Side. Apparently Tony Award voters were of a similar mind, since they favored the former over the latter. West Side Story had a respectable run of 732 performances (1,040 in the West End), while The Music Man ran nearly twice as long, with 1,375 performances. However, the 1961 film of West Side Story was extremely successful. Laurents and Sondheim teamed up again for Gypsy (1959, 702 performances), with Jule Styne providing the music for a backstage story about the most driven stage mother of all-time, stripper Gypsy Rose Lee's mother Rose. The original production ran for 702 performances, and was given four subsequent revivals, with Angela Lansbury, Tyne Daly, Bernadette Peters and Patti LuPone later tackling the role made famous by Ethel Merman.
History:
Although directors and choreographers have had a major influence on musical theatre style since at least the 19th century, George Abbott and his collaborators and successors took a central role in integrating movement and dance fully into musical theatre productions in the Golden Age. Abbott introduced ballet as a story-telling device in On Your Toes in 1936, which was followed by Agnes de Mille's ballet and choreography in Oklahoma!. After Abbott collaborated with Jerome Robbins in On the Town and other shows, Robbins combined the roles of director and choreographer, emphasizing the story-telling power of dance in West Side Story, A Funny Thing Happened on the Way to the Forum (1962) and Fiddler on the Roof (1964). Bob Fosse choreographed for Abbott in The Pajama Game (1954) and Damn Yankees (1955), injecting playful sexuality into those hits. He was later the director-choreographer for Sweet Charity (1966), Pippin (1972) and Chicago (1975). Other notable director-choreographers have included Gower Champion, Tommy Tune, Michael Bennett, Gillian Lynne and Susan Stroman. Prominent directors have included Hal Prince, who also got his start with Abbott, and Trevor Nunn. During the Golden Age, automotive companies and other large corporations began to hire Broadway talent to write corporate musicals, private shows only seen by their employees or customers. The 1950s ended with Rodgers and Hammerstein's last hit, The Sound of Music, which also became another hit for Mary Martin. It ran for 1,443 performances and shared the Tony Award for Best Musical. Together with its extremely successful 1965 film version, it has become one of the most popular musicals in history.
History:
1960s:
In 1960, The Fantasticks was first produced off-Broadway. This intimate allegorical show would quietly run for over 40 years at the Sullivan Street Theatre in Greenwich Village, becoming by far the longest-running musical in history. Its authors produced other innovative works in the 1960s, such as Celebration and I Do! I Do!, the first two-character Broadway musical. The 1960s would see a number of blockbusters, like Fiddler on the Roof (1964; 3,242 performances), Hello, Dolly! (1964; 2,844 performances), Funny Girl (1964; 1,348 performances) and Man of La Mancha (1965; 2,328 performances), and some more risqué pieces like Cabaret, before ending with the emergence of the rock musical. In Britain, Oliver! (1960) ran for 2,618 performances, but the long-run champion of the decade was The Black and White Minstrel Show (1962), which played for 4,344 performances. Two men had considerable impact on musical theatre history beginning in this decade: Stephen Sondheim and Jerry Herman.
History:
The first project for which Sondheim wrote both music and lyrics was A Funny Thing Happened on the Way to the Forum (1962, 964 performances), with a book based on the works of Plautus by Burt Shevelove and Larry Gelbart, starring Zero Mostel. Sondheim moved the musical beyond its concentration on the romantic plots typical of earlier eras; his work tended to be darker, exploring the grittier sides of life both present and past. Other early Sondheim works include Anyone Can Whistle (1964, which ran only nine performances, despite having stars Lee Remick and Angela Lansbury), and the successful Company (1970), Follies (1971) and A Little Night Music (1973). Later, Sondheim found inspiration in unlikely sources: the opening of Japan to Western trade for Pacific Overtures (1976), a legendary murderous barber seeking revenge in the Industrial Age of London for Sweeney Todd (1979), the paintings of Georges Seurat for Sunday in the Park with George (1984), fairy tales for Into the Woods (1987), and a collection of presidential assassins in Assassins (1990).
History:
While some critics have argued that some of Sondheim's musicals lack commercial appeal, others have praised their lyrical sophistication and musical complexity, as well as the interplay of lyrics and music in his shows. Some of Sondheim's notable innovations include a show presented in reverse (Merrily We Roll Along) and the above-mentioned Anyone Can Whistle, in which the first act ends with the cast informing the audience that they are mad.
History:
Jerry Herman played a significant role in American musical theatre, beginning with his first Broadway production, Milk and Honey (1961, 563 performances), about the founding of the state of Israel, and continuing with the blockbuster hits Hello, Dolly! (1964, 2,844 performances), Mame (1966, 1,508 performances), and La Cage aux Folles (1983, 1,761 performances). Even his less successful shows like Dear World (1969) and Mack and Mabel (1974) have had memorable scores (Mack and Mabel was later reworked into a London hit). Writing both words and music, many of Herman's show tunes have become popular standards, including "Hello, Dolly!", "We Need a Little Christmas", "I Am What I Am", "Mame", "The Best of Times", "Before the Parade Passes By", "Put On Your Sunday Clothes", "It Only Takes a Moment", "Bosom Buddies" and "I Won't Send Roses", recorded by such artists as Louis Armstrong, Eydie Gormé, Barbra Streisand, Petula Clark and Bernadette Peters. Herman's songbook has been the subject of two popular musical revues, Jerry's Girls (Broadway, 1985) and Showtune (off-Broadway, 2003).
History:
The musical started to diverge from the relatively narrow confines of the 1950s. Rock music would be used in several Broadway musicals, beginning with Hair, which featured not only rock music but also nudity and controversial opinions about the Vietnam War, race relations and other social issues.
History:
Social themes:
After Show Boat and Porgy and Bess, and as the struggle in America and elsewhere for minorities' civil rights progressed, Hammerstein, Harold Arlen, Yip Harburg and others were emboldened to write more musicals and operas that aimed to normalize societal toleration of minorities and urged racial harmony. Early Golden Age works that focused on racial tolerance included Finian's Rainbow and South Pacific. Towards the end of the Golden Age, several shows tackled Jewish subjects and issues, such as Fiddler on the Roof, Milk and Honey, Blitz! and later Rags. The original concept that became West Side Story was set in the Lower East Side during Easter-Passover celebrations; the rival gangs were to be Jewish and Italian Catholic. The creative team later decided that the Polish (white) vs. Puerto Rican conflict was fresher. Tolerance as an important theme in musicals has continued in recent decades. The final expression of West Side Story left a message of racial tolerance. By the end of the 1960s, musicals became racially integrated, with black and white cast members even covering each other's roles, as they did in Hair. Homosexuality has also been explored in musicals, starting with Hair, and even more overtly in La Cage aux Folles, Falsettos, Rent, Hedwig and the Angry Inch and other shows in recent decades. Parade is a sensitive exploration of both anti-Semitism and historical American racism, and Ragtime similarly explores the experience of immigrants and minorities in America.
History:
1970s to present:
1970s:
After the success of Hair, rock musicals flourished in the 1970s, with Jesus Christ Superstar, Godspell, The Rocky Horror Show, Evita and Two Gentlemen of Verona. Some of those began as "concept albums" which were then adapted to the stage, most notably Jesus Christ Superstar and Evita. Others had no dialogue or were otherwise reminiscent of opera, with dramatic, emotional themes; these sometimes started as concept albums and were referred to as rock operas. Shows like Raisin, Dreamgirls, Purlie and The Wiz brought a significant African-American influence to Broadway. More varied musical genres and styles were incorporated into musicals both on and especially off-Broadway. At the same time, Stephen Sondheim found success with some of his musicals, as mentioned above.
History:
In 1975, the dance musical A Chorus Line emerged from recorded group therapy-style sessions Michael Bennett conducted with "gypsies" – those who sing and dance in support of the leading players – from the Broadway community. From hundreds of hours of tapes, James Kirkwood Jr. and Nick Dante fashioned a book about an audition for a musical, incorporating many real-life stories from the sessions; some who attended the sessions eventually played variations of themselves or each other in the show. With music by Marvin Hamlisch and lyrics by Edward Kleban, A Chorus Line first opened at Joseph Papp's Public Theater in lower Manhattan. What initially had been planned as a limited engagement eventually moved to the Shubert Theatre on Broadway for a run of 6,137 performances, becoming the longest-running production in Broadway history up to that time. The show swept the Tony Awards and won the Pulitzer Prize, and its hit song, "What I Did for Love", became a standard. Broadway audiences welcomed musicals that varied from the golden age style and substance. John Kander and Fred Ebb explored the rise of Nazism in Germany in Cabaret, and murder and the media in Prohibition-era Chicago, which relied on old vaudeville techniques. Pippin, by Stephen Schwartz, was set in the days of Charlemagne. Federico Fellini's autobiographical film 8½ became Maury Yeston's Nine. At the end of the decade, Evita and Sweeney Todd were precursors of the darker, big budget musicals of the 1980s that depended on dramatic stories, sweeping scores and spectacular effects. At the same time, old-fashioned values were still embraced in such hits as Annie, 42nd Street, My One and Only, and popular revivals of No, No, Nanette and Irene. Although many film versions of musicals were made in the 1970s, few were critical or box office successes, with the notable exceptions of Fiddler on the Roof, Cabaret and Grease.
History:
1980s:
The 1980s saw the influence of European "megamusicals" on Broadway, in the West End and elsewhere. These typically feature a pop-influenced score, large casts and spectacular sets and special effects – a falling chandelier (in The Phantom of the Opera); a helicopter landing on stage (in Miss Saigon) – and big budgets. Some were based on novels or other works of literature. The British team of composer Andrew Lloyd Webber and producer Cameron Mackintosh started the megamusical phenomenon with their 1981 musical Cats, based on the poems of T. S. Eliot, which overtook A Chorus Line to become the longest-running Broadway show. Lloyd Webber followed up with Starlight Express (1984), performed on roller skates; The Phantom of the Opera (1986; also with Mackintosh), derived from the novel of the same name; and Sunset Boulevard (1993), from the 1950 film of the same name. Phantom would surpass Cats to become the longest-running show in Broadway history, a record it still holds. The French team of Claude-Michel Schönberg and Alain Boublil wrote Les Misérables, based on the novel of the same name, whose 1985 London production was produced by Mackintosh and became, and still is, the longest-running musical in West End history. The team produced another hit with Miss Saigon (1989), which was inspired by the Puccini opera Madama Butterfly. The megamusicals' huge budgets redefined expectations for financial success on Broadway and in the West End. In earlier years, it was possible for a show to be considered a hit after a run of several hundred performances, but with multimillion-dollar production costs, a show must run for years simply to turn a profit. Megamusicals were also reproduced in productions around the world, multiplying their profit potential while expanding the global audience for musical theatre.
History:
1990s:
In the 1990s, a new generation of theatrical composers emerged, including Jason Robert Brown and Michael John LaChiusa, who began with productions off-Broadway. The most conspicuous success of these artists was Jonathan Larson's show Rent (1996), a rock musical (based on the opera La bohème) about a struggling community of artists in Manhattan. While the cost of tickets to Broadway and West End musicals was escalating beyond the budget of many theatregoers, Rent was marketed to increase the popularity of musicals among a younger audience. It featured a young cast and a heavily rock-influenced score; the musical became a hit. Its young fans, many of them students, calling themselves RENTheads, camped out at the Nederlander Theatre in hopes of winning the lottery for $20 front row tickets, and some saw the show dozens of times. Other shows on Broadway followed Rent's lead by offering heavily discounted day-of-performance or standing-room tickets, although often the discounts are offered only to students. The 1990s also saw the influence of large corporations on the production of musicals. The most important has been Disney Theatrical Productions, which began adapting some of Disney's animated film musicals for the stage, starting with Beauty and the Beast (1994), The Lion King (1997) and Aida (2000), the latter two with music by Elton John. The Lion King is the highest-grossing musical in Broadway history. The Who's Tommy (1993), a theatrical adaptation of the rock opera Tommy, achieved a healthy run of 899 performances but was criticized for sanitizing the story and "musical theatre-izing" the rock music. Despite the growing number of large-scale musicals in the 1980s and 1990s, a number of lower-budget, smaller-scale musicals managed to find critical and financial success, such as Falsettoland, Little Shop of Horrors, Bat Boy: The Musical and Blood Brothers, which ran for 10,013 performances. The topics of these pieces vary widely, and the music ranges from rock to pop, but they often are produced off-Broadway, or for smaller London theatres, and some of these stagings have been regarded as imaginative and innovative.
History:
2000s–present:
Trends:
In the new century, familiarity has been embraced by producers and investors anxious to guarantee that they recoup their considerable investments. Some took (usually modest-budget) chances on new and creative material, such as Urinetown (2001), Avenue Q (2003), The Light in the Piazza (2005), Spring Awakening (2006), In the Heights (2008), Next to Normal (2009), American Idiot (2010) and The Book of Mormon (2011). Hamilton (2015) transformed "under-dramatized American history" into an unusual hip-hop inflected hit. In 2011, Sondheim argued that of all forms of "contemporary pop music", rap was "the closest to traditional musical theatre" and was "one pathway to the future." However, most major-market 21st-century productions have taken a safe route, with revivals of familiar fare, such as Fiddler on the Roof, A Chorus Line, South Pacific, Gypsy, Hair, West Side Story and Grease, or with adaptations of other proven material, such as literature (The Scarlet Pimpernel, Wicked and Fun Home), hoping that the shows would have a built-in audience as a result. This trend is especially persistent with film adaptations, including The Producers, Spamalot, Hairspray, Legally Blonde, The Color Purple, Xanadu, Billy Elliot, Shrek, Waitress and Groundhog Day. Some critics have argued that the reuse of film plots, especially those from Disney (such as Mary Poppins and The Little Mermaid), equates the Broadway and West End musical to a tourist attraction, rather than a creative outlet.
History:
Today, it is less likely that a sole producer, such as David Merrick or Cameron Mackintosh, backs a production. Corporate sponsors dominate Broadway, and often alliances are formed to stage musicals, which require an investment of $10 million or more. In 2002, the credits for Thoroughly Modern Millie listed ten producers, and among those names were entities composed of several individuals. Typically, off-Broadway and regional theatres tend to produce smaller and therefore less expensive musicals, and development of new musicals has increasingly taken place outside of New York and London or in smaller venues. For example, Spring Awakening, Fun Home and Hamilton were developed off-Broadway before being launched on Broadway.
History:
Several musicals returned to the spectacle format that was so successful in the 1980s, recalling extravaganzas that have been presented at times throughout theatre history, since the ancient Romans staged mock sea battles. Examples include the musical adaptations of Lord of the Rings (2007), Gone with the Wind (2008) and Spider-Man: Turn Off the Dark (2011). These musicals involved songwriters with little theatrical experience, and the expensive productions generally lost money. Conversely, The Drowsy Chaperone, Avenue Q, The 25th Annual Putnam County Spelling Bee, Xanadu and Fun Home, among others, have been presented in smaller-scale productions, mostly uninterrupted by an intermission, with short running times, and enjoyed financial success. In 2013, Time magazine reported that a trend off-Broadway has been "immersive" theatre, citing shows such as Natasha, Pierre & The Great Comet of 1812 (2012) and Here Lies Love (2013) in which the staging takes place around and within the audience. The shows set a joint record, each receiving 11 nominations for Lucille Lortel Awards, and feature contemporary scores. In 2013, Cyndi Lauper was the "first female composer to win the [Tony for] Best Score without a male collaborator" for writing the music and lyrics for Kinky Boots. In 2015, for the first time, an all-female writing team, Lisa Kron and Jeanine Tesori, won the Tony Award for Best Original Score (and Best Book for Kron) for Fun Home, although work by male songwriters continues to be produced more often.
History:
Jukebox musicals:
Another trend has been to create a minimal plot to fit a collection of songs that have already been hits. Following the earlier success of Buddy – The Buddy Holly Story, these have included Movin' Out (2002, based on the tunes of Billy Joel), Jersey Boys (2006, The Four Seasons), Rock of Ages (2009, featuring classic rock of the 1980s), Thriller – Live (2009, Michael Jackson), and many others. This style is often referred to as the "jukebox musical". Similar but more plot-driven musicals have been built around the canon of a particular pop group including Mamma Mia! (1999, based on the songs of ABBA), Our House (2002, based on the songs of Madness) and We Will Rock You (2002, based on the songs of Queen).
History:
Film and TV musicals:
Live-action film musicals were nearly dead in the 1980s and early 1990s, with the exceptions of Victor/Victoria, Little Shop of Horrors and the 1996 film of Evita. In the new century, Baz Luhrmann began a revival of the film musical with Moulin Rouge! (2001). This was followed by Chicago (2002); Phantom of the Opera (2004); Rent (2005); Dreamgirls (2006); Hairspray, Enchanted and Sweeney Todd (all in 2007); Mamma Mia! (2008); Nine (2009); Les Misérables and Pitch Perfect (both in 2012), Into the Woods and The Last Five Years (both in 2014), La La Land (2016), The Greatest Showman (2017), A Star Is Born and Mary Poppins Returns (both 2018), Rocketman (2019) and In the Heights and Steven Spielberg's version of West Side Story (both in 2021), among others. Dr. Seuss's How the Grinch Stole Christmas! (2000) and The Cat in the Hat (2003) turned children's books into live-action film musicals. After the immense success of Disney and other houses with animated film musicals beginning with The Little Mermaid in 1989 and running throughout the 1990s (including some more adult-themed films, like South Park: Bigger, Longer & Uncut (1999)), fewer animated film musicals were released in the first decade of the 21st century. The genre made a comeback beginning in 2010 with Tangled (2010), Rio (2011) and Frozen (2013). In Asia, India continues to produce numerous "Bollywood" film musicals, and Japan produces "Anime" and "Manga" film musicals.
History:
Made-for-TV musical films were popular in the 1990s, such as Gypsy (1993), Cinderella (1997) and Annie (1999). Several made-for-TV musicals in the first decade of the 21st century were adaptations of stage versions, such as South Pacific (2001), The Music Man (2003) and Once Upon a Mattress (2005), and a televised version of the stage musical Legally Blonde in 2007. Additionally, several musicals were filmed on stage and broadcast on Public Television, for example Contact in 2002 and Kiss Me, Kate and Oklahoma! in 2003. The made-for-TV musical High School Musical (2006), and its several sequels, enjoyed particular success and were adapted for stage musicals and other media.
History:
In 2013, NBC began a series of live television broadcasts of musicals with The Sound of Music Live! Although the production received mixed reviews, it was a ratings success. Further broadcasts have included Peter Pan Live! (NBC 2014), The Wiz Live! (NBC 2015), a UK broadcast, The Sound of Music Live (ITV 2015), Grease: Live (Fox 2016), Hairspray Live! (NBC 2016), A Christmas Story Live! (Fox 2017), and Rent: Live (Fox 2019). Some television shows have set episodes as a musical. Examples include episodes of Ally McBeal, Xena: Warrior Princess ("The Bitter Suite" and "Lyre, Lyre, Heart's On Fire"), Psych ("Psych: The Musical"), Buffy the Vampire Slayer ("Once More, with Feeling"), That's So Raven, Daria, Dexter's Laboratory, The Powerpuff Girls, The Flash, Once Upon a Time, Oz, Scrubs (one episode was written by the creators of Avenue Q), Batman: The Brave and the Bold ("Mayhem of the Music Meister") and That '70s Show (the 100th episode, "That '70s Musical"). Others have included scenes where characters suddenly begin singing and dancing in a musical-theatre style during an episode, such as in several episodes of The Simpsons, 30 Rock, Hannah Montana, South Park, Bob's Burgers and Family Guy. Television series that have extensively used the musical format have included Cop Rock, Flight of the Conchords, Glee, Smash and Crazy Ex-Girlfriend.
History:
There have also been musicals made for the internet, including Dr. Horrible's Sing-Along Blog, about a low-rent super-villain played by Neil Patrick Harris. It was written during the WGA writers' strike. Since 2006, reality TV shows have been used to help market musical revivals by holding a talent competition to cast (usually female) leads. Examples of these are How Do You Solve a Problem like Maria?, Grease: You're the One That I Want!, Any Dream Will Do, Legally Blonde: The Musical – The Search for Elle Woods, I'd Do Anything and Over the Rainbow. In 2021, Schmigadoon! premiered as a parody of, and homage to, Golden Age musicals of the 1940s and 1950s.
History:
2020–2021 theatre shutdown:
The COVID-19 pandemic caused the closure of theatres and theatre festivals around the world in early 2020, including all Broadway and West End theatres. Many performing arts institutions attempted to adapt, or reduce their losses, by offering new (or expanded) digital services. In particular this resulted in the online streaming of previously recorded performances of many companies, as well as bespoke crowdsourcing projects. For example, The Sydney Theatre Company commissioned actors to film themselves at home discussing, then performing, a monologue from one of the characters they had previously played on stage. The casts of musicals, such as Hamilton and Mamma Mia!, united on Zoom calls to entertain individuals and the public. Some performances were streamed live, or presented outdoors or in other "socially distanced" ways, sometimes allowing audience members to interact with the cast. Radio theatre festivals were broadcast. Virtual and even crowd-sourced musicals were created, such as Ratatouille the Musical. Filmed versions of major musicals, like Hamilton, were released on streaming platforms. Andrew Lloyd Webber released recordings of his musicals on YouTube. Due to the closures and loss of ticket sales, many theatre companies were placed in financial peril. Some governments offered emergency aid to the arts. Some musical theatre markets began to reopen in fits and starts by early 2021, with West End theatres postponing their reopening from June to July, and Broadway starting in September. Throughout 2021, however, spikes in the pandemic have caused some closures even after markets reopened.
International musicals:
The U.S. and Britain were the most active sources of book musicals from the 19th century through much of the 20th century (although Europe produced various forms of popular light opera and operetta, for example Spanish Zarzuela, during that period and even earlier). However, the light musical stage in other countries has become more active in recent decades.
International musicals:
Musicals from other English-speaking countries (notably Australia and Canada) often do well locally and occasionally even reach Broadway or the West End (e.g., The Boy from Oz and The Drowsy Chaperone). South Africa has an active musical theatre scene, with revues like African Footprint and Umoja, and book musicals such as Kat and the Kings and Sarafina!, touring internationally. Locally, musicals like Vere, Love and Green Onions, Over the Rainbow: the all-new all-gay... extravaganza and Bangbroek Mountain and In Briefs – a queer little Musical have been produced successfully.
International musicals:
Successful musicals from continental Europe include shows from (among other countries) Germany (Elixier and Ludwig II), Austria (Tanz der Vampire, Elisabeth, Mozart! and Rebecca), Czech Republic (Dracula), France (Starmania, Notre-Dame de Paris, Les Misérables, Roméo et Juliette and Mozart, l'opéra rock) and Spain (Hoy no me puedo levantar and The Musical Sancho Panza).
International musicals:
Japan has recently seen the growth of an indigenous form of musical theatre, both animated and live action, mostly based on Anime and Manga, such as Kiki's Delivery Service and Tenimyu. The popular Sailor Moon metaseries has had twenty-nine Sailor Moon musicals, spanning thirteen years. Beginning in 1914, a series of popular revues have been performed by the all-female Takarazuka Revue, which currently fields five performing troupes. Elsewhere in Asia, the Indian Bollywood musical, mostly in the form of motion pictures, is tremendously successful. Beginning with a 2002 tour of Les Misérables, various Western musicals have been imported to mainland China and staged in English. Attempts at localizing Western productions in China began in 2008 when Fame was produced in Mandarin with a full Chinese cast at the Central Academy of Drama in Beijing. Since then, other Western productions have been staged in China in Mandarin with a Chinese cast. The first Chinese production in the style of Western musical theatre was The Gold Sand in 2005. In addition, Li Dun, a well-known Chinese producer, produced Butterflies, based on a classic Chinese love tragedy, in 2007 as well as Love U Teresa in 2011.
Amateur and school productions:
Musicals are often presented by amateur and school groups in churches, schools and other performance spaces. Although amateur theatre has existed for centuries, even in the New World, François Cellier and Cunningham Bridgeman wrote, in 1914, that prior to the late 19th century, amateur actors were treated with contempt by professionals. After the formation of amateur Gilbert and Sullivan companies licensed to perform the Savoy operas, professionals recognized that the amateur societies "support the culture of music and the drama. They are now accepted as useful training schools for the legitimate stage, and from the volunteer ranks have sprung many present-day favourites." The National Operatic and Dramatic Association was founded in the UK in 1899. It reported, in 1914, that nearly 200 amateur dramatic societies were producing Gilbert and Sullivan works in Britain that year. Similarly, more than 100 community theatres were founded in the US in the early 20th century. This number has grown to an estimated 18,000 in the US. The Educational Theater Association in the US has nearly 5,000 member schools.
Relevance:
The Broadway League announced that in the 2007–08 season, 12.27 million tickets were purchased for Broadway shows for a gross sale amount of almost a billion dollars. The League further reported that during the 2006–07 season, approximately 65% of Broadway tickets were purchased by tourists, and that foreign tourists were 16% of attendees. The Society of London Theatre reported that 2007 set a record for attendance in London. Total attendees in the major commercial and grant-aided theatres in Central London were 13.6 million, and total ticket revenues were £469.7 million. The international musicals scene has been increasingly active in recent decades. Nevertheless, Stephen Sondheim commented in the year 2000: "You have two kinds of shows on Broadway – revivals and the same kind of musicals over and over again, all spectacles. You get your tickets for The Lion King a year in advance, and essentially a family ... pass on to their children the idea that that's what the theater is – a spectacular musical you see once a year, a stage version of a movie. It has nothing to do with theater at all. It has to do with seeing what is familiar. ... I don't think the theatre will die per se, but it's never going to be what it was. ... It's a tourist attraction." However, noting the success in recent decades of original material, and creative re-imaginings of film, plays and literature, theatre historian John Kenrick countered: "Is the Musical dead? ... Absolutely not! Changing? Always! The musical has been changing ever since Offenbach did his first rewrite in the 1850s. And change is the clearest sign that the musical is still a living, growing genre. Will we ever return to the so-called 'golden age', with musicals at the center of popular culture? Probably not. Public taste has undergone fundamental changes, and the commercial arts can only flow where the paying public allows."
**Mondrian OLAP server**
Mondrian OLAP server:
Mondrian is an open source OLAP (online analytical processing) server, written in Java. It supports the MDX (multidimensional expressions) query language and the XML for Analysis and olap4j interface specifications. It reads from SQL and other data sources and aggregates data in a memory cache.
Mondrian OLAP server:
Mondrian is used for: high-performance, interactive analysis of large or small volumes of information; dimensional exploration of data, for example analyzing sales by product line, by region, or by time period; parsing the MDX language into Structured Query Language (SQL) to retrieve answers to dimensional queries; high-speed queries through the use of aggregate tables in the RDBMS; and advanced calculations using the calculation expressions of the MDX language.
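As a rough illustration of that query path, the sketch below issues an MDX query to Mondrian through the olap4j API and prints the result. It is a minimal, hypothetical example, not taken from this article: the JDBC URL, schema file location, cube name ([Sales]) and member names ([Measures].[Unit Sales], [Product].[Product Line]) are illustrative assumptions.

```java
import java.sql.DriverManager;
import java.util.Arrays;
import org.olap4j.Cell;
import org.olap4j.CellSet;
import org.olap4j.OlapConnection;
import org.olap4j.OlapStatement;
import org.olap4j.Position;

public class MondrianQueryExample {
    public static void main(String[] args) throws Exception {
        // Mondrian's olap4j driver wraps an ordinary JDBC connection plus a
        // schema file that maps relational tables to cubes, dimensions and
        // measures. URL, database and schema path below are hypothetical;
        // the Mondrian and olap4j jars must be on the classpath.
        OlapConnection connection = DriverManager.getConnection(
                "jdbc:mondrian:"
                + "Jdbc=jdbc:postgresql://localhost/warehouse;"
                + "Catalog=file:/path/to/SalesSchema.xml")
            .unwrap(OlapConnection.class);

        OlapStatement statement = connection.createStatement();

        // Mondrian parses this MDX, generates SQL against the fact and
        // dimension tables (or aggregate tables, when configured), and
        // caches the resulting cells in memory for later queries.
        CellSet result = statement.executeOlapQuery(
            "SELECT {[Measures].[Unit Sales]} ON COLUMNS, "
            + "{[Product].[Product Line].Members} ON ROWS "
            + "FROM [Sales]");

        // Walk the row axis (axis 1) and read the single measure column (axis 0).
        for (Position row : result.getAxes().get(1).getPositions()) {
            Cell cell = result.getCell(Arrays.asList(0, row.getOrdinal()));
            System.out.println(row.getMembers().get(0).getName()
                + " = " + cell.getFormattedValue());
        }
        connection.close();
    }
}
```

The dimensional-to-relational translation happens inside executeOlapQuery: callers only see MDX and cell values, while Mondrian decides whether to answer from its in-memory cache, from aggregate tables, or by generating fresh SQL.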
Mondrian History:
The first public release of Mondrian was on August 9, 2002. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Photographic plate**
Photographic plate:
Photographic plates preceded photographic film as a capture medium in photography, and were still used in some communities up until the late 20th century. The light-sensitive emulsion of silver salts was coated on a glass plate, typically thinner than common window glass.
History:
Glass plates were far superior to film for research-quality imaging because they were stable and less likely to bend or distort, especially in large-format frames for wide-field imaging. Early plates used the wet collodion process. The wet plate process was replaced late in the 19th century by gelatin dry plates. A view camera nicknamed "The Mammoth" weighing 1,400 pounds (640 kg) was built by George R. Lawrence in 1899, specifically to photograph "The Alton Limited" train owned by the Chicago & Alton Railway. It took photographs on glass plates measuring 8 feet (2.4 m) × 4.5 feet (1.4 m). Glass plate photographic material largely faded from the consumer market in the early years of the 20th century, as more convenient and less fragile films were increasingly adopted. However, photographic plates were reportedly still being used by one photography business in London until the 1970s, and by one in Bradford called the Belle Vue Studio that closed in 1975. They were in wide use by the professional astronomical community as late as the 1990s. Workshops on the use of glass plate photography as an alternative medium or for artistic use are still being conducted.
Scientific uses:
Astronomy:
Many famous astronomical surveys were taken using photographic plates, including the first Palomar Observatory Sky Survey (POSS) of the 1950s, the follow-up POSS-II survey of the 1990s, and the UK Schmidt survey of southern declinations. A number of observatories, including Harvard College and Sonneberg, maintain large archives of photographic plates, which are used primarily for historical research on variable stars.
Scientific uses:
Many solar system objects were discovered by using photographic plates, superseding earlier visual methods. Discovery of minor planets using photographic plates was pioneered by Max Wolf beginning with his discovery of 323 Brucia in 1891. The first natural satellite discovered using photographic plates was Phoebe in 1898. Pluto was discovered using photographic plates in a blink comparator; its moon Charon was discovered 48 years later in 1978 by U.S. Naval Observatory astronomer James W. Christy by carefully examining a bulge in Pluto's image on a photographic plate.
Scientific uses:
Glass-backed plates, rather than film, were generally used in astronomy because they do not shrink or deform noticeably in the development process or under environmental changes. Several important applications of astrophotography, including astronomical spectroscopy and astrometry, continued using plates until digital imaging improved to the point where it could outmatch photographic results. Kodak and other manufacturers discontinued production of most kinds of plates as the market for them dwindled between 1980 and 2000, terminating most remaining astronomical use, including for sky surveys.
Scientific uses:
Physics:
Photographic plates were also an important tool in early high-energy physics, as they are blackened by ionizing radiation. Ernest Rutherford was one of the first to study the absorption, in various materials, of the rays produced in radioactive decay, by using photographic plates to measure the intensity of the rays. The development of nuclear emulsions optimised for particle detection in the 1930s and 1940s, first in physics laboratories and then by commercial manufacturers, enabled the discovery and measurement of the pi-meson (1947) and the K-meson (1949), initiating a flood of new particle discoveries in the second half of the 20th century.
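As a quantitative aside (this is standard photographic densitometry, not a figure from this article), the blackening that such experiments read off a plate is usually expressed as an optical density

\[ D = \log_{10}\!\left(\frac{\Phi_{\text{incident}}}{\Phi_{\text{transmitted}}}\right), \]

so a region of emulsion that transmits only 1% of the light shone through it has D = 2. Within the emulsion's usable range, greater accumulated exposure to the ionizing rays yields a higher density, which is what allowed the relative intensity of the rays to be compared.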
Scientific uses:
Electron microscopy:
Photographic emulsions were originally coated on thin glass plates for imaging with electron microscopes, which provided a more rigid, stable and flatter plane compared to plastic films. Beginning in the 1970s, high-contrast, fine grain emulsions coated on thicker plastic films manufactured by Kodak, Ilford and DuPont replaced glass plates. These films have largely been replaced by digital imaging technologies.
Medical imaging:
The sensitivity of certain types of photographic plates to ionizing radiation (usually X-rays) is also useful in medical imaging and material science applications, although they have been largely replaced with reusable and computer readable image plate detectors and other types of X-ray detectors.
Decline:
The earliest flexible films of the late 1880s were sold for amateur use in medium-format cameras. The plastic was not of very high optical quality and tended to curl and otherwise not provide as desirably flat a support surface as a sheet of glass. Initially, a transparent plastic base was more expensive to produce than glass. Quality was eventually improved, manufacturing costs came down, and most amateurs gladly abandoned plates for films. After large-format high quality cut films for professional photographers were introduced in the late 1910s, the use of plates for ordinary photography of any kind became increasingly rare.
Decline:
The persistent use of plates in astronomical and other scientific applications started to decline in the early 1980s as they were gradually replaced by charge-coupled devices (CCDs), which also provide outstanding dimensional stability. CCD cameras have several advantages over glass plates, including high efficiency, linear light response, and simplified image acquisition and processing. However, even the largest CCD formats (e.g., 8192 × 8192 pixels) still do not have the detecting area and resolution of most photographic plates, which has forced modern survey cameras to use large CCD arrays to obtain the same coverage.
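To put that comparison in rough numbers (a back-of-the-envelope sketch using assumed, typical values rather than figures from this article): a Schmidt-type survey plate about 35 cm on a side, with emulsion resolving detail at roughly the 10 μm scale, holds on the order of

\[ \left(\frac{0.35\ \text{m}}{10\ \mu\text{m}}\right)^{2} = 35{,}000^{2} \approx 1.2 \times 10^{9} \]

resolution elements, whereas a single 8192 × 8192 CCD offers about 6.7 × 10^7 pixels. Matching one plate's combination of field and resolution would therefore take a mosaic of roughly 18 such detectors, which is essentially what modern survey cameras do with their large CCD arrays.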
Decline:
The manufacture of photographic plates has been discontinued by Kodak, Agfa and other widely known traditional makers. Eastern European sources have subsequently catered to the minimal remaining demand, practically all of it for use in holography, which requires a recording medium with a large surface area and a submicroscopic level of resolution that currently (2014) available electronic image sensors cannot provide. In the realm of traditional photography, a small number of historical process enthusiasts make their own wet or dry plates from raw materials and use them in vintage large-format cameras.
Preservation:
Several institutions have established archives to preserve photographic plates and prevent their valuable historical information from being lost. The emulsion on the plate can deteriorate. In addition, the glass plate medium is fragile and prone to cracking if not stored correctly.
Historical archives:
The United States Library of Congress has a large collection of both wet and dry plate photographic negatives, dating from 1855 through 1900, over 7,500 of which have been digitized from the period 1861 to 1865.
Preservation:
The George Eastman Museum holds an extensive collection of photographic plates. Wet plate negatives measuring 4 feet 6 inches (1.37 m) × 3 feet 2 inches (0.97 m), discovered in 1951 as part of the Holtermann Collection, were reported in 1955 to be the largest glass negatives found up to that time. These images were taken in 1875 by Charles Bayliss and formed the "Shore Tower" panorama of Sydney Harbour. Albumen contact prints made from these negatives are in the holdings of the Holtermann Collection, and the negatives themselves are listed among the Collection's current holdings.
Preservation:
Scientific archives:
Preservation of photographic plates is a particular need in astronomy, where changes often occur slowly and the plates represent irreplaceable records of the sky and astronomical objects that extend back over 100 years. Digitization of astronomical plates provides free and easy access to these unique astronomical data and is one of the most popular approaches to preserving them. This approach was applied at the Baldone Astrophysical Observatory, where about 22,000 glass and film plates of the Schmidt Telescope were scanned and cataloged. Another example of an astronomical plate archive is the Astronomical Photographic Data Archive (APDA) at the Pisgah Astronomical Research Institute (PARI). APDA was created in response to recommendations of a group of international scientists who gathered in 2007 to discuss how best to preserve astronomical plates (see the Osborn and Robbins reference listed under Further reading). The discussions revealed that some observatories could no longer maintain their plate collections and needed a place to archive them. APDA is dedicated to housing and cataloging unwanted plates, with the goal of eventually cataloging the plates and creating a database of images that can be accessed via the Internet by the global community of scientists, researchers, and students. APDA now has a collection of more than 404,000 photographic images from over 40 observatories, housed in a secure building with environmental control. The facility possesses several plate scanners, including two high-precision ones, GAMMA I and GAMMA II, built for NASA and the Space Telescope Science Institute (STScI) and used by a team under the leadership of the late Barry Lasker to develop the Guide Star Catalog and Digitized Sky Survey that are used to guide and direct the Hubble Space Telescope. APDA's networked storage system can store and analyze more than 100 terabytes of data. A historical collection of photographic plates from Mt. Wilson Observatory is available at the Carnegie Observatories. Metadata is available via a searchable database, while a portion of the plates has been digitized.
**J-CODE**
J-CODE:
J-CODE, an acronym for Joint Criminal Opioid Darknet Enforcement, is an FBI operation announced by U.S. Attorney General Jeff Sessions on January 29, 2018, in Pittsburgh, Pennsylvania which targets illegal opioid distribution on the Darknet. Given the integrity and robustness of the hidden services of the Tor anonymity network, however, sting operations, the seizure of servers, the tracking of postal deliveries, and in general the exploitation of failures of operational security are expected to be standard operating procedure. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |