**Earmark (agriculture)** An earmark is a cut or mark in the ear of livestock animals such as cattle, deer, pigs, goats, camels or sheep, made to show ownership, year of birth or sex.

The term dates to the 16th century in England. For example, in a defamation case heard in King's Bench in 1541, the defamatory statement included "George Butteler hath eremarked a mare of one Robert Hawk." The practice existed in the Near East up to the time of Islam. Against this, in Q. 4:119 the Qur'an quotes the Devil promising, "I will mislead them, I will entice them, I will command them to mark the ears of livestock, and I will command them to distort the creation of God."

Earmarks are typically registered when a stock owner registers a livestock brand for their use. There are many rules and regulations concerning the use of earmarks between states and countries; Tasmanian sheep and cattle, for instance, must be earmarked before they are six months old. Generally the owner's earmark is placed in a designated ear of a camel or sheep to indicate its gender: if a registered earmark is used, it must typically be applied to the right ear for ewes and the left ear for female camels. The other ear of a sheep may then be used to show its year of birth. Cattle earmarks are often a variety of knife cuts in the ear made as an aid to identification, but an earmark does not necessarily constitute proof of ownership.

Since the 1950s it has been more common to use ear tags to identify livestock, because coloured tags can convey more information than earmarks; New Zealand dairy farmers popularised such tags in the earliest successful use of them. Because of the ubiquity of earmarking, in the nineteenth and twentieth centuries it became common parlance to call any identifying mark an earmark. In early times many politicians came from country or farming backgrounds and were adept at using such words in new ways and creating new concepts; today it is common to refer to an institution's ability to designate funds for a specific use or owner as an earmark.

Laboratory animals: Laboratory mice are often kept several to a cage, since mice are social animals, so some method of identifying them individually is necessary. Earmarks may be used, although non-traumatic methods such as tattooing their tails or painting spots on white mice with crystal violet or permanent markers can be used as well. Microchips are less commonly used in mice because of their expense relative to the short life span of a mouse. Earmarking a mutant strain of mice called MRL/MpJ led to the accidental discovery that they can regenerate tissue very quickly: scientists working with them found that the holes punched in their ears kept growing back, healing over completely with regenerated cartilage, blood vessels, and skin with hair follicles. This strain of mice was later found to heal damage to other body parts, such as knee cartilage and heart muscle, significantly better than other mice.
**Bird flight** Bird flight is the primary mode of locomotion used by most bird species, in which birds take off and fly. Flight assists birds with feeding, breeding, avoiding predators, and migrating. Bird flight is one of the most complex forms of locomotion in the animal kingdom. Each facet of this type of motion, including hovering, taking off, and landing, involves many complex movements. As different bird species adapted over millions of years through evolution for specific environments, prey, predators, and other needs, they developed specializations in their wings and acquired different forms of flight. Various theories exist about how bird flight evolved, including flight from falling or gliding (the "trees down" hypothesis), from running or leaping (the "ground up" hypothesis), from wing-assisted incline running, and from proavis (pouncing) behavior.

**Basic mechanics of bird flight** Lift, drag and thrust: The fundamentals of bird flight are similar to those of aircraft, in which the aerodynamic forces sustaining flight are lift, drag, and thrust. Lift is produced by the action of air flow on the wing, which is an airfoil. The airfoil is shaped such that the air provides a net upward force on the wing while the movement of air is directed downward. Additional net lift may come from airflow around the bird's body in some species, especially during intermittent flight while the wings are folded or semi-folded (cf. lifting body). Aerodynamic drag is the force opposite to the direction of motion, and hence the source of energy loss in flight. The drag force can be separated into two portions: lift-induced drag, the inherent cost of the wing producing lift (this energy ends up primarily in the wingtip vortices), and parasitic drag, which includes skin-friction drag from the friction of air against body surfaces and form drag from the bird's frontal area. The streamlining of the bird's body and wings reduces these forces. Unlike aircraft, which have engines to produce thrust, birds flap their wings with a given flapping amplitude and frequency to generate thrust.

**Flight** Birds use mainly three types of flight, distinguished by wing motion.

Gliding flight: In gliding flight, the upward aerodynamic force is equal to the weight. No propulsion is used; the energy needed to counteract the loss due to aerodynamic drag is either taken from the potential energy of the bird, resulting in descending flight, or replaced by rising air currents ("thermals"), in which case the flight is referred to as soaring flight. For specialist soaring birds (obligate soarers), the decision to engage in flight is strongly related to atmospheric conditions that allow individuals to maximise flight efficiency and minimise energetic costs.

Flapping flight: When a bird flaps, as opposed to gliding, its wings continue to develop lift as before, but the lift is rotated forward to provide thrust, which counteracts drag and increases speed; this in turn increases lift to counteract the bird's weight, allowing it to maintain height or to climb. Flapping involves two stages: the down-stroke, which provides the majority of the thrust, and the up-stroke, which can also (depending on the bird's wings) provide some thrust. At each up-stroke the wing is slightly folded inwards to reduce the energetic cost of flapping-wing flight. Birds change the angle of attack continuously within a flap, as well as with speed.
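As a point of reference for the lift and angle-of-attack discussion above, the standard aerodynamic lift relation may help; it is a textbook formula, not stated in the source text:

\[ L = \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L , \]

where \(\rho\) is the air density, \(v\) the airspeed, \(S\) the wing area, and \(C_L\) the lift coefficient, which depends on the airfoil shape and the angle of attack. Flapping flight effectively modulates \(v\) and \(C_L\) along the wing throughout each stroke.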
Bounding flight: Small birds often fly long distances using a technique in which short bursts of flapping alternate with intervals in which the wings are folded against the body, a pattern known as "bounding" or "flap-bounding" flight. When the bird's wings are folded, its trajectory is primarily ballistic, with a small amount of body lift. The pattern is believed to decrease the energy required by reducing aerodynamic drag during the ballistic part of the trajectory, and to increase the efficiency of muscle use.

Hovering: Several bird species use hovering, with one family, the hummingbirds, specialized for it. True hovering occurs by generating lift through flapping alone, rather than by passage through the air, and requires considerable energy expenditure. This usually confines the ability to smaller birds, but some larger birds, such as kites or ospreys, can hover for a short period of time. Although not a true hover, some birds remain in a fixed position relative to the ground or water by flying into a headwind; hummingbirds, kestrels, terns and hawks use this wind hovering. Most birds that hover have high-aspect-ratio wings that are suited to low-speed flying. Hummingbirds are a unique exception, the most accomplished hoverers of all birds. Hummingbird flight is different from other bird flight in that the wing is extended throughout the whole stroke, which traces a symmetrical figure of eight, with the wing producing lift on both the up- and down-stroke. Hummingbirds beat their wings at about 43 beats per second, and some species may reach as many as 80 beats per second.

Take-off and landing: Take-off is one of the most energetically demanding aspects of flight, as the bird must generate enough airflow across the wing to create lift. Small birds do this with a simple upward jump. This technique does not work for larger birds, such as albatrosses and swans, which instead must take a running start to generate sufficient airflow. Large birds take off by facing into the wind or, if they can, by perching on a branch or cliff so they can simply drop off into the air. Landing is also a problem for large birds with high wing loads. Some species deal with this by aiming for a point below the intended landing area (such as a nest on a cliff), then pulling up beforehand; if timed correctly, the airspeed once the target is reached is virtually nil. Landing on water is simpler, and the larger waterfowl species prefer to do so whenever possible, landing into the wind and using their feet as skids. To lose height rapidly prior to landing, some large birds such as geese indulge in a rapid alternating series of sideslips, or even briefly turn upside down, in a maneuver termed whiffling.

**Wings** The bird's forelimbs, the wings, are the key to flight. Each wing has a central vane to hit the wind, composed of three limb bones: the humerus, ulna and radius. The hand, or manus, which ancestrally was composed of five digits, is reduced to three digits (digits II, III and IV, or I, II, III, depending on the scheme followed), which serve as an anchor for the primaries, one of two groups of flight feathers responsible for the wing's airfoil shape. The other set of flight feathers, behind the carpal joint on the ulna, are called the secondaries. The remaining feathers on the wing are known as coverts, of which there are three sets. The wing sometimes has vestigial claws. In most species these are lost by the time the bird is adult (such as the highly visible ones used for active climbing by hoatzin chicks), but claws are retained into adulthood by the secretarybird, screamers, finfoots, ostriches, several swifts and, as a local trait in a few specimens, numerous other species. Albatrosses have locking mechanisms in the wing joints that reduce the strain on the muscles during soaring flight.

Even within a species, wing morphology may differ. For example, adult European Turtle Doves have been found to have longer but more rounded wings than juveniles, suggesting that juvenile wing morphology facilitates their first migrations, while selection for flight maneuverability becomes more important after the juveniles' first molt. Female birds exposed to predators during ovulation produce chicks that grow their wings faster, and longer, than chicks produced by predator-free females; both adaptations may make the chicks better at avoiding avian predators.

Wing shape: The shape of the wing is important in determining the flight capabilities of a bird. Different shapes correspond to different trade-offs between advantages such as speed, low energy use, and maneuverability. Two important parameters are the aspect ratio and the wing loading. Aspect ratio is the ratio of wingspan to the mean of its chord (equivalently, the square of the wingspan divided by the wing area); a high aspect ratio gives long, narrow wings that are useful for endurance flight because they generate more lift. Wing loading is the ratio of weight to wing area. (A numerical sketch of both parameters follows at the end of this entry.) Most kinds of bird wing can be grouped into four types, with some falling between two of these types: elliptical wings, high-speed wings, high-aspect-ratio wings and slotted high-lift wings.

Elliptical wings: Technically, elliptical wings are those composed of quarter ellipses meeting conformally at the tips; the early model Supermarine Spitfire is an example. Some birds have vaguely elliptical wings, including the albatross wing of high aspect ratio. Although the term is convenient, it might be more precise to refer to a curving taper with fairly small radius at the tips. Many small birds have a low aspect ratio with elliptical character (when spread), allowing for tight maneuvering in confined spaces such as might be found in dense vegetation. As such, elliptical wings are common in forest raptors (such as Accipiter hawks) and many passerines, particularly non-migratory ones (migratory species have longer wings). They are also common in species that use a rapid take-off to evade predators, such as pheasants and partridges.

High-speed wings: High-speed wings are short, pointed wings that, when combined with a heavy wing loading and rapid wingbeats, provide an energetically expensive high speed. This type of flight is used by the bird with the fastest wing speed, the peregrine falcon, as well as by most ducks. Birds that make long migrations typically have this type of wing. The same wing shape is used by the auks for a different purpose: auks use their wings to "fly" underwater. The peregrine falcon has the highest recorded dive speed, at 242 miles per hour (389 km/h); the fastest straight, powered flight is that of the spine-tailed swift, at 105 mph (169 km/h).

High-aspect-ratio wings: High-aspect-ratio wings, which usually have low wing loading and are far longer than they are wide, are used for slower flight. This may take the form of almost hovering (as used by kestrels, terns and nightjars) or of soaring and gliding flight, particularly the dynamic soaring used by seabirds, which takes advantage of wind-speed variation at different altitudes (wind shear) above ocean waves to provide lift. Low-speed flight is also important for birds that plunge-dive for fish.

Soaring wings with deep slots: These wings are favored by larger species of inland birds, such as eagles, vultures, pelicans, and storks. The slots at the end of the wings, between the primaries, reduce the induced drag and wingtip vortices by "capturing" the energy in air flowing from the lower to the upper wing surface at the tips, while the shorter size of the wings aids in take-off (high-aspect-ratio wings require a long taxi to get airborne).

**Coordinated formation flight** A wide variety of birds fly together in a symmetric V-shaped or J-shaped coordinated formation, also referred to as an "echelon", especially during long-distance flight or migration. It is often assumed that birds resort to this pattern of formation flying in order to save energy and improve aerodynamic efficiency. The birds flying at the tips and at the front interchange positions in a timely, cyclical fashion to spread flight fatigue equally among the flock members. The wingtips of the leading bird in an echelon create a pair of opposite-rotating line vortices. The vortices trailing a bird have a downwash part behind the bird and, at the same time, an upwash on the outside that could hypothetically aid the flight of a trailing bird. In a 1970 study, the authors claimed that each bird in a V formation of 25 members can achieve a reduction of induced drag and, as a result, increase its range by 71%. It has also been suggested that birds' wings produce induced thrust at their tips, allowing for proverse yaw and net upwash at the last quarter of the wing; this would allow birds to overlap their wings and gain Newtonian lift from the bird in front. Studies of waldrapp ibis show that birds flying in V positions spatially coordinate the phase of wing flapping and show wingtip path coherence, enabling them to make maximal use of the available energy of upwash over the entire flap cycle. In contrast, birds flying in a stream immediately behind another bird do not have wingtip coherence, and their flapping is out of phase compared with birds flying in V patterns, so as to avoid the detrimental effects of the downwash from the leading bird's flight.

**Adaptations for flight** The most obvious adaptation to flight is the wing, but because flight is so energetically demanding, birds have evolved several other adaptations to improve efficiency when flying. Birds' bodies are streamlined to help overcome air resistance. The bird skeleton is hollow, to reduce weight, and many unnecessary bones have been lost (such as the bony tail of the early bird Archaeopteryx), along with the toothed jaw of early birds, which has been replaced with a lightweight beak. The skeleton's breastbone has adapted into a large keel, suitable for the attachment of large, powerful flight muscles. The vanes of each feather have hooklets called barbules that zip the vanes of individual feathers together, giving the feathers the strength needed to hold the airfoil (these are often lost in flightless birds); the barbules maintain the shape and function of the feather. Each feather has a major (greater) side and a minor (lesser) side, meaning that the shaft, or rachis, does not run down the center of the feather; rather, it runs longitudinally off-center, with the lesser or minor side to the front and the greater or major side to the rear. During flight and flapping of the wings, this feather anatomy causes the feather to rotate in its follicle. The rotation occurs in the up motion of the wing: the greater side points down, letting air slip through the wing. This essentially breaks the integrity of the wing, allowing a much easier movement in the up direction. The integrity of the wing is re-established in the down movement, which allows for part of the lift inherent in bird wings. This function is most important in taking off or achieving lift at very low or slow speeds, where the bird is reaching up and grabbing air and pulling itself up. At high speeds the airfoil function of the wing provides most of the lift needed to stay in flight.

The large amounts of energy required for flight have led to the evolution of a unidirectional pulmonary system to provide the large quantities of oxygen required for birds' high respiratory rates. This high metabolic rate produces large quantities of radicals in the cells, which can damage DNA and lead to tumours. Birds, however, do not suffer the otherwise expected shortened lifespan, as their cells have evolved a more efficient antioxidant system than those found in other animals. In addition to anatomical and metabolic modifications, birds have also adapted their behavior to life in the air: to avoid flying into each other, birds take to the right when on a collision course with another bird.

**Evolution of bird flight** Most paleontologists agree that birds evolved from small theropod dinosaurs, but the origin of bird flight is one of the oldest and most hotly contested debates in paleontology. The four main hypotheses are: "from the trees down", that birds' ancestors first glided down from trees and then acquired other modifications that enabled true powered flight; "from the ground up", that birds' ancestors were small, fast predatory dinosaurs in which feathers developed for other reasons and then evolved further to provide first lift and then true powered flight; wing-assisted incline running (WAIR), a version of "from the ground up" in which birds' wings originated from forelimb modifications that provided downforce, enabling the proto-birds to run up extremely steep slopes such as the trunks of trees; and "pouncing proavis", which posits that flight evolved by modification from arboreal ambush tactics.

There has also been debate about whether the earliest known bird, Archaeopteryx, could fly. It appears that Archaeopteryx had the avian brain structures and inner-ear balance sensors that birds use to control their flight. Archaeopteryx also had a wing feather arrangement like that of modern birds, and similarly asymmetrical flight feathers on its wings and tail. But Archaeopteryx lacked the shoulder mechanism by which modern birds' wings produce swift, powerful upstrokes; this may mean that it and other early birds were incapable of flapping flight and could only glide. The presence of most fossils in marine sediments, in habitats devoid of vegetation, has led to the hypothesis that they may have used their wings as aids to run across the water surface in the manner of the basilisk lizards. In March 2018, scientists reported that Archaeopteryx was likely capable of flight, but in a manner substantially different from that of modern birds.

From the trees down: This was the earliest hypothesis, encouraged by the examples of gliding vertebrates such as flying squirrels. It suggests that proto-birds like Archaeopteryx used their claws to clamber up trees and glided off from the tops. Some recent research undermines the "trees down" hypothesis by suggesting that the earliest birds and their immediate ancestors did not climb trees: modern birds that forage in trees have much more curved toe-claws than those that forage on the ground, and the toe-claws of Mesozoic birds and of closely related non-avian theropod dinosaurs are like those of modern ground-foraging birds.

From the ground up: Feathers have been discovered in a variety of coelurosaurian dinosaurs (including the early tyrannosauroid Dilong), and modern birds are classified as coelurosaurs by nearly all palaeontologists. The original functions of feathers may have included thermal insulation and competitive displays. The most common version of the "from the ground up" hypothesis argues that birds' ancestors were small ground-running predators (rather like roadrunners) that used their forelimbs for balance while pursuing prey, and that the forelimbs and feathers later evolved in ways that provided gliding and then powered flight. Another "ground upwards" theory argues that the evolution of flight was initially driven by competitive displays and fighting: displays required longer feathers and longer, stronger forelimbs, and many modern birds use their wings as weapons, with downward blows similar in action to flapping flight. Many of the Archaeopteryx fossils come from marine sediments, and it has been suggested that wings may have helped the birds run over water in the manner of the common basilisk. Most recent attacks on the "from the ground up" hypothesis attempt to refute its assumption that birds are modified coelurosaurian dinosaurs. The strongest attacks are based on embryological analyses, which conclude that birds' wings are formed from digits 2, 3 and 4 (corresponding to the index, middle and ring fingers in humans; the first of a bird's three digits forms the alula, which birds use to avoid stalling in low-speed flight, for example when landing), whereas the hands of coelurosaurs are formed by digits 1, 2 and 3 (the thumb and first two fingers in humans). These embryological analyses were immediately challenged on the embryological grounds that the "hand" often develops differently in clades that have lost some digits in the course of their evolution, and that therefore birds' hands do develop from digits 1, 2 and 3.

Wing-assisted incline running: The wing-assisted incline running (WAIR) hypothesis was prompted by observation of young chukar chicks, and proposes that wings developed their aerodynamic functions as a result of the need to run quickly up very steep slopes, such as tree trunks, for example to escape from predators. Note that in this scenario birds need downforce to give their feet increased grip. But early birds, including Archaeopteryx, lacked the shoulder mechanism that modern birds' wings use to produce swift, powerful upstrokes.
Since the downforce that WAIR requires is generated by upstrokes, it seems that early birds were incapable of WAIR.

Pouncing proavis model: The proavis theory was first proposed by Garner, Taylor, and Thomas in 1999: "We propose that birds evolved from predators that specialized in ambush from elevated sites, using their raptorial hindlimbs in a leaping attack. Drag-based, and later lift-based, mechanisms evolved under selection for improved control of body position and locomotion during the aerial part of the attack. Selection for enhanced lift-based control led to improved lift coefficients, incidentally turning a pounce into a swoop as lift production increased. Selection for greater swooping range would finally lead to the origin of true flight." The authors believed that this theory had four main virtues: it predicts the observed sequence of character acquisition in avian evolution; it predicts an Archaeopteryx-like animal, with a skeleton more or less identical to terrestrial theropods, with few adaptations to flapping but very advanced aerodynamic asymmetrical feathers; it explains that primitive pouncers (perhaps like Microraptor) could coexist with more advanced fliers (like Confuciusornis or Sapeornis), since they did not compete for flying niches; and it explains that the evolution of elongated rachis-bearing feathers began with simple forms that produced a benefit by increasing drag, with more refined feather shapes later beginning to provide lift as well.

**Uses and loss of flight in modern birds** Birds use flight to obtain prey on the wing, for foraging, to commute to feeding grounds, and to migrate between the seasons. Flight is also used by some species to display during the breeding season and to reach safe, isolated places for nesting. Flight is more energetically expensive in larger birds, and many of the largest species fly by soaring and gliding (without flapping their wings) as much as possible; many physiological adaptations have evolved that make flight more efficient. Birds that settle on isolated oceanic islands that lack ground-based predators may, over the course of evolution, lose the ability to fly. One such example is the flightless cormorant, native to the Galápagos Islands. This illustrates both flight's importance in avoiding predators and its extreme demand for energy.
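To make the two wing-shape parameters from the Wing shape section concrete, here is a minimal Python sketch of their definitions; the wingspan, area, and weight figures for a large soaring seabird are hypothetical placeholders, not measurements from the text.

```python
def aspect_ratio(wingspan_m: float, wing_area_m2: float) -> float:
    """Aspect ratio = wingspan squared / wing area (dimensionless)."""
    return wingspan_m ** 2 / wing_area_m2

def wing_loading(weight_n: float, wing_area_m2: float) -> float:
    """Wing loading = weight / wing area (N per square metre)."""
    return weight_n / wing_area_m2

# Hypothetical numbers for a large soaring seabird (illustrative only):
span, area, weight = 3.0, 0.6, 85.0      # m, m^2, N
print(aspect_ratio(span, area))          # 15.0 -> long, narrow endurance wing
print(wing_loading(weight, area))        # ~141.7 N/m^2
```

A high aspect ratio with modest wing loading is the combination the text associates with endurance and soaring flight; short wings with heavy loading correspond to the energetically expensive high-speed type.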
**Microsoft Site Server** Microsoft Site Server, first released in 1996, is Microsoft's discontinued solution to the growing difficulty of managing complex websites that combined multiple technologies, such as user management and authentication/authorization, content management, analysis, and indexing and search. Site Server 2.0, released in early 1997, incorporated electronic commerce technology from Microsoft Merchant Server, Microsoft's first effort at providing a solution to the growing business of Internet-based commerce (or e-commerce). During the course of its evolution (culminating with Site Server 3.0), Site Server expanded on Merchant Server's functionality by annexing content management tools, which, it was thought, would typically be involved in managing Web-facing content. Consequently, Site Server became a solution not only for businesses wanting to sell products online, but also for companies whose corporate intranet servers hosted documents.

Although Site Server went through several iterations, the most widely discussed and perhaps most widely adopted version was the last, Site Server 3.0, released in 1998. The primary areas of Site Server 3.0 functionality were indexing and search, content management, product management, order processing, site personalization, and ad serving.

Product legacy: For its time, Site Server was one of a select few credible alternatives for such functionality, particularly on the Windows platform. At its release it generally drew very positive reviews in technical journals, although compared to later products its management tools were on the arcane side. The content management functionality was adequate, but not particularly competitive with the dedicated document management systems available at the time; on this front, Site Server's main advantage was its low cost. Another potential source of confusion was the taxonomy management system: the tools used to maintain item metadata were very basic and required a degree of technical familiarity foreign to most business users. On the plus side, once configured, Site Server Commerce Edition earned very high ratings for managing e-commerce. Management of products and orders was fairly sophisticated, a strength that would be extended in the technology that succeeded it, Microsoft Commerce Server.

Related technologies: Site Server required either the Windows NT 4.0 or Windows 2000 operating system, and it also depended on Microsoft SQL Server. The code came from many acquired companies, including eShop and Interse.

Future development: Microsoft has discontinued production and support of Site Server. E-commerce functionality was moved into a new product called Microsoft Commerce Server. Document and content management features were mostly segregated into another product, Microsoft Content Management Server, which merged with SharePoint Server 2007; today that line has two principal editions, Microsoft SharePoint Server 2019 and Microsoft SharePoint Online, part of the Office 365 services offering.

Versions: After the e-commerce technology was integrated, Site Server was sold in two editions, Standard and Commerce; the Commerce Editions carried a hefty price premium.

1996: Site Server 1.0
1997: Site Server 2.0; Site Server 2.0, Commerce Edition
1998: Site Server 3.0; Site Server 3.0, Commerce Edition
**Covered option** A covered option is a financial transaction in which the holder of securities sells (or "writes") a type of financial options contract known as a "call" or a "put" against stock that they own or are shorting. The seller of a covered option receives compensation, or "premium", for this transaction, which can limit losses; however, selling a covered option also caps the profit potential to the upside. One covered option is sold for every hundred shares the seller wishes to cover. A covered option constructed with a call is called a "covered call", while one constructed with a put is a "covered put". This strategy is generally considered conservative because the seller of a covered option reduces both their risk and their return.

Characteristics: Covered calls are bullish by nature, while covered puts are bearish. The payoff from selling a covered call is identical to that of a short naked put; both variants are short implied-volatility strategies. Covered calls can be sold at various levels of moneyness: out-of-the-money covered calls have a higher potential for profit, but also provide less downside protection, than in-the-money covered calls.
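A minimal Python sketch of the covered-call profit and loss at expiry may make the capped upside concrete. The function and the sample prices are illustrative assumptions, not part of the source; the 100-shares-per-contract convention follows the text above.

```python
def covered_call_pl(price_at_expiry: float, purchase_price: float,
                    strike: float, premium: float, shares: int = 100) -> float:
    """P/L of long stock plus one short call (one contract per 100 shares)."""
    stock_pl = (price_at_expiry - purchase_price) * shares
    # The short call keeps the premium but pays out any value above the strike.
    short_call_pl = (premium - max(price_at_expiry - strike, 0.0)) * shares
    return stock_pl + short_call_pl

# Hypothetical position: stock bought at 50, call written at strike 55 for 2.00.
for s in (40.0, 50.0, 55.0, 70.0):
    print(s, covered_call_pl(s, 50.0, 55.0, 2.0))
# Losses are cushioned by the premium; gains cap at (55 - 50 + 2) * 100 = 700.
```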
**ExMark** ExMark is a term describing the relationship between a fund's return and the market index. The usual designation for this concept is R-squared, but John C. Bogle coined this expression to highlight the difference from other financial products. For a typical mainstream equity fund, the ExMark runs from 80% to 90%, meaning that an exceedingly high proportion of its total return is explained by the performance of the overall stock market. Only the remaining 10-20% of return is explained by some combination of (1) the fund's basic strategy and (2) the tactics and investment selections of the fund's portfolio manager. An ExMark below 80% indicates significantly less predictability of relative performance. A figure of 95% or above means that a fund's return has been shaped predominantly by the action of the stock market itself; such a fund may be a closet index fund, charging high advisory fees while providing little opportunity to add value over and above the market's return. An index fund, the return of which is entirely explained by the action of the stock market, would of course carry an ExMark of 100%.
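Since ExMark is simply R-squared expressed as a percentage, a short Python sketch shows how it could be computed from two return series; the sample arrays are hypothetical, not data from the source.

```python
import numpy as np

def exmark(fund_returns, market_returns) -> float:
    """R-squared of fund returns against market returns, as a percentage."""
    r = np.corrcoef(fund_returns, market_returns)[0, 1]
    return 100.0 * r ** 2

# Hypothetical monthly returns for a market index and an index-hugging fund:
market = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04])
fund = 0.9 * market + np.array([0.002, -0.001, 0.001, 0.0, 0.001, -0.002])
print(round(exmark(fund, market), 1))  # close to 100 -> closet-index behaviour
```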
**Gland** In animals, a gland is a group of cells in an animal's body that synthesizes substances (such as hormones) for release into the bloodstream (endocrine gland) or into cavities inside the body or its outer surface (exocrine gland).

Structure and development: Every gland is formed by an ingrowth from an epithelial surface. This ingrowth may in the beginning possess a tubular structure, but in other instances glands may start as a solid column of cells which subsequently becomes tubulated. As growth proceeds, the column of cells may split or give off offshoots, in which case a compound gland is formed. In many glands the number of branches is limited; in others (salivary, pancreas) a very large structure is finally formed by repeated growth and subdivision. As a rule the branches do not unite with one another, but in one instance, the liver, this does occur, producing a reticulated compound gland. In compound glands the more typical, or secretory, epithelium forms the terminal portion of each branch, while the uniting portions form ducts and are lined with a less modified type of epithelial cell. Glands are also classified according to their shape: if the gland retains its shape as a tube throughout, it is termed a tubular gland; in the second main variety the secretory portion is enlarged and the lumen variously increased in size, and these are termed alveolar or saccular glands.

Types of glands: Glands are divided by function into two groups. Endocrine glands secrete substances that circulate through the bloodstream, releasing their products through the basal lamina into the blood. The basal lamina can typically be seen as a layer around the gland to which numerous tiny blood vessels are attached. These glands often secrete hormones, which play an important role in maintaining homeostasis. The pineal gland, thymus gland, pituitary gland, thyroid gland, and the two adrenal glands are all endocrine glands.

Exocrine glands secrete their products through a duct onto an outer or inner surface of the body, such as the skin or the gastrointestinal tract; secretion is directly onto the apical surface. Glands in this group can be divided into three groups by their method of secretion:
- Apocrine glands: a portion of the secreting cell's body is lost during secretion. The term "apocrine glands" is often used to refer to the apocrine sweat glands; however, apocrine sweat glands may not be true apocrine glands, as they may not use the apocrine method of secretion. Examples: the mammary gland and the sweat glands of the armpit, the pubic region, and the skin around the anus, lips and nipples.
- Holocrine glands: the entire cell disintegrates to secrete its substances, e.g. the sebaceous glands, including the meibomian and Zeis glands.
- Merocrine glands: cells secrete their substances by exocytosis, e.g. mucous and serous glands; also called "eccrine", e.g. the major sweat glands of humans, goblet cells, salivary glands, tear glands and intestinal glands.

The secretory product of exocrine glands may also fall into one of three categories: serous glands secrete a watery, often protein-rich, fluid product, e.g. sweat glands; mucous glands secrete a viscous product rich in carbohydrates (such as glycoproteins), e.g. goblet cells; and sebaceous glands secrete a lipid product, and are also known as oil glands, e.g. Fordyce spots and meibomian glands.

Clinical significance: Adenosis is any disease of a gland. The diseased gland has abnormal formation or development of glandular tissue, which is sometimes tumorous.
**Neuropoiesis** Neuropoiesis is the process by which neural stem cells differentiate to form mature neurons, astrocytes, and oligodendrocytes in the adult mammal. This process is also referred to as adult neurogenesis.

History: While rapid neurogenesis was known to occur in the early stages of life, the production and differentiation of neural stem cells was believed to cease upon maturity. This belief was overturned in the 1960s by the work of Joseph Altman. Using injections of thymidine-H3 to label the nuclei of dividing cells, Altman was able to use autoradiography to determine a neuronal birthdate for each cell in a rat's brain. This research revealed some degree of adult neurogenesis in the hippocampus and olfactory bulb of rats, and paved the way for the possibility of neuropoiesis in the adult mammalian brain. Following Altman's work, thymidine-H3 injections were used to examine the brains of a variety of other species. In the late 1970s, Steven Goldman used this technique to examine the vocal control centers of songbirds, and found widespread evidence of adult neurogenesis in this area of the canary brain. Subsequent studies by Goldman and others revealed the precise mechanisms of neuronal cell differentiation and migration in adult songbirds and, together with studies done in fish and other species, laid the groundwork for the study of neuropoiesis in humans.

Neuropoietic areas in the human brain: The most recognized initial sites of neuropoiesis ending with neurons in adults are the subventricular zone (SVZ), the thin layer of cells just beneath the surface of the lateral ventricles of the brain, and the dentate gyrus of the hippocampus. Neural precursor cells in the human SVZ are known to yield offspring that can then produce glial cells or even migrate and form new neurons in specific areas such as the mammalian olfactory bulb.

Mechanisms of neuropoiesis: Although the exact chemical signaling pathways that regulate neuropoiesis are still poorly understood, there have been some recent advances in this field. Hematopoiesis, the differentiation of stem cells in the bone marrow to form blood cells, is a comparatively well-studied phenomenon, and its study has yielded some insight into the mechanisms of neuropoiesis. Several developmental genes (e.g. Notch, Delta, neurogenin, neuregulin, OCT, presenilin), growth factors (e.g. epidermal growth factor, NGF, brain-derived neurotrophic factor), and extracellular matrix proteins (e.g. tenascin, CD34) have been linked to the mechanisms of neuropoiesis. Many of these factors were originally identified as being involved in hematopoiesis, but have since been found in neuropoietic cells in the SVZ.

Research applications: While a full understanding of neuropoiesis is still some time away, there are numerous applications for this research. A complete understanding of the mechanisms of neural differentiation and proliferation could prove crucial to the treatment of neurodegenerative diseases. Towards this end, many researchers are attempting to control the differentiation and proliferation of neural stem cells in vivo by altering the expression of key genes such as the presenilins and the sonic hedgehog pathway.
**Oral mucocele** Oral mucocele (also mucous extravasation cyst, mucous cyst of the oral mucosa, and mucous retention and extravasation phenomena) is a condition caused by two related phenomena: mucus extravasation phenomenon and mucous retention cyst. Mucous extravasation phenomenon is a swelling of connective tissue consisting of a collection of fluid called mucus. It occurs because of a ruptured salivary gland duct, usually caused by local trauma (damage), in the case of mucous extravasation phenomenon, and because of an obstructed or ruptured salivary duct in the case of a mucus retention cyst. The mucocele has a bluish, translucent color, and is more commonly found in children and young adults. Although these lesions are often called cysts, mucoceles are not true cysts because they have no epithelial lining; rather, they are polyps.

Signs and symptoms: The size of oral mucoceles varies from 1 mm to several centimeters, and they usually are slightly transparent with a blue tinge. On palpation, mucoceles may appear fluctuant but can also be firm. They last from days to years, and may show recurrent swelling with occasional rupturing of their contents.

Locations: The most common location for a mucocele is the inner surface of the lower lip. It can also be found on the inner side of the cheek (the buccal mucosa), on the anterior ventral tongue, and on the floor of the mouth; when found on the floor of the mouth, the mucocele is referred to as a ranula. They are rarely found on the upper lip. As their name suggests, they are basically mucus-lined cysts, and they can also occur in the paranasal sinuses, most commonly the frontal sinuses, the frontoethmoidal region, and the maxillary sinus; sphenoid sinus involvement is extremely rare. When the lumen of the vermiform appendix of the intestine becomes blocked for any reason, a mucocele can also form there.

Variations: A variant of the mucocele is found on the palate, retromolar pad, and posterior buccal mucosa. Known as a "superficial mucocele", this type presents as single or multiple vesicles and bursts into an ulcer. Despite healing after a few days, superficial mucoceles often recur in the same location. Other causes of bumps inside the lips are aphthous ulcers, lipomas, benign tumors of the salivary glands, submucous abscesses, and haemangiomas.

Diagnosis: Microscopically, a mucocele appears as granulation tissue surrounding mucin. Since inflammation occurs concurrently, neutrophils and foamy histiocytes usually are present. On a CT scan, a mucocele is fairly homogeneous, with an attenuation of about 10-18 Hounsfield units. Both mucous retention and extravasation phenomena are classified as salivary gland disorders.

Treatment: Some mucoceles spontaneously resolve on their own after a short time; others are chronic and require surgical removal. Recurrence is possible, so the adjacent salivary gland may be excised as a preventive measure. Several types of procedures are available for the surgical removal of mucoceles, including laser and minimally invasive techniques, which reduce recovery times drastically. Micromarsupialization is an alternative to surgical removal: it uses silk sutures in the dome of the cyst to create new epithelialized drainage pathways, and it is simpler, less traumatic, and well tolerated by patients, especially children. A nonsurgical option that may be effective for a small or newly identified mucocele is to rinse the mouth thoroughly with salt water (one tablespoon of salt per cup) four to six times a day for a few days; this may draw out the fluid trapped underneath the skin without further damaging the surrounding tissue. If the mucocele persists, individuals should see a doctor to discuss further treatment. Smaller cysts may be removed by laser treatment, but larger cysts may have to be removed surgically.
**Exchange operator** In quantum mechanics, the exchange operator \(\hat{P}\), also known as the permutation operator, is a quantum mechanical operator that acts on states in Fock space. The exchange operator acts by switching the labels on any two identical particles described by the joint position quantum state \(|x_1, x_2\rangle\). Since the particles are identical, the notion of exchange symmetry requires that the exchange operator be unitary.

Construction: In three or higher dimensions, the exchange operator can represent a literal exchange of the positions of the pair of particles by motion of the particles in an adiabatic process, with all other particles held fixed. Such motion is often not carried out in practice; rather, the operation is treated as a "what if", similar to a parity inversion or time-reversal operation. Consider two repeated operations of such a particle exchange:

\[ \hat{P}^2 |x_1, x_2\rangle = \hat{P} |x_2, x_1\rangle = |x_1, x_2\rangle . \]

Therefore \(\hat{P}\) is not only unitary but also an operator square root of 1, which leaves the possibilities

\[ \hat{P} |x_1, x_2\rangle = \pm\, |x_2, x_1\rangle . \]

Both signs are realized in nature. Particles satisfying the case of +1 are called bosons, and particles satisfying the case of −1 are called fermions. The spin–statistics theorem dictates that all particles with integer spin are bosons, whereas all particles with half-integer spin are fermions.

The exchange operator commutes with the Hamiltonian and is therefore a conserved quantity. It is thus always possible, and usually most convenient, to choose a basis in which the states are eigenstates of the exchange operator. Such a state is either completely symmetric under exchange of all identical bosons or completely antisymmetric under exchange of all identical fermions of the system. To do so for fermions, for example, the antisymmetrizer builds such a completely antisymmetric state.

In two dimensions, the adiabatic exchange of particles is not necessarily possible. Instead, the eigenvalues of the exchange operator may be complex phase factors (in which case \(\hat{P}\) is not Hermitian); see anyon for this case. The exchange operator is not well defined in a strictly one-dimensional system, though there are constructions of one-dimensional networks that behave as effective two-dimensional systems.

Quantum chemistry: A modified exchange operator is defined in the Hartree–Fock method of quantum chemistry, in order to estimate the exchange energy arising from the exchange statistics described above. In this method, one often defines an energetic exchange operator as

\[ \hat{K}_j(x_1)\, f(x_1) = \varphi_j(x_1) \int \frac{\varphi_j^{*}(x_2)\, f(x_2)}{r_{12}} \, dx_2 , \]

where \(\hat{K}_j(x_1)\) is the one-electron exchange operator, \(f(x_1)\) and \(f(x_2)\) are the one-electron wavefunctions acted upon by the exchange operator as functions of the electron positions, and \(\varphi_j(x_1)\) and \(\varphi_j(x_2)\) are the one-electron wavefunctions of the \(j\)-th electron as functions of the positions of the electrons. Their separation is denoted \(r_{12}\). The labels 1 and 2 are only a notational convenience, since physically there is no way to keep track of "which electron is which".
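As a toy numerical check of the algebra above (not from the source), the following Python/NumPy sketch builds the exchange operator on a minimal two-particle space, where each particle has two basis states, and verifies that \(\hat{P}^2 = 1\) with eigenvalues ±1. On this 2 ⊗ 2 space the symmetric eigenvalue +1 appears three times (the bosonic subspace) and the antisymmetric −1 once (the fermionic singlet).

```python
import numpy as np

# Exchange (swap) operator on a two-particle system; each particle has two
# basis states |0>, |1>, so the joint space is C^2 tensor C^2.
dim = 2
P = np.zeros((dim * dim, dim * dim))
for i in range(dim):
    for j in range(dim):
        P[j * dim + i, i * dim + j] = 1.0   # P |i, j> = |j, i>

assert np.allclose(P @ P, np.eye(dim * dim))   # P^2 = 1 (square root of identity)
print(np.round(np.linalg.eigvalsh(P), 10))     # [-1.  1.  1.  1.]
```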
**Kemp Clay** Kemp Clay: The Kemp Clay is a geologic formation in Texas. It preserves fossils dating back to the Cretaceous period.
**Octadecylphosphonic acid** Octadecylphosphonic acid: Octadecylphosphonic acid (C18H39O3P) is a chemical compound used in thermal paper for receipts, adding machines and tickets.
**Microchimica Acta** Microchimica Acta: Microchimica Acta is a monthly peer-reviewed scientific journal published by Springer Nature. It was established in 1937 by Fritz Pregl. The editors-in-chief are Alberto Escarpa (University of Alcalá) and Mamas I. Prodromidis (University of Ioannina), who succeeded Otto S. Wolfbeis (University of Regensburg). The journal covers research on (bio)chemical analytical methods based on the use of micro- and nanomaterials. According to the Journal Citation Reports, the journal has a 2019 impact factor of 6.232.
**Strong gravity** Strong gravity is a non-mainstream theoretical approach to particle confinement that posits both a cosmological-scale and a particle-scale gravity. In the 1960s, it was taken up as an alternative to the then-young QCD theory by several theorists, including Abdus Salam, who showed that the particle-level gravity approach can produce confinement and asymptotic freedom while not requiring a force behavior differing from an inverse-square law, as QCD does. Sivaram published a review of this bimetric theory approach. Although this approach has not so far led to a recognizably successful unification of strong and other forces, the modern approach of string theory is characterized by a close association between gauge forces and spacetime geometry; in some cases, string theory recognizes important dualities between gravity-like and QCD-like theories, most notably the AdS/QCD correspondence.

The concept of strong gravity follows from applying the gravitational potential energy to the heat term in the equation of the first law of thermodynamics (\(E = Q + W\)), where the total energy is the mass-energy and the work is the kinetic energy: \( mc^2 = kT + E_K \) becomes \( mc^2 = \frac{G m_s m}{r} + E_K \).
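Spelled out step by step, the substitution described above reads as follows; this is a sketch reconstructing the reasoning, with the term identifications taken from the preceding sentence and the gravitational potential written in its standard form \(G m_s m / r\):

\[
E = Q + W,\qquad E = mc^{2},\quad Q = kT,\quad W = E_K
\quad\Longrightarrow\quad mc^{2} = kT + E_K ,
\]

and replacing the heat term \(kT\) by the gravitational potential energy gives

\[
mc^{2} = \frac{G\, m_s\, m}{r} + E_K .
\]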
**GNB3** Guanine nucleotide-binding protein G(I)/G(S)/G(T) subunit beta-3 is a protein that in humans is encoded by the GNB3 gene. Heterotrimeric guanine nucleotide-binding proteins (G proteins), which integrate signals between receptors and effector proteins, are composed of an alpha, a beta, and a gamma subunit; these subunits are encoded by families of related genes. This gene encodes a beta subunit. Beta subunits are important regulators of alpha subunits, as well as of certain signal transduction receptors and effectors. A single-nucleotide polymorphism (C825T) in this gene is associated with essential hypertension and obesity. This polymorphism is also associated with the occurrence of the splice variant GNB3-s, which appears to have increased activity. GNB3-s is an example of alternative splicing caused by a nucleotide change outside of the splice donor and acceptor sites. Additional splice variants may exist for this gene, but they have not been fully described.
**Holomatix Rendition** Holomatix Rendition is a discontinued raytracing renderer, broadly compatible with mental ray. Its rendering method is similar to that of FPrime in that it displays a continuously refined rendered image until a final production-quality image is achieved; this differs from traditional rendering methods, where the rendered image is built up block by block. It was developed by Holomatix Ltd, based in London, UK. As of December 2011, the Rendition product has been retired and is no longer available or being updated, and it is no longer mentioned on the developer's web site. Its successor is SprayTrace.

Features: a realtime (or progressive) rendering engine; based on the mental ray shader and lighting model; supports third-party shaders compiled for mental ray.

Rendering features: As it uses the same shader and lighting model as mental ray, Rendition supports the same rendering and ray tracing features as mental ray, including final gather, global illumination (through photon mapping), polygonal and parametric surfaces (NURBS, subdivision), displacement mapping, motion blur, and lens shaders.

Supported platforms: Autodesk Maya, up to and including 2011 SAP; Autodesk 3ds Max, up to and including 2010; Softimage|XSI, up to and including 2011, but not 2011 SP1.
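The "continuously refined image" behaviour described above is generally implemented as progressive sampling. The following Python sketch is an illustrative toy under that assumption, not Rendition's actual engine: each pass averages one more jittered sample per pixel into a running estimate, so a coarse image is available immediately and sharpens over time, in contrast to block-by-block rendering where pixels are only available once final.

```python
import random

def progressive_render(width, height, shade, passes=8):
    """Toy progressive renderer: refine the whole frame pass by pass.

    `shade(x, y)` is any function returning a brightness for a sample point;
    a real renderer would trace a ray into the scene here.
    """
    image = [[0.0] * width for _ in range(height)]
    for n in range(1, passes + 1):
        for y in range(height):
            for x in range(width):
                # Jittered sample within the pixel, averaged into the estimate.
                s = shade(x + random.random(), y + random.random())
                image[y][x] += (s - image[y][x]) / n
        yield n, image   # a displayable image exists after every pass

# Example "scene": a smooth diagonal gradient.
for n, img in progressive_render(4, 2, lambda x, y: (x + y) / 6.0):
    pass  # display img here; the estimate converges as n grows
```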
**The Computer Contradictionary** The Computer Contradictionary is a non-fiction book by Stan Kelly-Bootle that compiles a satirical list of definitions of computer industry terms. It is an example of "cynical lexicography" in the tradition of Ambrose Bierce's The Devil's Dictionary: rather than offering a factual account of usage, its definitions are largely made up by the author. The book was published in May 1995 by MIT Press and is an update of Kelly-Bootle's The Devil's DP Dictionary, which appeared in 1981.

Examples:
Endless loop. See: Loop, endless
Loop, endless. See: Endless loop
Recursion. See: Recursion

Reception: The Los Angeles Times panned the book, writing that it was "smartly-titled" but an "awfully stupid book". ACM Computing Reviews recommended dipping into it, because "a dictionary is a difficult read".
**Concealed carry in the United States** Concealed carry in the United States: Concealed carry, or carrying a concealed weapon (CCW), is the practice of carrying a weapon (such as a handgun) in public in a concealed manner, either on one's person or in close proximity. CCW is often practiced as a means of self-defense. Every state in the United States allows for concealed carry of a handgun either permitless or with a permit, although the difficulty in obtaining a permit varies per jurisdiction. Concealed carry in the United States: There is conflicting evidence regarding the effect that concealed carry has on crime rates. A 2020 review by the RAND Corporation concluded there is supportive evidence that shall-issue concealed carry laws, which require states to issue permits to applicants once certain requirements are met, are associated with increased firearm homicides and total homicides. Earlier studies by RAND found that shall-issue concealed carry laws may increase violent crime overall, while there was inconclusive evidence for the effect of shall-issue laws on all individual types of violent crime. A 2004 literature review by the National Academy of Sciences concluded that there is no link between the existence of laws that allow concealed carry and crime rates. History: The Second Amendment to the United States Constitution guarantees the right to "keep and bear arms". Concealed weapons bans were passed in Kentucky and Louisiana in 1813. (In those days open carry of weapons for self-defense was considered acceptable; concealed carry was denounced as the practice of criminals.) By 1859, Indiana, Tennessee, Virginia, Alabama, and Ohio had followed suit. By the end of the nineteenth century, similar laws were passed in places such as Texas, Florida, and Oklahoma, which protected some gun rights in their state constitutions. Before the mid-1900s, most U.S. states had passed concealed carry laws rather than banning weapons completely. Until the late 1990s, many Southern states were either "No-Issue" or "Restrictive May-Issue". Since then, these states have largely enacted "Shall-Issue" licensing laws, with more than half of the states legalizing "Constitutional carry" (unrestricted concealed carry) and the remaining "May-issue" licensing laws being abolished in 2022 by the U.S. Supreme Court. State laws: Permitting policies Unrestricted jurisdiction: one in which a permit is not required to carry a concealed handgun. All states in this category allow any non-prohibited person to carry regardless of state of residency. State laws: Permit requirement jurisdiction: one in which a permit is required to carry a concealed handgun.Historically, some states were considered "may-issue" jurisdictions where an applicant was required to provide a proper cause or need to be issued a permit to carry a concealed weapon. However, on June 23, 2022, these laws were found unconstitutional by the U.S. Supreme Court in New York State Rifle & Pistol Association, Inc. v. Bruen. State laws: Regulations differ widely by state, with twenty-seven of the fifty states either currently maintaining a constitutional carry policy or implementing it in the near future. 
State laws: The Federal Gun-Free School Zones Act limits where an unlicensed person may carry; carry of a weapon, openly or concealed, within 1,000 feet (300 m) of a school zone is prohibited, with exceptions granted in the federal law to holders of valid state-issued weapons permits (state laws may reassert the illegality of school zone carry by license holders), and under LEOSA to current and honorably retired law enforcement officers (regardless of permit, usually trumping state law). State laws: When in contact with an officer, some states require individuals to inform that officer that they are carrying a handgun.Not all weapons that fall under CCW laws are lethal. For example, in Florida, carrying pepper spray in more than a specified volume (2 oz.) of chemical requires a CCW permit, whereas everyone may legally carry a smaller, “self-defense chemical spray” device hidden on their person without a CCW permit. As of 2021 there have been 21.52 million concealed weapon permits issued in the United States. State laws: * Jurisdiction gives no minimum age to conceal carry in law. Age set at 18 by federal law. State laws: Unrestricted jurisdictions An unrestricted jurisdiction is one in which a permit is not required to carry a concealed handgun. This is sometimes called constitutional carry. Within the unrestricted category, there exist states that are fully unrestricted, where no permit is required for lawful open or concealed carry, and partially unrestricted, where certain forms of concealed carry may be legal without a permit, while other forms of carry may require a permit. State laws: Some states have a limited form of permitless carry, restricted based on one or more of the following: a person's location, the loaded/unloaded state of the firearm, or the specific persons who may carry without a permit. As of February 18, 2021, these states are Illinois, New Mexico, and Washington. Some states that allow permitless concealed carry and still issue concealed carry permits may impose restrictions on concealed carry for certain places and/or at certain times (e.g., special events, large public gatherings, etc.). In some such situations, those holding a valid concealed carry permit may be exempt from such restrictions. State laws: Permit requirement jurisdictions A permit requirement jurisdiction is one in which a government-issued permit is required to carry a concealed handgun in public. Before the U.S. Supreme Court ruling in New York State Rifle & Pistol Association, Inc. v. Bruen, these jurisdictions were further split between "shall-issue", which is the current national licensing standard where the granting of licenses is subject only to meeting determinate criteria laid out in the law, and "may-issue" where the granting of such licenses was at the discretion of local authorities. Since the abolishment of "may-issue" permitting the U.S. Supreme Court has stated it is still legal for U.S. jurisdictions subject to the Constitution to require a permit to carry a concealed handgun, and that background checks, training, and proper fees can be required without violating the Second Amendment to the United States Constitution which guarantees a right of the people to carry a concealed firearm outside of the home. State laws: Concealed carry on U.S. 
military installations While members of the Armed Services may receive extensive small arms training, United States military installations have some of the most restrictive rules for the possession, transport, and carrying of personally-owned firearms in the country. State laws: Overall authority for carrying a personally-owned firearm on a military installation rests with the installation commander, although the authority to permit individuals to carry firearms on an installation is usually delegated to the Provost Marshal. Military installations do not recognize state-issued concealed carry permits, and state firearms laws generally do not apply to military bases, regardless of the state in which the installation is located. Federal law (18 U.S.C. § 930) generally forbids the possession, transport, and carrying of firearms on military installations without approval from the installation commander. Federal law gives installation commanders wide discretion in establishing firearms policies for their respective installations. In practice, local discretion is often constrained by policies and directives from the headquarters of each military branch and major commands. State laws: Installation policies can vary from no-issue for most bases to shall-issue in rare circumstances. Installations that do allow the carrying of firearms typically restrict carrying to designated areas and for specific purposes (e.g., hunting or officially sanctioned shooting competitions in approved locations on the installation). Installation commanders may require that the applicant complete extensive firearms safety training, undergo a mental health evaluation, and obtain a letter of recommendation from their unit commander (or employer) before such authorization is granted. Personnel who reside on a military installation are typically required to store their personally-owned firearms in the installation armory, although the installation commander or provost marshal may permit a service member to store their personal firearms in their on-base dwelling if they have a gun safe or similarly designed cabinet in which the firearms can be secured. State laws: Prior to 2011, military commanders could impose firearms restrictions on servicemembers residing off-base, such as mandatory registration of firearms with the base provost marshal, restricting or banning the carrying of firearms by servicemembers either on or off the installation regardless of whether the member had a state permit to carry, and requiring servicemembers to have a gun safe or similar container to secure firearms when not in use. A provision was included in the National Defense Authorization Act for Fiscal Year 2011 that limited commanders' authority to impose restrictions on the possession and use of personally-owned firearms by service members who reside off-base. State laws: Concealed carry on Native American reservations Concealed carry policies on Native American reservations are covered by the tribal laws of each reservation, which vary widely from "No-Issue" to "Shall-Issue" and "Unrestricted", either in law or in practice. Some Native American tribes recognize concealed carry permits from the state(s) in which the reservation is located, while others do not. For reservations that do not recognize state-issued concealed carry permits, some completely ban concealed carry, while others offer a tribal permit for concealed carry issued by the tribal police or tribal council. 
Tribal concealed carry permits may be available to the general populace or limited to tribal members, depending on tribal policies. Tribal law typically pre-empts state law on the reservation. The only exception is while traversing the reservation on a state-owned highway (including interstate highways, U.S. routes, and in some instances county roads), in which case state law and the federal Firearm Owners' Protection Act (FOPA) apply. State laws: Limitations on concealed carry Prohibitions on the concealed carry of firearms and other weapons by local governments predate the establishment of the United States. In 1686, New Jersey law stated “no person or persons … shall presume privately to wear any pocket pistol … or other unusual or unlawful weapons within this Province.” After the federal government was established, states and localities continued to restrict people from carrying hidden weapons. Tennessee law prohibited this as early as 1821. In 1837, Georgia enacted “An Act to guard and protect the citizens of this State, against the unwarrantable and too prevalent use of deadly weapons.” Two years later, Alabama followed suit with “An Act to Suppress the Evil Practice of Carrying Weapons Secretly.” Delaware prohibited the practice in 1852. Ohio did the same in 1859, a policy that remained in effect until 1974. Cities also regulated weapons within their boundaries. In 1881, Tombstone, Arizona, enacted Ordinance No. 9, "To Provide against Carrying of Deadly Weapons", a regulation that sparked the Gunfight at the O.K. Corral later that year. State laws: Some permit requirement jurisdictions allow issuing authorities to impose limitations on CCW permits, such as the type and caliber of handguns that may be carried (Rhode Island, New Mexico), restrictions on places where the permit is valid (New York, Massachusetts, Illinois), restricting concealed carry to purposes or activities specified on the approved permit application (California, Massachusetts, New Jersey, New York), limitations on magazine size (Connecticut, Massachusetts, New York), or limitations on the number of firearms that may be carried concealed by a permit-holder at any given time (some states). Permits issued by all but two states (New York and Hawaii) are valid statewide. New York State pistol licenses, which are generally issued by counties, are valid statewide with one exception: a permit not issued by New York City is invalid in that city unless validated by its police commissioner. Permits issued by Hawaii are valid only in the county of issuance. State laws: Training requirements Some states require concealed carry applicants to certify their proficiency with a firearm through some type of training or instruction. Certain training courses developed by the National Rifle Association that combine classroom and live-fire instruction typically meet most state training requirements. Some states recognize prior military or police service as meeting training requirements. Classroom instruction typically covers firearm mechanics and terminology, cleaning and maintenance of a firearm, concealed carry legislation and limitations, liability issues, carry methods and safety, home defense, methods for managing and defusing confrontational situations, and practice of gun handling techniques without firing the weapon. Most required CCW training courses devote a considerable amount of time to liability issues. 
State laws: Depending on the state, a practical component, during which the attendee shoots the weapon to demonstrate safety and proficiency, may be required. During range instruction, applicants typically learn and demonstrate safe handling and operation of a firearm and accurate shooting from common self-defense distances. Some states require a certain proficiency to receive a passing grade, whereas other states (e.g., Florida) technically require only a single shot to be fired to demonstrate handgun handling proficiency. State laws: CCW training courses are typically completed in a single day and are good for a set period, the exact duration varying by state. Some states require re-training, sometimes in a shorter, simpler format, for each renewal. State laws: A few states, e.g., South Carolina, recognize the safety and use-of-force training given to military personnel as acceptable in lieu of formal civilian training certification. Such states will ask for a military ID (South Carolina) for active-duty persons or a DD214 for honorably discharged persons. These few states will commonly request a copy of the applicant's BTR (Basic Training Record) proving an up-to-date pistol qualification. Active and retired law enforcement officers are generally exempt from qualification requirements, due to a federal statute permitting qualified active and retired law enforcement officers to carry concealed weapons in the United States. Virginia recognizes eight specific training options to prove competency in handgun handling, ranging from a DD214 for honorably discharged military veterans, to certification from law enforcement training, to firearms training conducted by a state- or NRA-certified firearms instructor, including electronic, video, or online courses. While any one of the eight listed options will be considered adequate proof, individual circuit courts may recognize other training options. A small number of states, such as Alabama and Georgia, have no training requirements to obtain a permit—only a requirement that the applicant successfully pass the required background check before issuance. State laws: Reciprocity Many jurisdictions recognize (honor) a permit or license issued by other jurisdictions. Recognition may be granted to all jurisdictions or to some subset that meets a set of permit-issuing criteria, such as training comparable to the honoring jurisdiction's or certain background checks. Several states have entered into formal agreements to mutually recognize permits. This arrangement is commonly called reciprocity or mutual recognition. A few states do not recognize permits issued by any other jurisdiction, but offer non-resident permits for out-of-state individuals (who possess a valid concealed carry permit from their home state) who wish to carry while visiting such states. There are also states that neither recognize out-of-state concealed carry permits nor issue permits to non-residents, resulting in a complete ban on concealed carry by non-residents in such states. There are also states (Illinois and Rhode Island) that do not recognize out-of-state permits for carry on foot, but do permit individuals with out-of-state concealed carry permits to carry while traveling in their vehicle (normally in accordance with the rules of the state of issuance). State laws: Recognition and reciprocity of concealed carry privileges vary. Some states (e.g. Indiana, Virginia, Ohio) unilaterally recognize all permits. 
Others, such as Michigan, limit such universal recognition to residents of the permit-issuing state. While 37 states have reciprocity agreements with at least one other state and several states honor all out-of-state concealed carry permits, some states have special requirements, like training courses or safety exams, and therefore do not honor permits from states that do not have such requirements for issue. Some states make exceptions for persons under the minimum age (usually 21) if they are active or honorably-discharged members of the military or a police force (the second of these two being subject additionally to federal law). States that do not have this exemption generally do not recognize any license from states that do. An example is the state of Washington's refusal to honor any Texas License to Carry (LTC), as Texas has the military exception to the minimum age. Idaho, Mississippi, North Dakota, South Dakota, and Tennessee have standard and enhanced permits with different requirements to obtain and different reciprocity with other states; Utah and West Virginia have provisional permits for 18-20 year olds with more limited recognition by other states. Permits from Idaho (enhanced), Kansas, Michigan, North Dakota (class 1), and North Carolina are recognized by the highest number of other states (39). One can obtain multiple state permits in an effort to increase the number of states in which one can legally carry a concealed weapon. It is common practice to use a CCW reciprocity map to gain clarity on which states will honor a person's combination of resident and non-resident permits, given the variety of standards and legal policies from state to state. There are also various mobile applications that guide users in researching state concealed carry permit reciprocity. State laws: Although carry may be legal under state law in accordance with reciprocity agreements, the Federal Gun Free School Zones Act subjects an out-of-state permit holder to federal felony prosecution if they carry a firearm within 1,000 feet of any K–12 school's property line; enforcement of this statute is rare, however, given several states' nullification statutes prohibiting state law enforcement officers from enforcing federal firearms laws. Still, states may have their own similar statutes that such officers will enforce, potentially exposing the carrier to later prosecution under the Act. State laws: Restricted premises While generally a concealed carry permit allows the permit holder to carry a concealed weapon in public, a state may restrict carry of a firearm, including a permitted concealed weapon, while in or on certain properties, facilities, or types of businesses that are otherwise open to the public. These areas vary by state (except for the first item below; federal offices are subject to superseding federal law) and can include: Federal government facilities, including post offices, IRS offices, federal court buildings, military/VA facilities and/or correctional facilities, Amtrak trains and facilities, and Corps of Engineers-controlled property (carry in these places is prohibited by federal law, which preempts any existing state law). Carry on National Park Service and Fish and Wildlife Service land (federal parks and wildlife refuges) is allowed by federal law as of the 2009 CARD Act, but is still subject to state law. 
However, carry into restrooms or any other buildings or structures located within federal parks remains illegal, despite concealed carry being otherwise legal in federal parks with a permit recognized by the state in which the federal park is located. Similarly, concealed carry into caves located within federal parks is illegal. State laws: State and local government facilities, including courthouses, DMV/DoT offices, police stations, correctional facilities, and/or meeting places of government entities (exceptions may be made for certain persons working in these facilities, such as judges, lawyers, and certain government officials, both elected and appointed); venues for political events, including rallies, parades, debates, and/or polling places; educational institutions, including elementary/secondary schools and colleges (some states have "drop-off exceptions" which only prohibit carry inside school buildings, or permit carry while inside a personal vehicle on school property; campus carry laws vary by state); public interscholastic and/or professional sporting events and/or venues (sometimes only during a time window surrounding such an event); amusement parks, fairs, parades, and/or carnivals; businesses that sell alcohol (sometimes only "by-the-drink" sellers like restaurants, sometimes only establishments defined as a "bar" or "nightclub", or establishments where the percentage of total sales from alcoholic beverages exceeds a specified threshold); hospitals (even if hospitals themselves are not restricted, "teaching hospitals" partnered with a medical school are sometimes considered "educational institutions"; exceptions are sometimes made for medical professionals working in these facilities); churches, mosques, and other houses of worship, usually at the discretion of the clergy (Ohio allows carry with the specific permission of the house of worship); municipal mass transit vehicles or facilities; sterile areas of airports (sections of the airport located beyond security screening checkpoints), unless explicitly authorized; non-government facilities with heightened security measures (nuclear facilities, power plants, dams, oil and gas production facilities, banks, factories), unless explicitly authorized; aboard aircraft or ships, unless specifically authorized by the pilot in command or ship captain; private property where the lawful owner or lessee has posted a sign or verbally stated that firearms are not permitted; and any public place, while under the influence of alcohol or drugs (including certain prescription or over-the-counter medications, depending on jurisdiction). "Opt-out" statutes ("gun-free zones") Some states allow private businesses to post a specific sign prohibiting concealed carry within their premises. The exact language and format of such a sign varies by state. By posting the signs, businesses create areas where it is illegal to carry a concealed handgun—similar to regulations concerning schools, hospitals, and public gatherings. State laws: Violation of such a sign, in some of these states, is grounds for revocation of the offender's concealed carry permit and criminal prosecution. Other states, such as Virginia, enforce only trespassing laws when a person violates a "Gun Free Zone" sign. 
In some jurisdictions, trespass by a person carrying a firearm may carry more severe penalties than "simple" trespass, while in other jurisdictions penalties are lower than for simple trespass. Such states include: Arizona, Arkansas, Connecticut, Illinois, Kansas, Louisiana, Michigan, Minnesota, Missouri, Nebraska, Nevada, New Mexico, North Carolina, Ohio, Oklahoma, South Carolina, Tennessee, Texas, Virginia, and Wisconsin. State laws: There is considerable dispute over the effectiveness of such "gun-free zones". Opponents of such measures, such as OpenCarry.org, state that, much like other malum prohibitum laws banning gun-related practices, only law-abiding individuals will heed the signage and disarm. Individuals or groups intent on committing far more serious crimes, such as armed robbery or murder, will not be deterred by signage prohibiting weapons. Further, the reasoning follows that those wishing to commit mass murder might intentionally choose gun-free venues like shopping malls, schools, and churches (where weapons carry is generally prohibited by statute or signage) because the population inside is disarmed and thus less able to stop them. In some states, business owners have been documented posting signs that appear to prohibit guns, but legally do not, because the signs do not meet local or state laws defining the required appearance, placement, or wording of signage. Such signage can be posted out of ignorance of the law, or out of an intent to pacify gun control advocates while not actually prohibiting the practice. The force of law behind a non-compliant sign varies based on state statutes and case law. Some states interpret their statutes' high level of specification of signage as evidence that the signage must meet the specification exactly, and any quantifiable deviation from the statute makes the sign non-binding. Other states have decided in case law that if efforts were made in good faith to conform to the statutes, the sign carries the force of law even if it fails to meet current specifications. Still others have such lax descriptions of what constitutes a valid sign that virtually any sign that can be interpreted as "no guns allowed" is binding on the license holder. Note that virtually all jurisdictions allow some form of oral communication by the lawful owner or controller of the property that a person is not welcome and should leave. This notice can be given to anyone for any reason (except for statuses that are protected by the federal Civil Rights Act of 1964 and other CRAs, such as race), including the carrying of firearms by that person, and refusal to heed such a request to leave may constitute trespassing. State laws: Brandishing and printing Printing refers to a circumstance in which the shape or outline of a firearm is visible through a garment while the gun is still fully covered, and is generally not desired when carrying a concealed weapon. Brandishing can refer to different actions, depending on jurisdiction. These actions can include printing through a garment, pulling back clothing to expose a gun, or unholstering a gun and exhibiting it in the hand. The intent to intimidate or threaten someone may or may not be required legally for an act to be considered brandishing. State laws: Brandishing is a crime in most jurisdictions, but the definition of brandishing varies widely. 
State laws: Under California law, the following conditions have to be present to prove brandishing: [1] A person, in the presence of another person, drew or exhibited a [deadly weapon, other than a firearm] [firearm, whether loaded or unloaded]; [and] [2] That person did so in a rude, angry, or threatening manner [or] [2] That person, in any manner, unlawfully used the [deadly weapon] [firearm] in a fight or quarrel [.] [; and [3] The person was not acting in lawful self-defense.] In Virginia law: It shall be unlawful for any person to point, hold or brandish any firearm or any air or gas operated weapon or any object similar in appearance, whether capable of being fired or not, in such manner as to reasonably induce fear in the mind of another or hold a firearm or any air or gas operated weapon in a public place in such a manner as to reasonably induce fear in the mind of another of being shot or injured. However, this section shall not apply to any person engaged in excusable or justifiable self-defense. Federal law: Gun Control Act of 1968 The Gun Control Act passed by Congress in 1968 lists felons, illegal aliens, and other statutorily specified categories of persons as prohibited from purchasing or possessing firearms. During the application process for concealed carry, states carry out thorough background checks to prevent these individuals from obtaining permits. Additionally, the Brady Handgun Violence Prevention Act of 1993 mandated the FBI-maintained National Instant Criminal Background Check System, launched in 1998, for checking the backgrounds of potential firearms buyers in an effort to prevent such individuals from obtaining weapons. Federal law: Firearm Owners Protection Act The Firearm Owners Protection Act (FOPA) of 1986 allows a gun owner to travel through states in which their firearm possession is illegal as long as it is legal in the states of origin and destination, the owner is in transit and does not remain in the state in which firearm possession is illegal, and the firearm is transported unloaded and in a locked container. FOPA addresses the transport of private firearms from origin to destination for purposes lawful in the states of origin and destination; it does not authorize concealed carry as a means of defense during transit. The New York State Police have arrested travelers carrying firearms in violation of state law, leaving them to raise FOPA as an affirmative defense to the charges of illegal possession. Federal law: Law Enforcement Officers Safety Act In 2004, the United States Congress enacted the Law Enforcement Officers Safety Act, 18 U.S. Code 926B and 926C. This federal law allows two classes of persons – the "qualified law enforcement officer" and the "qualified retired law enforcement officer" – to carry a concealed firearm in any jurisdiction in the United States, regardless of any state or local law to the contrary, with the exception of areas where all firearms are prohibited without permission, and certain Title II weapons. Federal law: Federal Gun Free School Zones Act The Federal Gun Free School Zones Act limits where a person may legally carry a firearm. It does this by making it generally unlawful for an armed citizen to be within 1,000 feet (extending out from the property lines) of a place that the individual knows, or has reasonable cause to believe, is a K–12 school. 
Although a state-issued carry permit may exempt a person from this restriction in the state that physically issued the permit, it does not exempt them in other states that recognize their permit under reciprocity agreements made with the issuing state. Federal law: Federal property Some federal statutes restrict the carrying of firearms on the premises of certain federal properties such as military installations or land controlled by the USACE. Federal law: National park carry On May 22, 2009, President Barack Obama signed H.R. 627, the "Credit Card Accountability Responsibility and Disclosure Act of 2009", into law. The bill contained a rider introduced by Senator Tom Coburn (R-OK) that prohibits the Secretary of the Interior from enacting or enforcing any regulations that restrict possession of firearms in national parks or wildlife refuges, as long as the person complies with the laws of the state in which the unit is found. This provision was supported by the National Rifle Association and opposed by the Brady Campaign to Prevent Gun Violence, the National Parks Conservation Association, and the Coalition of National Park Service Retirees, among other organizations. In February 2010, concealed handguns became legal for the first time in all but three of the nation's 391 national parks and wildlife refuges, so long as all applicable federal, state, and local regulations are adhered to. Hawaii is a notable exception: concealed and open carry are both illegal in Hawaii for all except retired military or law enforcement personnel. Previously, firearms were allowed into parks only if cased and unloaded. Federal law: Full faith and credit (CCW permits) Attempts were made in the 110th Congress, in the United States House of Representatives (H.R. 226) and the United States Senate (S. 388), to enact legislation to compel complete reciprocity for concealed carry licenses. Opponents of national reciprocity have pointed out that this legislation would effectively require states with more restrictive standards of permit issuance (e.g., training courses, safety exams, "good cause" requirements, et al.) to honor permits from states with more liberal issuance policies. Supporters have pointed out that the same situation already occurs with marriage certificates, adoption decrees, and other state documents under the "full faith and credit" clause of the U.S. Constitution. Some states have already adopted a "full faith and credit" policy, treating out-of-state carry permits the same as out-of-state driver's licenses or marriage certificates without federal legislation mandating such a policy. In the 115th Congress, another universal reciprocity bill, the Concealed Carry Reciprocity Act of 2017, was introduced by Richard Hudson. The bill passed the House but did not get a vote in the Senate. Legal issues: Court rulings Prior to the 1897 Supreme Court case Robertson v. Baldwin, the federal courts had been silent on the issue of concealed carry. In dicta from that maritime law case, the Supreme Court commented that state laws restricting concealed weapons do not infringe upon the right to bear arms protected by the federal Second Amendment. However, in the context of such rulings, open carry of firearms was generally unrestricted in the jurisdictions in question, which provided an alternative means of "bearing" arms. Legal issues: In the majority decision in the 2008 Supreme Court case District of Columbia v. Heller, Justice Antonin Scalia wrote: "Like most rights, the Second Amendment right is not unlimited. 
It is not a right to keep and carry any weapon whatsoever in any manner whatsoever and for whatever purpose: For example, concealed weapons prohibitions have been upheld under the Amendment or state analogues ... The majority of the 19th-century courts to consider the question held that prohibitions on carrying concealed weapons were lawful under the Second Amendment or state analogues." Legal issues: Heller was a landmark case because, for the first time in United States history, a Supreme Court decision defined the right to bear arms as constitutionally guaranteed to private citizens rather than a right restricted to "well regulated militia[s]". The Justices asserted that sensible restrictions on the right to bear arms are constitutional; however, an outright ban on a specific type of firearm, in this case handguns, was unconstitutional. The Heller decision was limited in that it applied only to federal enclaves such as the District of Columbia. On June 28, 2010, the U.S. Supreme Court struck down the handgun ban enacted by the city of Chicago, Illinois, in McDonald v. Chicago, incorporating the Second Amendment against state and local governments through the Fourteenth Amendment and effectively extending the Heller decision nationwide. Various circuit courts have since upheld local and state laws using intermediate scrutiny, while gun-rights advocates have argued that strict scrutiny is the correct standard of review for such "fundamental" and "individual" rights. Banning handguns in any jurisdiction has the effect of rendering invalid any licensed individual's ability to carry concealed in that area, except for federally exempted retired and current law enforcement officers and other government employees acting in the discharge of their official duties. Legal issues: In 2022, the Supreme Court ruled in New York State Rifle & Pistol Association, Inc. v. Bruen, that the Second Amendment does protect "an individual's right to carry a handgun for self-defense outside the home." The case struck down New York's strict law requiring people to show "proper cause" in order to get a concealed weapons permit, and could affect similar laws in other states such as California, Hawaii, Maryland, Massachusetts, New Jersey, and Rhode Island. Shortly after the Supreme Court ruling, the attorneys general of California, Hawaii (concealed carry licenses only), Maryland, Massachusetts, New Jersey, and Rhode Island (permits issued by municipalities only) issued guidance that their "proper cause" or similar requirements would no longer be enforced. Legal issues: Legal liability Even when self-defense is justified, there can be serious civil or criminal liabilities related to self-defense when a concealed carry permit holder brandishes or fires their weapon. For example, if innocent bystanders are hurt or killed, there could be both civil and criminal liabilities even if the use of deadly force was completely justified. Some states technically allow an assailant who is shot by a gun owner to bring civil action. In some states, liability is present when a resident brandishes the weapon, threatens use, or exacerbates a volatile situation, or when the resident is carrying while intoxicated. In many jurisdictions, simply pointing a firearm at a person constitutes felony assault with a deadly weapon unless circumstances justify a demonstration of force. 
A majority of states that allow concealed carry, however, forbid suits being brought in such cases, either by barring lawsuits for damages resulting from a criminal act on the part of the plaintiff, or by granting the gun owner immunity from such a civil suit if it is found that they were justified in shooting. Legal issues: Simultaneously, the increased passage of "Castle Doctrine" laws allows persons who own firearms and/or carry them concealed to use them without first attempting to retreat. The "Castle Doctrine" typically applies to situations within the confines of one's own home. Nevertheless, many states have adopted escalation-of-force laws along with provisions for concealed carry. These include the necessity to first verbally warn a trespasser or lay hands on a trespasser before a shooting is justified (unless the trespasser is armed or reasonably assumed to be so). This escalation of force does not apply if the shooter reasonably believes a violent felony has been or is about to be committed on the property by the trespasser. Additionally, some states have a duty-to-retreat provision, which requires a permit holder, especially in public places, to remove themselves from a potentially dangerous situation before resorting to deadly force. The duty to retreat does not strictly apply in a person's home or business, though escalation of force may still be required. In 1895, the Supreme Court ruled in Beard v. United States that if an individual does not provoke an assault and is in a place they have a right to be, then they may use considerable force against someone they reasonably believe may do them serious harm, without being charged with murder or manslaughter should that person be killed. Further, in Texas and California, homicide may be justifiable solely in defense of property. In other states, lethal force is authorized only when serious harm is presumed to be imminent. Legal issues: Even given these relaxed restrictions on the use of force, using a handgun must still be a last resort in some jurisdictions, meaning the user must reasonably believe that nothing short of deadly force will protect the life or property at stake in a situation. Additionally, civil liability for errors that cause harm to others still exists, although civil immunity is provided in the "Castle Doctrine" laws of some states (e.g., Texas). Legal issues: Penalties for carrying illegally Criminal possession of a weapon is the unlawful possession of a weapon by a citizen. Many societies, both past and present, have placed restrictions on what forms of weaponry private citizens (and to a lesser extent police) are allowed to purchase, own, and carry in public. Such crimes are public order crimes and are considered mala prohibita, in that the possession of a weapon in and of itself is not evil. Rather, the potential for use in acts of unlawful violence creates a possible need to control them. Some restrictions are strict liability, whereas others require some element of intent to use the weapon for an illegal purpose. Some regulations allow a citizen to obtain a permit or other authorization to possess the weapon under certain circumstances. Lawful uses of weapons by civilians commonly include hunting, sport, collection, and self-preservation. Legal issues: The penalties for carrying a firearm in an unlawful manner vary widely from state to state, and may range from a simple infraction punishable by a fine to a felony conviction and mandatory incarceration. 
An individual may also be charged and convicted of criminal charges other than unlawful possession of a firearm, such as assault, disorderly conduct, disturbing the peace, or trespassing. In the case of an individual with no prior criminal convictions, the state of Tennessee classifies the unlawful concealed carry of a loaded handgun as a Class C misdemeanor punishable by a maximum of 30 days imprisonment and/or a $500 fine. In New York State, by contrast, a similar crime committed by an individual with no criminal convictions is classified as a Class D felony, punishable by a mandatory minimum of 3.5 years imprisonment, to a maximum of 7 years. As New York State does not recognize any pistol permits issued in other states, the statute applies to any individual who does not have a valid New York State-issued concealed carry permit, even if such individual has a valid permit issued in another jurisdiction. In addition, the New York State statutory definition of a "loaded firearm" differs significantly from what may be commonly understood: simply possessing any ammunition along with a weapon capable of firing such ammunition satisfies the legal definition of a loaded firearm in New York. The large variability of state carry laws has resulted in confusing circumstances in which a person from Vermont (which requires no license of any kind to carry a concealed weapon by anyone who is not prohibited by law) could unwittingly travel into the adjacent state of New York and, despite acting entirely within the law of Vermont, face a mandatory 3.5-year prison sentence simply for crossing the state border. These circumstances are aggravated by the fact that many New York State police departments, as well as the New York State Police, do not recognize the protections granted federally under the Firearm Owners Protection Act, which was intended to prevent such prosecutions. Effect on crime and deaths: Research has had mixed results, indicating variously that right-to-carry laws have no impact on violent crime, that they increase violent crime, and that they decrease violent crime. Effect on crime and deaths: A 2020 study by the RAND Corporation of over 200 combinations of gun policies and outcomes found "supportive evidence that shall-issue concealed carry laws are associated with increased firearm homicides and total homicides". They also found supportive evidence that child access prevention laws reduce firearm homicides and self-injuries among youth, and further evidence supporting the conclusion that stand-your-ground laws are associated with increased levels of firearm homicides. The researchers credit a much greater investment in gun safety research over the past few years with providing this and other more recent studies with stronger and more reliable evidence. A 2004 review of the existing literature by the National Academy of Sciences found that the results of existing studies were sensitive to the specification and time period examined, and concluded that a causal link between right-to-carry laws and crime rates cannot be shown. Quinnipiac University economist Mark Gius summarized literature published between 1993 and 2005, and found that ten papers suggested that permissive CCW laws reduce crime, one paper suggested they increase crime, and nine papers showed no definitive results. 
A 2017 review of the existing literature concluded, "Given the most recent evidence, we conclude with considerable confidence that deregulation of gun carrying over the last four decades has undermined public safety—which is to say that restricting concealed carry is one gun regulation that appears to be effective." A 2016 study in the European Economic Review which examined the conflicting claims in the existing literature concluded that the evidence that CCW either increases or decreases crime on average "seems weak"; the study's model found "some support to the law having a negative (but with a positive trend) effect on property crimes, and a small but positive (and increasing) effect on violent crimes". The Washington Post fact-checker concluded that it could not state that CCW laws reduced crime, as the evidence was murky and in dispute. In a 2017 article in the journal Science, Stanford University law professor John Donohue and Duke University economist Philip J. Cook write that "there is an emerging consensus that, on balance, the causal effect of deregulating concealed carry (by replacing a restrictive law with an RTC law) has been to increase violent crime". Donohue and Cook argue that the crack epidemic made it difficult to determine the causal effects of CCW laws and that this made earlier results inconclusive; recent research does not suffer the same challenges with causality. A 2018 RAND review of the literature concluded that concealed carry either has no impact on crime or that it may increase violent crime. The review said, "We found no qualifying studies showing that concealed-carry laws decreased [violent crime]." A 2020 study in PNAS found that right-to-carry laws were associated with higher firearm deaths. A 2019 panel study published in the Journal of General Internal Medicine by medical researchers including Michael Siegel of the Boston University School of Public Health and David Hemenway of the Harvard T.H. Chan School of Public Health found that "shall issue" concealed carry laws were associated with a 9% increase in homicides. A 2019 study in the American Journal of Public Health found that greater restrictions on concealed carry laws were associated with decreases in workplace homicide rates. Another 2019 study in the American Journal of Public Health found that states with right-to-carry laws had a 29% higher rate of firearm workplace homicides. A 2019 study in the Journal of Empirical Legal Studies found that right-to-carry laws led to an increase in overall violent crime. A 2017 study in the American Journal of Public Health found that "shall-issue laws" (where concealed carry permits must be given if criteria are met) "are associated with significantly higher rates of total, firearm-related, and handgun-related homicide" than "may-issue laws" (where local law enforcement have discretion over who can get a concealed carry permit). A 2011 study found that aggravated assaults increase when concealed carry laws are adopted. A 2019 study in the Journal of the American College of Surgeons found "no statistically significant association between the liberalization of state level firearm carry legislation over the last 30 years and the rates of homicides or other violent crime." This is in line with a 1997 study of county-level data from 1977 to 1992, which concluded that allowing citizens to carry concealed weapons deters violent crime and appears to produce no increase in accidental deaths. 
A 2018 study in The Review of Economics and Statistics found that the impact of right-to-carry laws was mixed and changed over time. RTC laws increased some crimes over some periods while decreasing other crimes over other periods. The study suggested that conclusions drawn in other studies are highly dependent on the time periods that are studied, the types of models that are adopted, and the assumptions that are made. A 2015 study that looked at issuance rates of concealed-carry permits and county-level changes in violent crime in four shall-issue states found no increases or decreases in violent crime rates with changes in permit issuances. A 2019 study in the International Review of Law and Economics found that with one method, right-to-carry laws had no impact on violent crime, but with another method led to an increase in violent crime; neither method showed that right-to-carry laws led to a reduction in crime. A 2003 study found no significant changes in violent crime rates among 58 Florida counties with increases in concealed-carry permits. A 2004 study found no significant association between homicide rates and shall-issue concealed carry laws. A 2013 study of eight years of Texas data found that concealed handgun licensees were much less likely to be convicted of crimes than were nonlicensees. The same study found that licensees' convictions were more likely to be for less common crimes, "such as sexual offenses, gun offenses, or offenses involving a death." A 2020 study in Applied Economics Letters examining concealed-carry permits per capita by state found a significant negative effect on violent crime rates. A 2016 study found a significant negative effect on violent crime rates with the passage of shall-issue laws. A 2017 study in Applied Economics Letters found that property crime decreased in Chicago after the implementation of the shall-issue concealed carry law. A 2014 Applied Economics Letters study found that states with more permissive concealed carry laws had lower murder rates than states with restrictive laws. Another 2014 study found that RTC laws by state significantly reduce homicide rates. In 1996, economists John R. Lott, Jr. and David B. Mustard analyzed crime data for all 3,054 counties in the United States from 1977 to 1992, finding that counties with shall-issue licensing laws overall saw murders decrease by 7.65 percent, rapes decrease by 5.2 percent, aggravated assaults decrease by 7 percent, and robberies decrease by 2.2 percent. The study was widely disputed by numerous economists. The 2004 National Academy of Sciences panel reviewing the research on the subject concluded, with one dissenting panelist, that the Lott and Mustard study was unreliable. Georgetown University professor Jens Ludwig, Daniel Nagin of Carnegie Mellon University, and Dan A. Black of the University of Chicago, writing in The Journal of Legal Studies, said of the Lott-Mustard study, "once Florida is removed from the sample, there is no longer any detectable impact of right-to-carry laws on the rates of murder and rape". A 2022 study examining the Sullivan Act of 1911 found that the law had no impact on overall homicide rates, reduced overall suicide rates, and caused a large and sustained decrease in gun-related suicide rates. Effect on crime and deaths: Firearms permit holders in active shooter incidents In 2016, the FBI analyzed 40 "active shooter incidents" in 2014 and 2015 where bystanders were put in peril in ongoing incidents that could be affected by police or citizen response. 
Six incidents were successfully ended when citizens intervened. In two, citizens restrained the shooters: one unarmed, one using pepper spray. In two stops at schools, the shooters were confronted by teachers: one shooter was disarmed; the other committed suicide. In two stops, citizens with firearms permits exchanged gunfire with the shooter. In one failed stop attempt, a citizen with a firearms permit was killed by the shooter. In 2018, the FBI analyzed 50 active shooter incidents in 2016 and 2017. This report focused on policies to neutralize active shooters to save lives. In 10 incidents, citizens confronted an active shooter. In eight of those incidents, the citizens stopped the shooter. Four stops involved unarmed citizens who confronted and restrained or blocked the shooter, or talked the shooter into surrender. Four stops involved citizens with firearms permits: two exchanged gunfire with a shooter and two detained the shooter at gunpoint for arrest by responding police. Of the two failed stops, one involved a permit holder who exchanged gunfire with the shooter, but the shooter fled and continued shooting; the other involved a permit holder who was wounded by the shooter. "Armed and unarmed citizens engaged the shooter in 10 incidents. They safely and successfully ended the shootings in eight of those incidents. Their selfless actions likely saved many lives."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pharyngealization** Pharyngealization: Pharyngealization is a secondary articulation of consonants or vowels by which the pharynx or epiglottis is constricted during the articulation of the sound. IPA symbols: In the International Phonetic Alphabet, pharyngealization can be indicated by one of two methods: A tilde or swung dash (IPA Number 428) is written through the base letter (typographic overstrike). It is the older and more generic symbol. It indicates velarization, uvularization or pharyngealization, as in [ᵶ], the guttural equivalent of [z]. The symbol ⟨ˤ⟩ (IPA Number 423) – a superscript variant of ⟨ʕ⟩, the voiced pharyngeal approximant – is written after the base letter. It indicates specifically a pharyngealized consonant, as in [tˤ], a pharyngealized [t]. Computing codes: Since Unicode 1.1, there have been two similar superscript characters: the IPA ⟨ˤ⟩ (U+02E4 MODIFIER LETTER SMALL REVERSED GLOTTAL STOP) and the Semiticist ⟨ˁ⟩ (U+02C1 MODIFIER LETTER REVERSED GLOTTAL STOP). U+02E4 is formally a superscript ⟨ʕ⟩ (U+0295 LATIN LETTER PHARYNGEAL VOICED FRICATIVE, = reversed glottal stop), and in the Unicode charts it looks like a simple superscript ⟨ʕ⟩, though in some fonts it looks like a superscript reversed lower-case glottal stop ⟨ɂ⟩. U+02C1 is a typographic alternative to ⟨ʿ⟩ (U+02BF MODIFIER LETTER LEFT HALF RING), which is used to transliterate the Semitic consonant ayin and which is the reverse of ⟨ʾ⟩, which itself transliterates the glottal Semitic consonants aleph and hamza. In the Unicode charts, U+02C1 looks like a reversed ⟨ˀ⟩ (U+02C0 MODIFIER LETTER GLOTTAL STOP), which is used in the IPA for glottalization. There is no parallel Unicode distinction for the modifier glottal stop. The IPA Handbook lists U+02E4 as the Unicode equivalent of IPA Number 423, the dedicated IPA symbol for pharyngealization. The superimposed tilde is assigned Unicode character U+0334. This was originally intended to combine with other letters to represent pharyngealization. However, that usage is now deprecated (though still functional), and several precomposed letters have been adopted to replace it. These are the labial consonants ⟨ᵱ ᵬ ᵮ ᵯ⟩ and the coronal consonants ⟨ᵵ ᵭ ᵴ ᵶ ᵰ ᵲ ᵳ ɫ⟩. (A short code sketch at the end of this article shows how these codepoints can be inspected programmatically.) Usage: Ubykh, an extinct Northwest Caucasian language spoken in Russia and Turkey, had 14 pharyngealized consonants. Chilcotin has pharyngealized consonants that trigger pharyngealization of vowels. Many languages (such as Salishan and Sahaptian languages) in the Plateau culture area of North America also have pharyngealization processes, triggered by pharyngeal or pharyngealized consonants, which affect vowels. The Tuu/“Khoisan” language Taa (or !Xóõ) has pharyngealized vowels that contrast phonemically with voiced, breathy and epiglottalized vowels. That feature is represented in the orthography by a tilde under the respective pharyngealized vowel. In Tuu languages, epiglottalized vowels are phonemic. For many languages, pharyngealization is generally associated with more dental articulations of coronal consonants. Dark l tends to be dental or denti-alveolar, but clear l tends to be retracted to an alveolar position. Arabic and Syriac use secondary uvularization, which is generally not distinguished from pharyngealization, for the "emphatic" coronal consonants. Usage: Examples of pharyngealized consonants (Uvularized consonants are not distinguished.) 
Stops: pharyngealized voiceless alveolar stop [tˤ] (in Chechen, Berber, Arabic, Kurmanji, Mizrahi and Classical Hebrew); pharyngealized voiced alveolar stop [dˤ] (in Chechen, Tamazight and Arabic); pharyngealized voiceless bilabial stop [pˤ] (in Kurmanji, Chechen and Ubykh); pharyngealized voiced bilabial stop [bˤ] (in Chechen, Ubykh, Siwa, Shihhi Arabic and Iraqi Arabic; allophonic in Adyghe and Kabardian); pharyngealized voiceless uvular stop [qˤ] (in Ubykh, Tsakhur, and Archi); pharyngealized voiced uvular stop [ɢˤ] (in Tsakhur); pharyngealized glottal stop [ʔˤ] (in Shihhi Arabic; allophonic in Chechen); pharyngealized voiceless velar plosive [kˤ] (in Kurmanji).
Fricatives: pharyngealized voiceless alveolar sibilant [sˤ] (in Chechen, Kurmanji, Arabic, Classical Hebrew and Northern Berber); pharyngealized voiced alveolar sibilant [zˤ] (in Chechen, Berber, Arabic and Kurmanji); pharyngealized voiceless postalveolar fricative [ʃˤ] (in Chechen; also a hypercorrection of the Modern Hebrew [t͡ʃ]); pharyngealized voiced postalveolar fricative [ʒˤ] (in Chechen); pharyngealized voiceless dental fricative [θˤ] (as [θ̬ˤ], a variant pronunciation in Mehri); pharyngealized voiced dental fricative [ðˤ] (in Arabic: ظ); pharyngealized voiced alveolar lateral fricative [ɮˤ] (in Classical Arabic); pharyngealized voiceless labiodental fricative [fˤ]; pharyngealized voiced labiodental fricative [vˤ] (in Ubykh); pharyngealized voiceless uvular fricative [χˤ] (in Ubykh, Tsakhur, Archi and Bzyb Abkhaz); pharyngealized voiced uvular fricative [ʁˤ] (in Ubykh, Tsakhur and Archi); pharyngealized voiceless glottal fricative [hˤ] (in Tsakhur).
Affricates: pharyngealized voiceless alveolar affricate [tsˤ] (in Chechen); pharyngealized voiced alveolar affricate [dzˤ] (in Chechen); pharyngealized voiceless postalveolar affricate [tʃˤ] (in Chechen and Kurmanji); pharyngealized voiced postalveolar affricate [dʒˤ] (in Chechen).
Trills: pharyngealized voiced alveolar trill [rˁ] (in Chechen and Siwa).
Nasals: pharyngealized bilabial nasal [mˤ] (in Chechen, Ubykh, Moroccan Darija, and Iraqi Arabic); pharyngealized alveolar nasal [nˤ] (in Chechen).
Approximants: pharyngealized labialized velar approximant [wˤ] (in Shihhi Arabic, Chechen and Ubykh); pharyngealized alveolar lateral approximant [lˤ] (in Chechen and Northern Standard Dutch); pharyngealized labialized postalveolar approximant [ɹ̠ˤʷ] (a variant in American English); pharyngealized velar approximant [ɹ̈ˤ], with the body of the tongue bunched up at the velum (in some dialects of American English and Dutch).
Examples of pharyngealized vowels: pharyngealized open-mid back rounded vowel [ɔˤ] (in Northern Standard Dutch); pharyngealized vowels in the Air Tamajeq language.
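The codepoints discussed under Computing codes above can be checked directly against a programming language's Unicode tables. The following short Python sketch (an illustration added here, not part of the original article; any Python 3 interpreter with the standard library suffices) prints the relevant characters with their official Unicode names via the unicodedata module, and contrasts the deprecated overstrike notation with the modern superscript notation:

```python
import unicodedata

# Codepoints named in the "Computing codes" section.
for cp in (0x02E4, 0x02C1, 0x0295, 0x02BF, 0x02C0, 0x0334):
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch)}")

# Older, deprecated overstrike notation: base letter + U+0334
# (comparable to the precomposed letter [z with tilde overlay]).
print("z\u0334")
# Modern notation: base letter + superscript reversed glottal stop U+02E4,
# giving a pharyngealized [t].
print("t\u02E4")
```

Whether the overstruck form renders as a single glyph depends on the font, which is one practical reason the precomposed letters were adopted.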
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Correlation function (statistical mechanics)** Correlation function (statistical mechanics): In statistical mechanics, the correlation function is a measure of the order in a system, as characterized by a mathematical correlation function. Correlation functions describe how microscopic variables, such as spin and density, at different positions are related. More specifically, correlation functions quantify how microscopic variables co-vary with one another on average across space and time. A classic example of such spatial correlations is in ferro- and antiferromagnetic materials, where the spins prefer to align parallel and antiparallel with their nearest neighbors, respectively. Definitions: The most common definition of a correlation function is the canonical ensemble (thermal) average of the scalar product of two random variables, s1 and s2, at positions R and R+r and times t and t+τ: C(r,τ) = ⟨s1(R,t) ⋅ s2(R+r,t+τ)⟩ − ⟨s1(R,t)⟩⟨s2(R+r,t+τ)⟩. Here the brackets, ⟨⋅⟩, indicate the above-mentioned thermal average. It is a matter of convention whether one subtracts the uncorrelated average product of s1 and s2, ⟨s1(R,t)⟩⟨s2(R+r,t+τ)⟩, from the correlated product, ⟨s1(R,t)⋅s2(R+r,t+τ)⟩, with the convention differing among fields. The most common uses of correlation functions are when s1 and s2 describe the same variable, such as a spin-spin correlation function, or a particle position-position correlation function in an elemental liquid or a solid (often called a radial distribution function or a pair correlation function). Correlation functions between the same random variable are autocorrelation functions. However, in statistical mechanics, not all correlation functions are autocorrelation functions. For example, in multicomponent condensed phases, the pair correlation function between different elements is often of interest. Such mixed-element pair correlation functions are an example of cross-correlation functions, as the random variables s1 and s2 represent the average variations in density as a function of position for two distinct elements. Definitions: Equilibrium equal-time (spatial) correlation functions Often, one is interested solely in the spatial influence of a given random variable, say the direction of a spin, on its local environment, without considering later times, τ. In this case, we neglect the time evolution of the system, so the above definition is re-written with τ = 0. This defines the equal-time correlation function, C(r,0). It is written as: C(r,0) = ⟨s1(R,t) ⋅ s2(R+r,t)⟩ − ⟨s1(R,t)⟩⟨s2(R+r,t)⟩. Often, one omits the reference time, t, and reference radius, R, by assuming equilibrium (and thus time invariance of the ensemble) and averaging over all sample positions, yielding: C(r) = ⟨s1(R) ⋅ s2(R+r)⟩ − ⟨s1⟩⟨s2⟩, where, again, the choice of whether to subtract the uncorrelated variables differs among fields. The radial distribution function is an example of an equal-time correlation function where the uncorrelated reference is generally not subtracted. Equal-time spin-spin correlation functions have been measured for a wide variety of materials and conditions. Definitions: Equilibrium equal-position (temporal) correlation functions One might also be interested in the temporal evolution of microscopic variables; in other words, how the value of a microscopic variable at a given position and time, R and t, influences the value of the same microscopic variable at a later time, t+τ (and usually at the same position). Such temporal correlations are quantified via equal-position correlation functions, C(0,τ). 
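Before turning to temporal correlations, the equal-time definition can be made concrete numerically. The following NumPy sketch (an illustration added here, not part of the original text; the one-dimensional periodic spin chain and the function name are assumptions) estimates C(r) by averaging the product s(R)s(R+r) over all reference sites R and subtracting the uncorrelated background ⟨s⟩²:

```python
import numpy as np

def equal_time_correlation(spins: np.ndarray, r: int) -> float:
    """Estimate C(r) = <s(R) s(R+r)> - <s(R)><s(R+r)> for a 1-D spin
    configuration, averaging over all sites R with periodic boundary
    conditions (the sample average stands in for the ensemble average)."""
    shifted = np.roll(spins, -r)           # s(R + r) with wrap-around
    correlated = np.mean(spins * shifted)  # <s(R) s(R+r)>
    background = np.mean(spins) ** 2       # <s(R)><s(R+r)> at equilibrium
    return float(correlated - background)

# Example: uncorrelated random Ising spins should give C(r) ~ 0 for r > 0.
rng = np.random.default_rng(0)
spins = rng.choice([-1.0, 1.0], size=10_000)
print([round(equal_time_correlation(spins, r), 4) for r in range(4)])
```

For r = 0 the estimate reduces to the variance of the spin variable; for uncorrelated spins it fluctuates around zero at any r > 0.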
Equal-position correlation functions are defined analogously to the equal-time correlation functions above, but we now neglect spatial dependencies by setting r = 0, yielding: C(0,τ) = ⟨s1(R,t) ⋅ s2(R,t+τ)⟩ − ⟨s1(R,t)⟩⟨s2(R,t+τ)⟩. Assuming equilibrium (and thus time invariance of the ensemble) and averaging over all sites in the sample gives a simpler expression for the equal-position correlation function, as for the equal-time correlation function: C(τ) = ⟨s1(t) ⋅ s2(t+τ)⟩ − ⟨s1⟩⟨s2⟩. The above assumption may seem non-intuitive at first: how can an ensemble which is time-invariant have a non-uniform temporal correlation function? Temporal correlations remain relevant in equilibrium systems because a time-invariant, macroscopic ensemble can still have non-trivial temporal dynamics microscopically. One example is diffusion. A single-phase system at equilibrium has a homogeneous composition macroscopically. However, if one watches the microscopic movement of each atom, fluctuations in composition are constantly occurring due to the quasi-random walks taken by the individual atoms. Statistical mechanics allows one to make insightful statements about the temporal behavior of such fluctuations of equilibrium systems. This is discussed below in the section on the temporal evolution of correlation functions and Onsager's regression hypothesis. Definitions: Generalization beyond equilibrium correlation functions All of the above correlation functions have been defined in the context of equilibrium statistical mechanics. However, it is possible to define correlation functions for systems away from equilibrium. Examining the general definition of C(r,τ), it is clear that one can define the random variables used in these correlation functions, such as atomic positions and spins, away from equilibrium. As such, their scalar product is well-defined away from equilibrium. The operation which is no longer well-defined away from equilibrium is the average over the equilibrium ensemble. This averaging process for non-equilibrium systems is typically replaced by averaging the scalar product across the entire sample. This is typical in scattering experiments and computer simulations, and is often used to measure the radial distribution functions of glasses. Definitions: One can also define averages over states for systems perturbed slightly from equilibrium. See, for example, http://xbeams.chem.yale.edu/~batista/vaa/node56.html Measuring correlation functions: Correlation functions are typically measured with scattering experiments. For example, x-ray scattering experiments directly measure electron-electron equal-time correlations. From knowledge of elemental structure factors, one can also measure elemental pair correlation functions. See Radial distribution function for further information. Equal-time spin–spin correlation functions are measured with neutron scattering as opposed to x-ray scattering. Neutron scattering can also yield information on pair correlations. For systems composed of particles larger than about one micrometer, optical microscopy can be used to measure both equal-time and equal-position correlation functions. Optical microscopy is thus common for colloidal suspensions, especially in two dimensions. Time evolution of correlation functions: In 1931, Lars Onsager proposed that the regression of microscopic thermal fluctuations at equilibrium follows the macroscopic law of relaxation of small non-equilibrium disturbances. This is known as the Onsager regression hypothesis. 
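The equal-position correlation function defined above can be estimated the same way from a recorded time series of a single microscopic variable. A minimal sketch (illustrative only; the AR(1) toy trajectory below merely stands in for real simulation data) shows the correlation decaying toward zero at large lag τ:

```python
import numpy as np

def equal_position_correlation(series: np.ndarray, lag: int) -> float:
    """Estimate C(tau) = <s(t) s(t+tau)> - <s(t)><s(t+tau)> from a
    uniformly sampled time series of one microscopic variable,
    averaging over all reference times t (time invariance assumed)."""
    if lag == 0:
        return float(np.var(series))
    a, b = series[:-lag], series[lag:]
    return float(np.mean(a * b) - np.mean(a) * np.mean(b))

# Example: an autoregressive toy trajectory decorrelates exponentially,
# mimicking the relaxation of an equilibrium fluctuation.
rng = np.random.default_rng(1)
s = np.empty(50_000)
s[0] = 0.0
for t in range(1, len(s)):
    s[t] = 0.95 * s[t - 1] + rng.normal()   # AR(1) update
print([round(equal_position_correlation(s, lag), 2) for lag in (0, 10, 50, 200)])
```

The printed values shrink with increasing lag, which is the numerical counterpart of the "forgetting" of initial conditions discussed next.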
As the values of microscopic variables separated by large timescales, $\tau$, should be uncorrelated beyond what we would expect from thermodynamic equilibrium, the evolution in time of a correlation function can be viewed from a physical standpoint as the system gradually 'forgetting' the initial conditions placed upon it via the specification of some microscopic variable. There is an intuitive connection between the time evolution of correlation functions and the time evolution of macroscopic systems: on average, the correlation function evolves in time in the same manner as if the system had been prepared in the conditions specified by the correlation function's initial value and then allowed to evolve. Equilibrium fluctuations of the system can be related to its response to external perturbations via the fluctuation-dissipation theorem. The connection between phase transitions and correlation functions: Continuous phase transitions, such as order-disorder transitions in metallic alloys and ferromagnetic-paramagnetic transitions, involve a transition from an ordered to a disordered state. In terms of correlation functions, the equal-time correlation function is non-zero for all lattice points below the critical temperature, and is non-negligible for only a fairly small radius above the critical temperature. As the phase transition is continuous, the length over which the microscopic variables are correlated, $\xi$, must transition continuously from being infinite to finite when the material is heated through its critical temperature. This gives rise to a power-law dependence of the correlation function on distance at the critical point, quantified for the case of a ferromagnetic material in the section on magnetism below. Applications: Magnetism In a spin system, the equal-time correlation function is especially well-studied. It describes the canonical ensemble (thermal) average of the scalar product of the spins at two lattice points over all possible orderings:

$$C(r) = \langle s(R)\cdot s(R+r)\rangle - \langle s(R)\rangle\langle s(R+r)\rangle.$$

Here the brackets mean the above-mentioned thermal average. The qualitative behavior of this function differs for a ferromagnetic material below, at, and above its Curie temperature. Even in a magnetically disordered phase, spins at different positions are correlated, i.e., if the distance $r$ is very small (compared to some length scale $\xi$), the interaction between the spins will cause them to be correlated. Applications: The alignment that would naturally arise as a result of the interaction between spins is destroyed by thermal effects. At high temperatures exponentially-decaying correlations are observed with increasing distance, with the correlation function being given asymptotically by

$$C(r) \approx \frac{1}{r^{\vartheta}}\,e^{-r/\xi},$$

where $r$ is the distance between spins, $d$ is the dimension of the system, $\xi$ is the correlation length, and $\vartheta$ is an exponent whose value depends on whether the system is in the disordered phase (i.e. above the critical point) or in the ordered phase (i.e. below the critical point). At high temperatures, the correlation decays to zero exponentially with the distance between the spins. The same exponential decay as a function of radial distance is also observed below $T_c$, but with the limit at large distances being the squared mean magnetization $\langle M\rangle^2$. Precisely at the critical point, an algebraic behavior is seen,

$$C(r) \approx \frac{1}{r^{\,d-2+\eta}},$$

where $\eta$ is a critical exponent, which does not have any simple relation with the non-critical exponent $\vartheta$ introduced above.
For example, the exact solution of the two-dimensional Ising model (with short-ranged ferromagnetic interactions) gives, precisely at criticality, $\eta = \tfrac{1}{4}$, but above criticality $\vartheta = \tfrac{1}{2}$ and below criticality $\vartheta = 2$. As the temperature is lowered, thermal disordering is lowered, and in a continuous phase transition the correlation length diverges, as the correlation length must transition continuously from a finite value above the phase transition to an infinite value below it:

$$\xi \propto |T - T_c|^{-\nu},$$

with another critical exponent $\nu$. This power-law correlation is responsible for the scaling seen in these transitions. All exponents mentioned are independent of temperature. Applications: They are in fact universal, i.e. found to be the same in a wide variety of systems. Applications: Radial distribution functions One common correlation function is the radial distribution function, which appears often in statistical mechanics and fluid mechanics. The correlation function can be calculated in exactly solvable models (one-dimensional Bose gas, spin chains, the Hubbard model) by means of the quantum inverse scattering method and the Bethe ansatz. In an isotropic XY model, time and temperature correlations were evaluated by Its, Korepin, Izergin & Slavnov. Applications: Higher order correlation functions Higher-order correlation functions involve multiple reference points, and are defined through a generalization of the above correlation function by taking the expected value of the product of more than two random variables:

$$C_{i_1 i_2 \cdots i_n}(s_1, s_2, \ldots, s_n) = \langle X_{i_1}(s_1)\, X_{i_2}(s_2) \cdots X_{i_n}(s_n)\rangle.$$

However, such higher-order correlation functions are relatively difficult to interpret and measure. For example, in order to measure the higher-order analogues of pair distribution functions, coherent x-ray sources are needed. Both the theory of such analysis and the experimental measurement of the needed x-ray cross-correlation functions are areas of active research.
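As an illustration of how such higher-order estimators generalize the two-point estimator shown earlier, the following sketch (my own, not from the article) computes a raw site-averaged n-point moment for a one-dimensional configuration; no uncorrelated reference is subtracted here:

```python
import numpy as np

def higher_order_correlation(spins, *separations):
    """Site-averaged n-point correlation <s(R) s(R+r1) ... s(R+r_{n-1})>
    for a 1D configuration with periodic boundaries (no subtraction)."""
    spins = np.asarray(spins, dtype=float)
    product = spins.copy()
    for r in separations:
        product *= np.roll(spins, -r)
    return product.mean()

rng = np.random.default_rng(1)
config = rng.choice([-1.0, 1.0], size=10_000)
print(higher_order_correlation(config, 1))      # two-point, r = 1
print(higher_order_correlation(config, 1, 3))   # three-point, r1 = 1, r2 = 3
```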
**Calán/Tololo Survey** Calán/Tololo Survey: The Calán/Tololo Supernova Survey was a supernova survey that ran from 1989 to 1995 at the University of Chile and the Cerro Tololo Inter-American Observatory (CTIO) to measure a Hubble diagram out to redshifts of 0.1. It was founded by Mario Hamuy, José Maza Sancho, Mark M. Phillips, and Nicholas B. Suntzeff in 1989, out of discussions at the UC Santa Cruz meeting on supernovae about how to improve the Hubble diagram using Type Ia supernovae. It was also motivated by the suggestion of Allan Sandage to restart a supernova survey after the Sandage and Tammann survey failed in 1986 due to poor-quality photographic plates. The survey built on the original supernova survey of Maza, carried out with the f/3 Maksutov camera at the Cerro Roble Observatory of the University of Chile between 1979 and 1984. The survey used the CTIO Curtis Schmidt telescope with IIa-O photographic plates, each plate covering a field of 25 square degrees on the sky. The plates were developed and sent to Santiago, Chile, the next morning and searched for supernovae at the Department of Astronomy at the University of Chile. Any supernova candidates were then observed the next night using the 0.9 m telescope at CTIO with a CCD camera. This was one of the first studies in astronomy where telescope time was scheduled to observe objects not yet discovered. Calán/Tololo Survey: The survey discovered 50 supernovae between 1990 and 1993, of which 32 were Type Ia supernovae. The survey provided a uniform photometric and spectroscopic dataset of all classes of supernovae, which led to the discovery of a method of using Type Ia supernovae as standard candles, the Phillips relationship, as well as providing data for a Hubble diagram of Type II supernovae using the expanding photosphere method. In 1994, the Calán/Tololo team formed a parallel project, the High-Z Supernova Search Team, organized by Nicholas Suntzeff and Brian Schmidt, which later discovered the accelerated expansion of the universe in 1998. The calibration of Type Ia supernovae as standard candles led to precise measurements of the Hubble constant H0 and the deceleration parameter q0, the latter indicating the presence of a dark energy or cosmological constant dominating the mass/energy of the universe. The Calán/Tololo data on nearby Type Ia supernovae were used as the anchors for the Hubble flow measurements both by the High-Z Supernova Search Team and by the Supernova Cosmology Project. Their pioneering work was cited in the award of the 2011 Nobel Prize in Physics.
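To illustrate what "measuring a Hubble diagram" at low redshift involves, here is a minimal sketch of the general idea (my own illustration with invented numbers, not the survey's actual pipeline). At redshifts well below 1, the Hubble law is the straight line v = cz ≈ H0·d, so H0 follows from a least-squares fit of recession velocity against standardized-candle distance:

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light, km/s

# Mock low-redshift sample: (redshift, distance in Mpc); values are invented.
z = np.array([0.01, 0.02, 0.035, 0.05, 0.08, 0.10])
d_mpc = np.array([43.0, 86.0, 150.0, 215.0, 342.0, 430.0])

# Fit the line v = H0 * d through the origin by least squares.
v = C_KM_S * z
h0 = np.sum(v * d_mpc) / np.sum(d_mpc ** 2)
print(f"H0 ~ {h0:.1f} km/s/Mpc")
```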
**Nanoparticles for drug delivery to the brain** Nanoparticles for drug delivery to the brain: Nanoparticles for drug delivery to the brain is a method for transporting drug molecules across the blood–brain barrier (BBB) using nanoparticles. These drugs cross the BBB and deliver pharmaceuticals to the brain for therapeutic treatment of neurological disorders. These disorders include Parkinson's disease, Alzheimer's disease, schizophrenia, depression, and brain tumors. Part of the difficulty in finding cures for these central nervous system (CNS) disorders is that there is as yet no truly efficient delivery method for drugs to cross the BBB. Antibiotics, antineoplastic agents, and a variety of CNS-active drugs, especially neuropeptides, are a few examples of molecules that cannot pass the BBB alone. With the aid of nanoparticle delivery systems, however, studies have shown that some drugs can now cross the BBB, and even exhibit lower toxicity and fewer adverse effects throughout the body. Toxicity is an important consideration in pharmacology because high toxicity levels in the body can be detrimental to the patient by affecting other organs and disrupting their function. Further, the BBB is not the only physiological barrier to drug delivery to the brain. Other biological factors influence how drugs are transported throughout the body and how they target specific locations for action. Some of these pathophysiological factors include blood flow alterations, edema and increased intracranial pressure, metabolic perturbations, and altered gene expression and protein synthesis. Though many obstacles make developing a robust delivery system difficult, nanoparticles provide a promising mechanism for drug transport to the CNS. Background: The first successful delivery of a drug across the BBB occurred in 1995. The drug used was the hexapeptide dalargin, an anti-nociceptive peptide that cannot cross the BBB alone. It was encapsulated in polysorbate 80-coated nanoparticles and intravenously injected. This was a major breakthrough in the nanoparticle drug delivery field, and it helped advance research and development toward clinical trials of nanoparticle delivery systems. Nanoparticles range in size from 10 to 1000 nm (1 µm), and they can be made from natural or artificial polymers, lipids, dendrimers, and micelles. Most polymers used for nanoparticle drug delivery systems are natural, biocompatible, and biodegradable, which helps prevent contamination in the CNS. Several current methods for drug delivery to the brain include the use of liposomes, prodrugs, and carrier-mediated transporters. Many different delivery routes exist to transport these drugs into the body, such as peroral, intranasal, intravenous, and intracranial. For nanoparticles, most studies have shown the greatest progress with intravenous delivery. Along with delivery and transport methods, there are several means of functionalizing, or activating, the nanoparticle carriers. These means include dissolving or absorbing a drug throughout the nanoparticle, encapsulating a drug inside the particle, or attaching a drug to the surface of the particle. Types of nanoparticles for CNS drug delivery: Lipid-based One type of nanoparticle involves the use of liposomes as drug molecule carriers. A standard liposome has a phospholipid bilayer separating its interior from its exterior.
Types of nanoparticles for CNS drug delivery: Liposomes are composed of vesicular bilayers, lamellae, made of biocompatible and biodegradable lipids such as sphingomyelin, phosphatidylcholine, and glycerophospholipids. Cholesterol, a type of lipid, is also often incorporated in the lipid-nanoparticle formulation. Cholesterol can increase the stability of a liposome and prevent leakage from the bilayer because its hydroxyl group can interact with the polar heads of the bilayer phospholipids. Liposomes have the potential to protect the drug from degradation, target sites for action, and reduce toxicity and adverse effects. Lipid nanoparticles can be manufactured by high-pressure homogenization, a method already used to produce parenteral emulsions. This process can ultimately form a uniform dispersion of small droplets in a fluid substance by subdividing particles until the desired consistency is acquired. This manufacturing process is already scaled and in use in the food industry, which makes it more appealing for researchers and for the drug delivery industry. Types of nanoparticles for CNS drug delivery: Liposomes can also be functionalized by attaching various ligands on the surface to enhance brain-targeted delivery. Types of nanoparticles for CNS drug delivery: Cationic liposomes Another type of lipid nanoparticle that can be used for drug delivery to the brain is the cationic liposome, made of positively charged lipid molecules. One example of cationic liposomes uses bolaamphiphiles, which contain hydrophilic groups surrounding a hydrophobic chain to strengthen the boundary of the nano-vesicle containing the drug. Bolaamphiphile nano-vesicles can cross the BBB, and they allow controlled release of the drug to target sites. Lipoplexes can also be formed from cationic liposomes and DNA solutions, to yield transfection agents. Cationic liposomes cross the BBB through adsorption-mediated endocytosis followed by internalization in the endosomes of the endothelial cells. By transfecting endothelial cells with lipoplexes, physical alterations in the cells could be made. These physical changes could potentially improve how some nanoparticle drug carriers cross the BBB. Types of nanoparticles for CNS drug delivery: Metallic Metal nanoparticles are promising as carriers for drug delivery to the brain. Common metals used for nanoparticle drug delivery are gold, silver, and platinum, owing to their biocompatibility. These metallic nanoparticles are used due to their large surface-area-to-volume ratio, geometric and chemical tunability, and endogenous antimicrobial properties. Silver cations released from silver nanoparticles can bind to the negatively charged cellular membrane of bacteria and increase membrane permeability, allowing foreign chemicals to enter the intracellular fluid. Types of nanoparticles for CNS drug delivery: Metal nanoparticles are chemically synthesized using reduction reactions. For example, drug-conjugated silver nanoparticles are created by reducing silver nitrate with sodium borohydride in the presence of an ionic drug compound. The drug binds to the surface of the silver, stabilizing the nanoparticles and preventing them from aggregating. Metallic nanoparticles typically cross the BBB via transcytosis. Nanoparticle delivery through the BBB can be increased by introducing peptide conjugates to improve permeability to the central nervous system.
For instance, recent studies have shown an improvement in gold nanoparticle delivery efficiency by conjugating a peptide that binds to the transferrin receptors expressed in brain endothelial cells. Types of nanoparticles for CNS drug delivery: Solid lipid Solid lipid nanoparticles (SLNs) are lipid nanoparticles with a solid interior. SLNs can be made by replacing the liquid lipid oil used in the emulsion process with a solid lipid. In solid lipid nanoparticles, the drug molecules are dissolved in the particle's solid hydrophobic lipid core, called the drug payload, which is surrounded by an aqueous solution. Many SLNs are developed from triglycerides, fatty acids, and waxes. High-pressure homogenization or micro-emulsification can be used for manufacturing. Further, functionalizing the surface of solid lipid nanoparticles with polyethylene glycol (PEG) can result in increased BBB permeability. Other colloidal carriers, such as liposomes, polymeric nanoparticles, and emulsions, suffer from reduced stability, shelf life, and encapsulation efficiency. Solid lipid nanoparticles are designed to overcome these shortcomings and offer excellent drug release and physical stability in addition to targeted delivery of drugs. Types of nanoparticles for CNS drug delivery: Nanoemulsions Another form of nanoparticle delivery system is the oil-in-water emulsion prepared at the nanoscale. This process uses common biocompatible oils such as triglycerides and fatty acids, and combines them with water and surface-coating surfactants. Oils rich in omega-3 fatty acids in particular contain factors that aid in penetrating the tight junctions of the BBB. Types of nanoparticles for CNS drug delivery: Polymer-based Other nanoparticles are polymer-based, made from polymers such as polylactic acid (PLA), poly D,L-glycolide (PLG), polylactide-co-glycolide (PLGA), and polycyanoacrylate (PCA). Some studies have found that polymeric nanoparticles may provide better results for drug delivery than lipid-based nanoparticles because they may increase the stability of the drugs or proteins being transported. Polymeric nanoparticles may also offer beneficial controlled-release mechanisms. Types of nanoparticles for CNS drug delivery: Nanoparticles made from biodegradable polymers have the ability to target specific organs and tissues in the body, to carry DNA for gene therapy, and to deliver larger molecules such as proteins, peptides, and even genes. To manufacture these polymeric nanoparticles, the drug molecules are first dissolved and then encapsulated or attached to a polymer nanoparticle matrix. Three different structures can then be obtained from this process: nanoparticles, nanocapsules (in which the drug is encapsulated and surrounded by the polymer matrix), and nanospheres (in which the drug is dispersed throughout the polymeric matrix in a spherical form). One of the most important traits for nanoparticle delivery systems is that they must be biodegradable on the scale of a few days. A few common polymer materials used for drug delivery studies are polybutyl cyanoacrylate (PBCA), poly(isohexyl cyanoacrylate) (PIHCA), polylactic acid (PLA), and polylactide-co-glycolide (PLGA). PBCA undergoes degradation through enzymatic cleavage of its ester bond on the alkyl side chain to produce water-soluble byproducts.
PBCA is also the fastest-degrading of these materials, with studies showing an 80% reduction 24 hours after intravenous injection. PIHCA, however, was recently found to display an even lower degradation rate, which in turn further decreases toxicity. Due to this slight advantage, PIHCA is currently undergoing phase III clinical trials for transporting the drug doxorubicin as a treatment for hepatocellular carcinomas. Types of nanoparticles for CNS drug delivery: Human serum albumin (HSA) and chitosan are also materials of interest for the generation of nanoparticle delivery systems. Using albumin nanoparticles for stroke therapy can overcome numerous limitations. For instance, albumin nanoparticles can enhance BBB permeability, increase solubility, and increase half-life in circulation. Patients who have brain cancer overexpress albumin-binding proteins, such as SPARC and gp60, in their BBB and tumor cells, naturally increasing the uptake of albumin into the brain. Using this relationship, researchers have formed albumin nanoparticles that co-encapsulate two anticancer drugs, paclitaxel and fenretinide, modified with low-molecular-weight protamine (LMWP), a type of cell-penetrating peptide, for anti-glioma therapy. Once injected into the patient's body, the albumin nanoparticles can cross the BBB more easily, bind to the proteins and penetrate glioma cells, and then release the contained drugs. This nanoparticle formulation enhances tumor-targeting delivery efficiency and improves the solubility issue of hydrophobic drugs. Specifically, cationic bovine serum albumin-conjugated tanshinone IIA PEGylated nanoparticles injected into an MCAO rat model decreased the volume of infarction and neuronal apoptosis. Chitosan, a naturally abundant polysaccharide, is particularly useful due to its biocompatibility and lack of toxicity. With its adsorptive and mucoadhesive properties, chitosan can overcome limitations of intranasal administration to the brain. It has been shown that cationic chitosan nanoparticles interact with the negatively charged brain endothelium. Coating these polymeric nanoparticle devices with different surfactants can also aid BBB crossing and uptake in the brain. Surfactants such as polysorbate 80, 20, 40, and 60 and poloxamer 188 demonstrated positive drug delivery through the blood–brain barrier, whereas other surfactants did not yield the same results. It has also been shown that functionalizing the surface of nanoparticles with polyethylene glycol (PEG) can induce the "stealth effect", allowing the drug-loaded nanoparticle to circulate throughout the body for prolonged periods of time. Further, the stealth effect, caused in part by the hydrophilic and flexible properties of the PEG chains, facilitates localization of the drug at target sites in tissues and organs. Mechanisms for delivery: Liposomes A mechanism for liposome transport across the BBB is lipid-mediated free diffusion, a type of facilitated diffusion, or lipid-mediated endocytosis. There exist many lipoprotein receptors which bind lipoproteins to form complexes that in turn transport the liposome nano-delivery system across the BBB. Apolipoprotein E (apoE) is a protein that facilitates transport of lipids and cholesterol. ApoE constituents bind to nanoparticles, and this complex then binds to a low-density lipoprotein receptor (LDLR) in the BBB, allowing transport to occur.
Mechanisms for delivery: Polymeric nanoparticles The mechanism for the transport of polymer-based nanoparticles across the BBB has been characterized as receptor-mediated endocytosis by the brain capillary endothelial cells. Transcytosis then occurs to transport the nanoparticles across the tight junctions of endothelial cells and into the brain. Surface-coating nanoparticles with surfactants such as polysorbate 80 or poloxamer 188 was also shown to increase uptake of the drug into the brain. This mechanism relies on certain receptors located on the luminal surface of endothelial cells of the BBB. Ligands coating the nanoparticle's surface bind to specific receptors and cause a conformational change. Once the nanoparticle is bound to these receptors, transcytosis can commence; this involves the formation of vesicles as the plasma membrane pinches off around the internalized nanoparticle system. Mechanisms for delivery: Additional receptors identified for receptor-mediated endocytosis of nanoparticle delivery systems are the scavenger receptor class B type I (SR-BI), the LDL-receptor-related protein 1 (LRP1), the transferrin receptor, and the insulin receptor. As long as a receptor exists on the endothelial surface of the BBB, a corresponding ligand can be attached to the nanoparticle's surface to functionalize it so that it can bind and undergo endocytosis. Mechanisms for delivery: Another mechanism is adsorption-mediated transcytosis, in which electrostatic interactions mediate nanoparticle crossing of the BBB. Cationic nanoparticles (including cationic liposomes) are of interest for this mechanism, because their positive charges assist binding to the brain's endothelial cells. Using TAT peptides, a type of cell-penetrating peptide, to functionalize the surface of cationic nanoparticles can further improve drug transport into the brain. Mechanisms for delivery: Magnetic and magnetoelectric nanoparticles In contrast to the above mechanisms, delivery with magnetic fields does not strongly depend on the biochemistry of the brain. In this case, nanoparticles are literally pulled across the BBB by applying a magnetic field gradient. The nanoparticles can be pulled into, as well as removed from, the brain merely by controlling the direction of the gradient. For the approach to work, the nanoparticles must have a non-zero magnetic moment and a diameter of less than 50 nm. Both magnetic and magnetoelectric nanoparticles (MENs) satisfy these requirements. However, only MENs display a non-zero magnetoelectric (ME) effect. Due to the ME effect, MENs can provide direct access to local intrinsic electric fields at the nanoscale to enable two-way communication with the neural network at the single-neuron level. MENs, proposed by the research group of Professor Sakhrat Khizroev at Florida International University (FIU), have been used for targeted drug delivery and externally controlled release across the BBB to treat HIV and brain tumors, as well as to wirelessly stimulate neurons deep in the brain for the treatment of neurodegenerative diseases such as Parkinson's disease. Mechanisms for delivery: Focused ultrasound Studies have shown that focused ultrasound bursts can be used noninvasively to disrupt tight junctions at desired locations of the BBB, allowing increased passage of particles at those locations. This disruption can last up to four hours after burst administration.
Focused ultrasound works by generating oscillating microbubbles, which physically interact with the cells of the BBB by oscillating at a frequency that can be tuned by the ultrasound burst. This physical interaction is believed to cause cavitation and ultimately the disintegration of the tight-junction complexes, which may explain why the effect lasts for several hours. However, the energy applied by the ultrasound can result in tissue damage. Fortunately, studies have demonstrated that this risk can be reduced if preformed microbubbles are injected before the focused ultrasound is applied, reducing the energy required from the ultrasound. This technique has applications in the treatment of various diseases. For example, one study has shown that using focused ultrasound with oscillating bubbles loaded with the chemotherapeutic drug carmustine facilitates the safe treatment of glioblastoma in an animal model. This drug, like many others, normally requires large dosages to reach the target brain tissue by diffusion from the blood, leading to systemic toxicity and the possibility of multiple harmful side effects manifesting throughout the body. Focused ultrasound, however, has the potential to increase the safety and efficacy of drug delivery to the brain. Toxicity: A study was performed to assess the toxic effects of doxorubicin-loaded polymeric nanoparticle systems. It was found that doses up to 400 mg/kg of PBCA nanoparticles alone did not cause any toxic effects on the organism. These low toxicity effects can most likely be attributed to the controlled release and modified biodistribution of the drug due to the traits of the nanoparticle delivery system. Toxicity is a highly important factor and limit of drug delivery studies, and a major area of interest in research on nanoparticle delivery to the brain. Toxicity: Metal nanoparticles are associated with risks of neurotoxicity and cytotoxicity. These heavy metals generate reactive oxygen species, which cause oxidative stress and damage the cells' mitochondria and endoplasmic reticulum. This leads to further issues in cellular toxicity, such as damage to DNA and disruption of cellular pathways. Silver nanoparticles in particular have a higher degree of toxicity than other metal nanoparticles such as gold or iron. Silver nanoparticles can circulate through the body and accumulate easily in multiple organs, as discovered in a study on silver nanoparticle distribution in rats. Traces of silver accumulated in the rats' lungs, spleen, kidneys, liver, and brain after the nanoparticles were injected subcutaneously. In addition, silver nanoparticles generate more reactive oxygen species than other metals, which leads to an overall larger issue of toxicity. Research: In the early 21st century, extensive research has been occurring in the field of nanoparticle drug delivery systems to the brain. One of the common diseases studied in neuroscience is Alzheimer's disease. Many studies have been done to show how nanoparticles can be used as a platform to deliver therapeutic drugs to patients with the disease. A few Alzheimer's drugs that have been studied in particular are rivastigmine, tacrine, quinoline, piperine, and curcumin. PBCA, chitosan, and PLGA nanoparticles were used as delivery systems for these drugs. Overall, the results from each drug injection with these nanoparticles showed remarkable improvements in the effects of the drug relative to non-nanoparticle delivery systems.
This suggests that nanoparticles could provide a promising solution for getting these drugs across the BBB. One factor that must still be considered and accounted for is nanoparticle accumulation in the body. With the long-term and frequent injections that are often required to treat chronic diseases such as Alzheimer's disease, polymeric nanoparticles could potentially build up in the body, causing undesirable effects. This concern would have to be assessed further in order to analyze these possible effects and mitigate them.
**Nitrendipine** Nitrendipine: Nitrendipine is a dihydropyridine calcium channel blocker. It is used in the treatment of primary (essential) hypertension to decrease blood pressure and can reduce the cardiotoxicity of cocaine. It was patented in 1971 and approved for medical use in 1985. Medical uses: Nitrendipine is given to hypertensive individuals as a 20 mg oral tablet once daily. This amount is effective in reducing blood pressure by 15–20% within 1–2 hours of administration. With long-term treatment, the dosage may rise to as much as 40 mg/day; in elderly individuals, a lower dosage of up to 5 mg/day may be equally effective (this reduction in drug amount is attributed to decreased liver function or "first-pass" metabolism). Once ingested, nitrendipine is absorbed into the blood and binds to plasma proteins: the majority (98%) of the drug is protein-bound, and 70–80% of its inactive polar metabolites are also bound to plasma proteins. Following hepatic metabolism, 80% of the 20 mg dose can be recovered in the first 96 hours as inactive polar metabolites. The volume of distribution of the drug is 2–6 L/kg, and its half-life is 12–24 hours. The reported side effects include headache, flushing, edema, and palpitations, all of which can be attributed to the vasodilating effect of this drug. Mechanism of action: Once nitrendipine is ingested, it is absorbed by the gut and metabolized by the liver before it enters the systemic circulation and reaches the cells of the smooth muscles and cardiac muscle cells. It binds more effectively to L-type calcium channels in smooth muscle cells because of these cells' lower resting membrane potential. Nitrendipine diffuses into the membrane and binds to its high-affinity binding site on the inactivated L-type calcium channel, located in between the four intermembrane components of the α1 subunit. The exact mechanism of action of nitrendipine is unknown, but it is believed to have important tyrosine and threonine residues in its binding pocket, and its binding is thought to interfere with the voltage sensor and gating mechanism of the channel, following a domain-interface model of binding. In hypertension, the binding of nitrendipine decreases the probability that L-type calcium channels are open and reduces the influx of calcium. The reduced levels of calcium prevent contraction within these smooth muscle cells, and prevention of contraction enables smooth muscle dilation. Dilation of the vasculature reduces total peripheral resistance, which decreases the workload on the heart and helps prevent scarring of the heart or heart failure. Mechanism of action: Nitrendipine has additionally been found to act as an antagonist of the mineralocorticoid receptor, i.e. as an antimineralocorticoid. Stereochemistry: Nitrendipine contains a stereocenter and can exist as either of two enantiomers. The pharmaceutical drug is a racemate, an equal mixture of the (R)- and (S)-forms.
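As a rough illustration of what the 12–24 hour half-life quoted above implies for plasma levels, here is a minimal sketch of first-order (exponential) elimination in a one-compartment model; this is my own illustrative arithmetic, not a dosing model from the article:

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of drug remaining under first-order elimination:
    f(t) = 0.5 ** (t / t_half)."""
    return 0.5 ** (t_hours / half_life_hours)

# Illustrative only: the two bounds of the quoted half-life range.
for t_half in (12.0, 24.0):
    remaining = fraction_remaining(24.0, t_half)
    print(f"half-life {t_half:>4.0f} h: {remaining:.0%} left 24 h after a dose")
```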
**Chromatin structure remodeling (RSC) complex** Chromatin structure remodeling (RSC) complex: RSC (Remodeling the Structure of Chromatin) is a member of the ATP-dependent chromatin remodeler family. The activity of the RSC complex allows chromatin to be remodeled by altering the structure of the nucleosome. There are four subfamilies of chromatin remodelers: SWI/SNF, INO80, ISW1, and CHD. The RSC complex is a 15-subunit chromatin remodeling complex initially found in Saccharomyces cerevisiae, and it is homologous to the SWI/SNF complex found in humans. The RSC complex has ATPase activity in the presence of DNA. RSC Complex vs. SWI/SNF: While RSC and SWI/SNF are considered homologous, RSC is significantly more abundant than the SWI/SNF complex, and it is required for mitotic cell division; without the RSC complex, cells do not survive. RSC consists of 15 subunits, and at least three of these subunits are conserved between RSC and SWI/SNF. RSC and SWI/SNF are composed of very similar components, such as the catalytic subunit Sth1 in RSC and Swi2/Snf2p in SWI/SNF. Both of these components are ATPases, and both complexes also contain Arp7 and Arp9, proteins that are similar to actin. Three RSC subunits (Rsc6p, Rsc8p, and Sfh1p) are paralogous to three subunits of SWI/SNF (Swp73p, Swi3p, and Snf5p). While there are many similarities between these two chromatin remodeling complexes, they remodel different parts of chromatin. They also have opposing roles, specifically when interacting with the PHO8 promoter: RSC works to guarantee the placement of nucleosome N-3, while SWI/SNF attempts to override the placement of N-3. RSC and SWI/SNF complexes both function as chromatin remodeling complexes in humans (Homo sapiens) and the common fruit fly (Drosophila melanogaster). SWI/SNF was first discovered in a genetic screen in yeast, through a mutation causing a deficiency in mating-type switching (swi) and a mutation causing a deficiency in sucrose fermentation (snf). After this chromatin remodeling complex was discovered, the RSC complex was found when its components were discovered to be homologous to Snf2/Swi2p of the SWI/SNF complex. RSC Complex vs. SWI/SNF: Based on research done using BLAST sequence comparisons, it is believed that the yeast RSC complex is even more similar to the human SWI/SNF complex than it is to the yeast SWI/SNF complex. The Role of RSC: The role of nucleosomes is a very important topic of research. It is known that nucleosomes interfere with the binding of transcription factors to DNA, and they can therefore control transcription and replication. With the help of an in vitro experiment using yeast, it was discovered that RSC is required for nucleosome remodeling. There is evidence that RSC does not remodel the nucleosomes on its own; it uses information from enzymes to help position nucleosomes. The Role of RSC: The ATPase activity of the RSC complex is activated by single-stranded, double-stranded, and/or nucleosomal DNA, while some of the other chromatin remodeling complexes are only stimulated by one of these DNA types. The RSC complex (specifically Rsc8 and Rsc30) is crucial when fixing double-stranded breaks via non-homologous end joining (NHEJ) in yeast. This repair mechanism is important for cell survival, as well as for maintaining an organism's genome. These double-stranded breaks are typically caused by radiation, and they can be detrimental to the genome. The breaks can lead to mutations that reposition a chromosome and can even lead to the entire loss of a chromosome.
The mutations associated with double-stranded breaks have been linked to cancer and other deadly genetic diseases. RSC does not only repair double-stranded breaks by NHEJ; it also repairs these breaks using homologous recombination, with the help of the SWI/SNF complex. SWI/SNF is recruited first, before the two homologous chromosomes bind, and then RSC is recruited to help complete the repair. Mechanism of Action in dsDNA: A single-molecule study using magnetic tweezers and linear DNA observed that RSC generates DNA loops in vitro while simultaneously generating negative supercoils in the template. These loops can consist of hundreds of base pairs, but the length depends on how tightly the DNA is wound, as well as on how much ATP is present during this translocation. Not only could RSC generate loops, it was also able to relax these loops, meaning that the translocation of RSC is reversible. Hydrolysis of ATP allows the complex to translocate the DNA into a loop. RSC can release the loop either by translocating back to the original state at a comparable velocity, or by losing one of its two contacts. RSC components: The following is a list of RSC components that have been identified in yeast, their corresponding human orthologs, and their functions:
**Encoignure** Encoignure: Encoignure is a type of furniture located in a corner of a room. In French, the word literally means the angle, or return, formed by the junction of two walls. Since the 20th century, it has been chiefly used to designate a small armoire, commode, cabinet, or cupboard made to fit a corner. A chair placed in a corner is referred to as a chaise encoignure. Originally the design came from France, hence the name: pieces in the Louis Quinze or Louis Seize style, in lacquer or in mahogany and elaborately mounted in gilded bronze, are among the more alluring pieces from the period of grand French furniture. They were made in a vast variety of forms so far as the front was concerned, but were otherwise strictly limited by their intended corner placement. As a rule these delicate and dainty pieces were made in pairs and placed in opposite corners; frequently the tops were finished in expensive colored marble.
**Trandolapril** Trandolapril: Trandolapril is an ACE inhibitor used to treat high blood pressure. It may also be used to treat other conditions. It is similar in structure to another ACE inhibitor, ramipril, but has a cyclohexane group. It is also a prodrug and must be metabolized to its active form. It has a longer half-life than other agents in this class. It was patented in 1981 and approved for medical use in 1993. It is marketed by Abbott Laboratories under the brand name Mavik. Side effects: Side effects reported for trandolapril include nausea, vomiting, diarrhea, headache, dry cough, dizziness or lightheadedness when sitting up or standing, hypotension, and fatigue. Possible drug interactions: Patients also taking diuretics may experience an excessive reduction of blood pressure after initiation of therapy with trandolapril. Trandolapril can reduce the potassium loss caused by thiazide diuretics and increase serum potassium when used alone; therefore, hyperkalemia is a possible risk. Increased serum lithium levels can occur in patients who are also taking lithium. Pregnancy and lactation: Trandolapril is teratogenic (US: pregnancy category D) and can cause birth defects and even death of the developing fetus. The highest risk to the fetus is during the second and third trimesters. When pregnancy is detected, trandolapril should be discontinued as soon as possible. Trandolapril should not be administered to nursing mothers. Additional effects: Combination therapy with paricalcitol and trandolapril has been found to reduce fibrosis in obstructive uropathy. Pharmacology: Trandolapril is a prodrug that is de-esterified to trandolaprilat. It is believed to exert its antihypertensive effect through the renin–angiotensin–aldosterone system. Trandolapril has a half-life of about 6 hours, and trandolaprilat has a half-life of about 10 hours. Trandolaprilat has about eight times the activity of its parent drug. About one-third of trandolapril and its metabolites are excreted in the urine, and about two-thirds are excreted in the feces. Serum protein binding of trandolapril is about 80%. Mode of action: Trandolapril acts by competitive inhibition of angiotensin-converting enzyme (ACE), a key enzyme in the renin–angiotensin system, which plays an important role in regulating blood pressure.
**Geodesic** Geodesic: In geometry, a geodesic is a curve representing in some sense the shortest path (arc) between two points in a surface, or more generally in a Riemannian manifold. The term also has meaning in any differentiable manifold with a connection. It is a generalization of the notion of a "straight line". Geodesic: The noun geodesic and the adjective geodetic come from geodesy, the science of measuring the size and shape of Earth, though many of the underlying principles can be applied to any ellipsoidal geometry. In the original sense, a geodesic was the shortest route between two points on the Earth's surface. For a spherical Earth, it is a segment of a great circle (see also great-circle distance). The term has since been generalized to more abstract mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph. Geodesic: In a Riemannian manifold or submanifold, geodesics are characterised by the property of having vanishing geodesic curvature. More generally, in the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. Applying this to the Levi-Civita connection of a Riemannian metric recovers the previous notion. Geodesics are of particular importance in general relativity. Timelike geodesics in general relativity describe the motion of free-falling test particles. Introduction: A locally shortest path between two given points in a curved space, assumed to be a Riemannian manifold, can be defined by using the equation for the length of a curve (a function f from an open interval of R to the space), and then minimizing this length between the points using the calculus of variations. This has some minor technical problems because there is an infinite-dimensional space of different ways to parameterize the shortest path. It is simpler to restrict the set of curves to those that are parameterized "with constant speed" 1, meaning that the distance from f(s) to f(t) along the curve equals |s−t|. Equivalently, a different quantity may be used, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic (here "constant velocity" is a consequence of minimization). Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its width, and in so doing will minimize its energy. The resulting shape of the band is a geodesic. Introduction: It is possible that several different curves between two points minimize the distance, as is the case for two diametrically opposite points on a sphere. In such a case, any of these curves is a geodesic. A contiguous segment of a geodesic is again a geodesic. Introduction: In general, geodesics are not the same as "shortest curves" between two points, though the two concepts are closely related. The difference is that geodesics are only locally the shortest distance between points, and are parameterized with "constant speed". Going the "long way round" on a great circle between two points on a sphere is a geodesic but not the shortest path between the points. The map $t \mapsto t^2$ from the unit interval on the real number line to itself gives the shortest path between 0 and 1, but is not a geodesic because the velocity of the corresponding motion of a point is not constant. Introduction: Geodesics are commonly seen in the study of Riemannian geometry and more generally metric geometry.
In general relativity, geodesics in spacetime describe the motion of point particles under the influence of gravity alone. In particular, the path taken by a falling rock, an orbiting satellite, or the shape of a planetary orbit are all geodesics in curved spacetime. More generally, the topic of sub-Riemannian geometry deals with the paths that objects may take when they are not free, and their movement is constrained in various ways. Introduction: This article presents the mathematical formalism involved in defining, finding, and proving the existence of geodesics, in the case of Riemannian manifolds. The article Levi-Civita connection discusses the more general case of a pseudo-Riemannian manifold, and geodesic (general relativity) discusses the special case of general relativity in greater detail. Introduction: Examples The most familiar examples are the straight lines in Euclidean geometry. On a sphere, the images of geodesics are the great circles. The shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points, then there are infinitely many shortest paths between them. Geodesics on an ellipsoid behave in a more complicated way than on a sphere; in particular, they are not closed in general. Introduction: Triangles A geodesic triangle is formed by the geodesics joining each pair out of three points on a given surface. On the sphere, the geodesics are great circle arcs, forming a spherical triangle. Metric geometry: In metric geometry, a geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve γ : I → M from an interval I of the reals to the metric space M is a geodesic if there is a constant v ≥ 0 such that for any t ∈ I there is a neighborhood J of t in I such that for any t₁, t₂ ∈ J we have

$$d(\gamma(t_1),\gamma(t_2)) = v\,|t_1 - t_2|.$$

Metric geometry: This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with natural parameterization, i.e. in the above identity v = 1 and

$$d(\gamma(t_1),\gamma(t_2)) = |t_1 - t_2|.$$

If the last equality is satisfied for all t₁, t₂ ∈ I, the geodesic is called a minimizing geodesic or shortest path. In general, a metric space may have no geodesics, except constant curves. At the other extreme, any two points in a length metric space are joined by a minimizing sequence of rectifiable paths, although this minimizing sequence need not converge to a geodesic. Riemannian geometry: In a Riemannian manifold M with metric tensor g, the length L of a continuously differentiable curve γ : [a,b] → M is defined by

$$L(\gamma) = \int_a^b \sqrt{g_{\gamma(t)}\bigl(\dot\gamma(t),\dot\gamma(t)\bigr)}\,dt.$$

Riemannian geometry: The distance d(p, q) between two points p and q of M is defined as the infimum of the length taken over all continuous, piecewise continuously differentiable curves γ : [a,b] → M such that γ(a) = p and γ(b) = q. In Riemannian geometry, all geodesics are locally distance-minimizing paths, but the converse is not true. In fact, only paths that are both locally distance minimizing and parameterized proportionately to arc-length are geodesics. Another equivalent way of defining geodesics on a Riemannian manifold is to define them as the minima of the following action or energy functional

$$E(\gamma) = \frac{1}{2}\int_a^b g_{\gamma(t)}\bigl(\dot\gamma(t),\dot\gamma(t)\bigr)\,dt.$$

Riemannian geometry: All minima of E are also minima of L, but L is a bigger set since paths that are minima of L can be arbitrarily re-parameterized (without changing their length), while minima of E cannot.
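The distinction between the length functional L and the energy functional E can be seen numerically. The following sketch (my own illustration under a simple chord discretization, not code from the article) evaluates both functionals for a quarter great circle on the unit sphere, first at constant speed and then under the reparameterization t ↦ t²: the length is unchanged, while the energy grows, with the constant-speed parameterization giving the smallest energy.

```python
import numpy as np

def sphere_path(t):
    """A quarter great circle on the unit sphere, parameterized by t in [0, 1]."""
    angle = 0.5 * np.pi * t
    return np.stack([np.cos(angle), np.sin(angle), np.zeros_like(angle)], axis=-1)

def length_and_energy(warp, n=20_000):
    """Discretized L and E of the path composed with a reparameterization t -> warp(t)."""
    t = np.linspace(0.0, 1.0, n)
    pts = sphere_path(warp(t))
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # chord lengths ~ ds
    dt = np.diff(t)
    length = seg.sum()                                   # L = integral of |dγ/dt| dt
    energy = 0.5 * np.sum(seg**2 / dt)                   # E = ½ integral of |dγ/dt|² dt
    return length, energy

print(length_and_energy(lambda t: t))      # constant speed: L ~ pi/2, E = L²/2
print(length_and_energy(lambda t: t**2))   # same image and length, larger energy
```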
Riemannian geometry: For a piecewise $C^1$ curve (more generally, a $W^{1,2}$ curve), the Cauchy–Schwarz inequality gives

$$L(\gamma)^2 \le 2(b-a)\,E(\gamma),$$

with equality if and only if $g(\gamma',\gamma')$ is equal to a constant a.e.; the path should be travelled at constant speed. It happens that minimizers of $E(\gamma)$ also minimize $L(\gamma)$, because they turn out to be affinely parameterized, and the inequality is an equality. The usefulness of this approach is that the problem of seeking minimizers of E is a more robust variational problem. Indeed, E is a "convex function" of $\gamma$, so that within each isotopy class of "reasonable functions", one ought to expect existence, uniqueness, and regularity of minimizers. In contrast, "minimizers" of the functional $L(\gamma)$ are generally not very regular, because arbitrary reparameterizations are allowed. Riemannian geometry: The Euler–Lagrange equations of motion for the functional E are then given in local coordinates by

$$\frac{d^2 x^\lambda}{dt^2} + \Gamma^{\lambda}_{\mu\nu}\,\frac{dx^\mu}{dt}\frac{dx^\nu}{dt} = 0,$$

where $\Gamma^{\lambda}_{\mu\nu}$ are the Christoffel symbols of the metric. This is the geodesic equation, discussed below. Calculus of variations Techniques of the classical calculus of variations can be applied to examine the energy functional E. The first variation of energy is defined in local coordinates by

$$\delta E(\gamma)(\varphi) = \left.\frac{\partial}{\partial t}\right|_{t=0} E(\gamma + t\varphi).$$

The critical points of the first variation are precisely the geodesics. The second variation is defined by

$$\delta^2 E(\gamma)(\varphi,\psi) = \left.\frac{\partial^2}{\partial s\,\partial t}\right|_{s=t=0} E(\gamma + t\varphi + s\psi).$$

In an appropriate sense, zeros of the second variation along a geodesic γ arise along Jacobi fields. Jacobi fields are thus regarded as variations through geodesics. By applying variational techniques from classical mechanics, one can also regard geodesics as Hamiltonian flows. They are solutions of the associated Hamilton equations, with the (pseudo-)Riemannian metric taken as the Hamiltonian. Affine geodesics: A geodesic on a smooth manifold M with an affine connection ∇ is defined as a curve γ(t) such that parallel transport along the curve preserves the tangent vector to the curve, so that

$$\nabla_{\dot\gamma}\dot\gamma = 0 \qquad (1)$$

at each point along the curve, where $\dot\gamma$ is the derivative with respect to t. More precisely, in order to define the covariant derivative of $\dot\gamma$ it is necessary first to extend $\dot\gamma$ to a continuously differentiable vector field in an open set. However, the resulting value of (1) is independent of the choice of extension. Affine geodesics: Using local coordinates on M, we can write the geodesic equation (using the summation convention) as

$$\frac{d^2 \gamma^\lambda}{dt^2} + \Gamma^{\lambda}_{\mu\nu}\,\frac{d\gamma^\mu}{dt}\frac{d\gamma^\nu}{dt} = 0,$$

where $\gamma^\mu = x^\mu \circ \gamma(t)$ are the coordinates of the curve γ(t) and $\Gamma^{\lambda}_{\mu\nu}$ are the Christoffel symbols of the connection ∇. This is an ordinary differential equation for the coordinates. It has a unique solution, given an initial position and an initial velocity. Therefore, from the point of view of classical mechanics, geodesics can be thought of as trajectories of free particles in a manifold. Indeed, the equation $\nabla_{\dot\gamma}\dot\gamma = 0$ means that the acceleration vector of the curve has no components in the direction of the surface (and therefore it is perpendicular to the tangent plane of the surface at each point of the curve). So, the motion is completely determined by the bending of the surface. This is also the idea of general relativity, where particles move on geodesics and the bending is caused by gravity.
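Because the geodesic equation is just a second-order ODE in the coordinates, it can be integrated numerically. Below is a minimal sketch (my own discretization choices, not from the article) that integrates the equation on the unit sphere in (θ, φ) coordinates, where the nonzero Christoffel symbols are Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ. Starting on the equator with purely eastward velocity, the solution stays on the equator, which is a great circle and hence a geodesic.

```python
import numpy as np

def geodesic_rhs(state):
    """Right-hand side of the geodesic equation on the unit sphere,
    state = (theta, phi, dtheta, dphi), using the standard Christoffel symbols."""
    theta, phi, dtheta, dphi = state
    ddtheta = np.sin(theta) * np.cos(theta) * dphi**2               # -Γ^θ_φφ (dφ)²
    ddphi = -2.0 * (np.cos(theta) / np.sin(theta)) * dtheta * dphi  # -2 Γ^φ_θφ dθ dφ
    return np.array([dtheta, dphi, ddtheta, ddphi])

def integrate(state, dt=1e-3, steps=3000):
    """Classical fourth-order Runge-Kutta integration of the geodesic ODE."""
    for _ in range(steps):
        k1 = geodesic_rhs(state)
        k2 = geodesic_rhs(state + 0.5 * dt * k1)
        k3 = geodesic_rhs(state + 0.5 * dt * k2)
        k4 = geodesic_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Start on the equator (theta = pi/2) moving east at unit speed.
final = integrate(np.array([np.pi / 2, 0.0, 0.0, 1.0]))
print(final)  # theta stays at pi/2: the equator is a geodesic
```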
Affine geodesics: Existence and uniqueness The local existence and uniqueness theorem for geodesics states that geodesics on a smooth manifold with an affine connection exist and are unique. More precisely: For any point p in M and for any vector V in $T_pM$ (the tangent space to M at p) there exists a unique geodesic γ : I → M such that $\gamma(0) = p$ and $\dot\gamma(0) = V$, where I is a maximal open interval in R containing 0. The proof of this theorem follows from the theory of ordinary differential equations, by noticing that the geodesic equation is a second-order ODE. Existence and uniqueness then follow from the Picard–Lindelöf theorem for the solutions of ODEs with prescribed initial conditions. γ depends smoothly on both p and V. Affine geodesics: In general, I may not be all of R, as for example for an open disc in R². Any γ extends to all of R if and only if M is geodesically complete. Affine geodesics: Geodesic flow Geodesic flow is a local R-action on the tangent bundle TM of a manifold M defined in the following way:

$$G^t(V) = \dot\gamma_V(t),$$

where t ∈ R, V ∈ TM and $\gamma_V$ denotes the geodesic with initial data $\dot\gamma_V(0) = V$. Thus, $G^t(V) = \exp(tV)$ is the exponential map of the vector tV. A closed orbit of the geodesic flow corresponds to a closed geodesic on M. Affine geodesics: On a (pseudo-)Riemannian manifold, the geodesic flow is identified with a Hamiltonian flow on the cotangent bundle. The Hamiltonian is then given by the inverse of the (pseudo-)Riemannian metric, evaluated against the canonical one-form. In particular the flow preserves the (pseudo-)Riemannian metric g, i.e.

$$g\bigl(G^t(V),G^t(V)\bigr) = g(V,V).$$

In particular, when V is a unit vector, $\gamma_V$ remains unit speed throughout, so the geodesic flow is tangent to the unit tangent bundle. Liouville's theorem implies invariance of a kinematic measure on the unit tangent bundle. Geodesic spray The geodesic flow defines a family of curves in the tangent bundle. The derivatives of these curves define a vector field on the total space of the tangent bundle, known as the geodesic spray. More precisely, an affine connection gives rise to a splitting of the double tangent bundle TTM into horizontal and vertical bundles:

$$TTM = H \oplus V.$$

The geodesic spray is the unique horizontal vector field W satisfying

$$\pi_* W_v = v$$

at each point v ∈ TM; here $\pi_*$ : TTM → TM denotes the pushforward (differential) along the projection π : TM → M associated to the tangent bundle. Affine geodesics: More generally, the same construction allows one to construct a vector field for any Ehresmann connection on the tangent bundle. For the resulting vector field to be a spray (on the deleted tangent bundle TM \ {0}) it is enough that the connection be equivariant under positive rescalings: it need not be linear. That is, (cf. Ehresmann connection#Vector bundles and covariant derivatives) it is enough that the horizontal distribution satisfy

$$H_{\lambda X} = d(S_\lambda)_X\, H_X$$

for every X ∈ TM \ {0} and λ > 0. Here $d(S_\lambda)$ is the pushforward along the scalar homothety $S_\lambda : X \mapsto \lambda X$. Affine geodesics: A particular case of a non-linear connection arising in this manner is that associated to a Finsler manifold. Affine geodesics: Affine and projective geodesics Equation (1) is invariant under affine reparameterizations; that is, parameterizations of the form $t \mapsto at + b$, where a and b are constant real numbers. Thus, apart from specifying a certain class of embedded curves, the geodesic equation also determines a preferred class of parameterizations on each of the curves. Accordingly, solutions of (1) are called geodesics with affine parameter. Affine geodesics: An affine connection is determined by its family of affinely parameterized geodesics, up to torsion (Spivak 1999, Chapter 6, Addendum I).
The torsion itself does not, in fact, affect the family of geodesics, since the geodesic equation depends only on the symmetric part of the connection. More precisely, if $\nabla, \bar\nabla$ are two connections such that the difference tensor

$$D(X,Y) = \nabla_X Y - \bar\nabla_X Y$$

is skew-symmetric, then ∇ and $\bar\nabla$ have the same geodesics, with the same affine parameterizations. Furthermore, there is a unique connection having the same geodesics as ∇, but with vanishing torsion. Affine geodesics: Geodesics without a particular parameterization are described by a projective connection. Computational methods: Efficient solvers for the minimal geodesic problem on surfaces, posed as eikonal equations, have been proposed by Kimmel and others. Ribbon Test: A ribbon "test" is a way of finding a geodesic on a physical surface. The idea is to fit a strip of paper around a straight line (a ribbon) onto a curved surface as closely as possible without stretching or squishing the ribbon (without changing its internal geometry). For example, when a ribbon is wound as a ring around a cone, the ribbon does not lie flat on the cone's surface but sticks out, so that circle is not a geodesic on the cone. If the ribbon is adjusted so that all its parts touch the cone's surface, it gives an approximation to a geodesic. Ribbon Test: Mathematically, the ribbon test can be formulated as finding a mapping $f : N(\ell) \to S$ of a neighborhood N of a line ℓ in a plane into a surface S so that the mapping f "doesn't change the distances around ℓ by much"; that is, at the distance ε from ℓ we have

$$g_N - f^*(g_S) = O(\varepsilon^2),$$

where $g_N$ and $g_S$ are the metrics on N and S, respectively. Applications: Geodesics serve as the basis to calculate:
- geodesic airframes; see geodesic airframe or geodetic airframe
- geodesic structures – for example geodesic domes
- horizontal distances on or near Earth; see Earth geodesics
- mapping images on surfaces, for rendering; see UV mapping
- particle motion in molecular dynamics (MD) computer simulations
- robot motion planning (e.g., when painting car parts); see Shortest path problem
**The Fresh Vegetable Mystery** The Fresh Vegetable Mystery: The Fresh Vegetable Mystery is a 1939 Color Classics cartoon. It was released on September 29, 1939. Plot: It’s late at night in the kitchen, and all the vegetables are asleep when a cloaked figure arrives and kidnaps Mother Carrot and her kids.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Straw hat** Straw hat: A straw hat is a wide-brimmed hat woven out of straw or straw-like synthetic materials. Straw hats are a type of sun hat designed to shade the head and face from direct sunlight, but are also used in fashion as a decorative element or a uniform. Materials: Commonly used fibers are: wheat straw (Milan straw, Tuscan, Livorno); rye straw, used for the traditional bryl straw hats popular among the peasants of Belarus, southwestern Russia and Ukraine; toquilla straw, a flexible and durable fiber which is often made into hats, known as Panama hats, in Ecuador. Materials: buntal/parabuntal straw, from unopened palm leaves or stems of the buri palm; baku straw, 1x1 woven, made from the young stalks of the talipot palm from Malabar and Ceylon; braided hemp; raffia; shantung straw, made from high-performance paper which is rolled into a yarn to imitate straw (historically it was made of buntal); toyo straw, cellophane-coated washi; bangora straw, made from a lower grade of washi; paper braids, made from paper strands and viscose from different plants (Swiss Paglina straw, silk paper, rice paper); sisal/parasisal (2x2 woven sisal); seagrass (xian); visca straw, an artificial straw made by spinning viscose in a flat filament capable of being braided, woven, or knitted and used especially for women's hats; rush straw, a thick, stiff straw used to manufacture inexpensive casual sun hats, made from rush grass (Juncus effusus, Juncus polycephalus), from the bulrush-type sedge grasses (Schoenoplectus lacustris, Cyperus papyrus, Typha domingensis (syn. Typha angustata); bulrush or cattail) and other types such as seashore rushgrass (Sporobolus virginicus) or reed; jute; abacá (for sinamay hats); ramie; artificial, synthetic straw; PP straw, made from polypropylene, polyethylene or from blends of acrylic, PP, PE, polyester, ramie and paper; other straw fibers, mostly used in Asian conical hats, made from various palms (Corypha, rattan, Trachycarpus, Phoenix), grasses, cane, bamboo and rice straw (kasa); and chip straw, from white pine, Lombardy poplar, or English willow, which has historically been used but has become less common. Manufacture: There are several styles of straw hats, but all of them are woven using some form of plant fibre. Many of these hats are formed in a similar way to felt hats; they are softened by steam or by submersion in hot water, and then formed by hand or over a hat block. Finer and more expensive straw hats have a tighter and more consistent weave. Since it takes much more time to weave a larger hat than a smaller one, larger hats are more expensive. History: Straw hats have been worn in Africa and Asia since the Middle Ages during the summer months, and have changed little between medieval times and today. They are worn, mostly by men, by all classes. Many can be seen in the calendar miniatures of the Très Riches Heures du Duc de Berry. The mokorotlo, a local design of straw hat, is the national symbol of the Basotho people and of the nation of Lesotho; it is displayed on Lesotho license plates. President Theodore Roosevelt posed for a series of photos at the Panama Canal construction site in 1906. He was portrayed as a strong, rugged leader dressed crisply in light-colored suits and stylish straw fedoras. This helped popularize the straw "Panama hat". Types of straw hats: Boater hat – a formal straw hat with a flat top and brim.
Buntal hat – a semi-formal or informal traditional straw hat from the Philippines made from buntal fiber
Conical hat – the distinctive hat worn primarily by farmers in Southeast Asia
Panama hat – a fine and expensive hat made in Ecuador
Sombrero vueltiao – a straw hat with intricate patterns made from caña flecha by the Zenú people of Colombia
Salakot – a traditional conical or pointed rounded hat from the Philippines, usually made from rattan; it can also be made from gourds, tortoiseshell, or other fibers and weaving materials
Arts: Artwork produced during the Middle Ages shows, among the more fashionably dressed, possibly the most spectacular straw hats ever seen on men in the West, notably those worn in the Arnolfini Portrait of 1434 by Jan van Eyck (tall, stained black) and by Saint George in a painting by Pisanello of around the same date. In the middle of the 18th century, it was fashionable for rich ladies to dress as country girls with a low-crowned and wide-brimmed straw hat to complete the look.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Transpiration stream** Transpiration stream: In plants, the transpiration stream is the uninterrupted stream of water and solutes which is taken up by the roots and transported via the xylem to the leaves, where it evaporates into the air/apoplast interface of the substomatal cavity. It is driven by capillary action and in some plants by root pressure. The main driving factor is the difference in water potential between the soil and the substomatal cavity caused by transpiration. Transpiration: Transpiration can be regulated through stomatal closure or opening. It allows plants to efficiently transport water up to their highest organs, to regulate the temperature of stem and leaves, and it allows for upstream signaling such as the dispersal of an apoplastic alkalinization during local oxidative stress. Summary of water movement: soil → roots and root hairs → xylem → leaves → stomata → air. Osmosis: The water passes from the soil to the root by osmosis. The long and thin shape of root hairs maximizes surface area so that more water can enter. There is greater water potential in the soil than in the cytoplasm of the root hair cells. As the surface membrane of the root hair cell is semi-permeable, osmosis can take place, and water passes from the soil to the root hairs. Osmosis: The next stage in the transpiration stream is water passing into the xylem vessels. The water either goes through the cortex cells (between the root cells and the xylem vessels) or it bypasses them, going through their cell walls. Osmosis: After this, the water moves up the xylem vessels to the leaves through diffusion and a pressure change between the top and bottom of the vessel. Diffusion takes place because there is a water potential gradient between water in the xylem vessel and the leaf (as water is transpiring out of the leaf). This means that water diffuses up towards the leaf. There is also a pressure change between the top and bottom of the xylem vessels, due to water loss from the leaves. This reduces the pressure of water at the top of the vessels, so water moves up the vessels. Osmosis: The last stage in the transpiration stream is the water moving into the leaves, and then the actual transpiration. First, the water moves into the mesophyll cells from the top of the xylem vessels. Then the water evaporates out of the cells into the spaces between the cells in the leaf. After this, the water leaves the leaf (and the whole plant) by diffusion through stomata.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PelicanHPC** PelicanHPC: PelicanHPC is an operating system based on Debian Live, which provides a rapid means of setting up a high-performance computer cluster. PelicanHPC was formerly known as ParallelKNOPPIX.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hazardous energy** Hazardous energy: Hazardous energy in occupational safety and health is any source of energy (including electrical, mechanical, thermal, chemical, hydraulic, and pneumatic sources of energy) that "can be hazardous to workers", such as from discharge of stored energy. Failure to control the unexpected release of energy can lead to machine-related injuries or fatalities. The risk from these sources of energy can be controlled in a number of ways, including access control procedures such as lockout-tagout.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2-Pentyne** 2-Pentyne: 2-Pentyne, an organic compound, is an internal alkyne. It is an isomer of 1-pentyne, a terminal alkyne. Synthesis: 2-Pentyne can be synthesized by the rearrangement of 1-pentyne in a solution of ethanolic potassium hydroxide or in NaNH2/NH3.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Txtng: the Gr8 Db8** Txtng: the Gr8 Db8: Txtng: The Gr8 Db8 is a 2008 book about text messaging, by linguist David Crystal. Txtng: the Gr8 Db8: The title is a logogram which stands for Texting: The Great Debate. In his book, Crystal examines the use of text messaging and its effect on language and literacy. Based on research and experimental results, he disagrees with the popular view that the use of abbreviations and slang, such as those in SMS language, will lead to low literacy and bad spelling among children. Main points: Crystal put forward the following points in his book: Typically, less than 10% of the words are abbreviated in text messages. Abbreviating is not a new language; instead, it has been present for many decades. Children and adults both use SMS language, the latter being more likely to do so. Students do not habitually use abbreviations in their homework or examinations. Sending text messages is not a cause of bad spelling because people need to know how to spell before they can send a text message. Sending text messages improves people's literacy, as it provides more opportunity for people to engage with their language through reading and writing. The last point seems to be especially useful for school-age children.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hash filter** Hash filter: A hash filter creates a hash sum from data, typically e-mail, and compares the sum against other previously defined sums. Depending on the purpose of the filter, the data can then be included or excluded in a function based on whether it matches an existing sum. Hash filter: For example, when a message is received by an e-mail server with a hash filter, the contents of the e-mail are converted into a hash sum. If this sum corresponds to the hash sum of another e-mail which has been categorized as spam, the received e-mail is prevented from being delivered. Spammers attempt to evade this by adding random strings to the text content and random pixel changes ("confetti") to image content (see image spam).
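As a concrete illustration of the matching step, here is a minimal Python sketch (not any particular mail server's implementation; the `known_spam_hashes` set and the normalization step are hypothetical simplifications): incoming message bodies are hashed with SHA-256 and compared against a set of hashes of messages previously categorized as spam. Real systems use fuzzier, locality-sensitive signatures precisely because, as noted above, spammers add random strings to defeat exact hashing.

```python
import hashlib

def message_hash(body: str) -> str:
    """Hash a normalized message body; the normalization here (lowercasing
    and collapsing whitespace) is an illustrative choice."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hashes of messages previously categorized as spam (illustrative only).
known_spam_hashes = {
    message_hash("Buy cheap watches now! Limited offer!!!"),
}

def is_spam(body: str) -> bool:
    """Exact-match hash filter: reject iff the hash is already known."""
    return message_hash(body) in known_spam_hashes

print(is_spam("Buy  cheap watches NOW! Limited offer!!!"))  # True
print(is_spam("Hi, lunch tomorrow?"))                       # False
```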
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Snaith's theorem** Snaith's theorem: In algebraic topology, a branch of mathematics, Snaith's theorem, introduced by Victor Snaith, identifies the complex K-theory spectrum with the localization of the suspension spectrum of CP∞ away from the Bott element.
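In the notation usually used to state it (this formula is the standard statement from the literature, not given explicitly in the text above), the theorem reads:

```latex
KU \simeq \Sigma^\infty(\mathbb{CP}^\infty)_+\,[\beta^{-1}]
```

where β denotes the Bott element in the homotopy of the suspension spectrum and the brackets denote localization, i.e. formally inverting β.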
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HD 107148 b** HD 107148 b: HD 107148 b is a jovian exoplanet with a minimum mass of only 70% that of Saturn. Unlike Saturn, it orbits much closer to its star. The planetary orbit was significantly refined in 2021.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Shark cage diving** Shark cage diving: Shark cage diving is underwater diving or snorkeling where the observer remains inside a protective cage designed to prevent sharks from making contact with the divers. Shark cage diving is used for scientific observation, underwater cinematography, and as a tourist activity. Sharks may be attracted to the vicinity of the cage by the use of bait, in a procedure known as chumming, which has attracted some controversy as it is claimed to potentially alter the natural behaviour of sharks in the vicinity of swimmers. Shark cage diving: Similar cages are also used purely as a protective measure for divers working in waters where potentially dangerous shark species are known to be present. In this application the shark-proof cage may be used as a refuge, or as a diving stage during descent and ascent, particularly during staged decompression where the divers may be vulnerable while constrained to a specific depth in mid-water for several minutes. In other applications a mobile cage may be carried by the diver while harvesting organisms such as abalone. Shark-proof cage: A shark-proof cage is a metal cage used by an underwater diver to observe dangerous types of sharks up close or to harvest seafood in relative safety. Of the various species of shark, those most commonly observed from a cage are the great white shark and the bull shark, which are both known to be aggressive at times. Shark-proof cages are built to withstand being rammed and bitten by sharks, and are intended to protect the user from injury. Cages can provide a visual and tactile deterrent to sharks. Cage-diving allows people to closely monitor sharks for scientific, commercial or recreational purposes, and sometimes interact with them. Shark-proof cage: The shark-proof cage is also used in the controversial exercise of shark baiting, where tourists are lowered in a cage while the tour guides bait the water to attract sharks or stimulate certain behavior. Shark-proof cage: Early development Shark cages were first developed by Jacques Cousteau. Cousteau used a shark cage when producing The Silent World, released in 1956. Australian recreational diver and shark-attack survivor Rodney Fox helped develop a shark-observation cage in the late 1960s. Fox's first design was inspired by a visit to a zoo he made after surviving a near-fatal shark attack in 1963. Film-maker Peter Gimbel was involved in the design of a shark-proof cage for the production of Blue Water, White Death (1971). Shark-proof cage: Self-propelled version In 1974, after several reported shark attacks on working divers in Australia, Australian abalone diver James "Jim" Ellis developed a self-propelled cage to protect abalone divers from sharks, which he patented in 1975. Mounting the motor in gimbals in the front of the cage makes the vehicle highly maneuverable. Movement and speed are controlled with a "joystick". Shark-proof cage: The design allowed abalone divers to work without becoming vulnerable to attack. Due to the propulsion system, the divers would exert themselves less and, therefore, might be able to collect molluscs for longer periods. The patent abstract details a self-propelled cage with at least one access opening and a mounting frame that carries both an air motor and a propeller. Buoyant material is attached to the frame so that the cage may be made neutrally buoyant. The diver can control the supply of warm water piped to the diver's suit in cold environments.
Propulsion was later changed to hydraulics supplied from the boat through the diver's umbilical. The patent expired in 1996, although Ellis continued to make improvements. A 1975 version of the cage was acquired by the Australian National Maritime Museum in 1988. Shark-cage diving tourism: During the 2000s, shark-cage diving became more popular as a tourist activity. In South Australia, tourists are taken by boat from Port Lincoln to the Neptune Islands in the southern Spencer Gulf, where they view great white sharks either from a cage tethered to the back of a boat near the surface or from a cage lowered to the seabed. The government considers the activity to be one of South Australia's “iconic nature-based tourism experiences”, which supports 70 jobs and contributes over $11 million to the state's economy. Shark-cage diving tourism: Shark baiting Shark baiting is a procedure where the water is baited by chumming with fish or other materials attractive to sharks. Tourists remain inside a shark-proof cage while tour guides bait the waters to attract sharks for the tourists to observe. There have been claims that this could lead to potentially aggressive behavior by the shark population. Some conservation groups, scuba divers, and underwater photographers consider the practice undesirable and potentially dangerous. Shark-cage diving tourism: In South Australia, abalone divers have been attacked by great white sharks, and divers believe that great white shark cage diving tourism has altered shark behavior, including making the sharks more inclined to approach boats. At least one abalone diver, Peter Stephenson, has called for a ban on shark-cage diving and described it as a “major workplace safety issue”. The government of South Australia claims that there is "no scientific evidence" to suggest that the general public is at elevated risk of shark attack as a result of shark cage tourism. Opponents of the cage-diving industry believe that the repeated chumming used to lure sharks to tourist cages may alter sharks' behaviour. One such opponent is shark-attack survivor Craig Bovim, who was reportedly bitten by a ragged-tooth shark (a species not targeted by cage diving operators in the region, and not generally considered a hazard to divers) while snorkeling for lobster at Scarborough, on the other side of the Cape Peninsula from Seal Island, where the shark cage boats operate. Bovim's opponents, such as marine environmentalist Wilfred Chivell, contend that there is no demonstrated correlation between shark-baiting and shark attacks against humans. However, there is evidence that the baiting of sharks for tourism does alter the patterns of movement of great white sharks. Shark cage diving incidents: In 2005, a British tourist, Mark Currie, was exposed to a high risk of injury or death when a 5-metre (18 ft) great white shark bit through the bars of a shark cage being used during a recreational shark dive off the coast of South Africa. The shark circled the boat several times, and began to attack the side of the cage, then started to crush and bite through. The captain attempted to free the cage by trying to distract the shark. He did this by hitting it on the head with an iron pole. The shark bit into one of the buoys at the top of the cage, which caused the cage to begin sinking.
Currie quickly swam out of the top of the cage and was pulled to safety by the boat's captain, who fended off the shark with blows to its head. In 2007, a commercial shark cage was destroyed off the coast of Guadalupe Island after a 4.6-metre (15 ft) great white shark became entangled and tore the cage apart in a frantic effort to free itself. Tourists captured video of the incident, which quickly spread on the Internet. On April 13, 2008, there was a fatal capsize of a shark cage diving boat off the coast near Gansbaai, South Africa, in which three tourists died – two Americans and one Norwegian. The cage diving vessel was anchored on the Geldsteen reef near Dyer Island (South Africa) and engaged in shark cage diving viewing activities when it was capsized by a large wave, estimated at about 6 m, in a swell with an estimated significant wave height of about 4 m. The boat's engines were off while it was anchored over the reef, and the skipper was at the back of the vessel handling the bait line attracting sharks towards the cage. A videographer was in the cage at the time of the capsize, filming underwater footage for the DVDs sold as a tour souvenir. During the capsize, most of the 19 people on board were thrown overboard. There was no head count of survivors, and three tourists were trapped underneath the capsized hull for more than forty minutes before it was realized that passengers were still unaccounted for. It was claimed by one of the defendants during the 2014 Western Cape High Court trial in Cape Town, South Africa that the wave that capsized the vessel was a freak wave, but statistically it was probably simply a larger wave in a 4-metre swell that picked up over the reef. Because the vessel was anchored over a reef with its engines off, a larger-than-normal wave was more likely to break near the boat, and a capsize was more difficult to avoid. There were other breaking waves shown in photos and videos which showed the increasing danger. The trial judge ruled that the skipper and boat owners were guilty of negligence. Being anchored over a reef in a large sea in dangerous conditions was ruled to be the primary reason for the capsize and the deaths of the three tourists. Shark cage diving was incidental, but it was the reason for the vessel to have remained anchored over a shallow reef, with engines off, despite increasing swells and breaking waves. If sharks had not been present, and if the videographer had not still been in the cage filming, the vessel would probably have already left. Another incident reported in 2016 occurred off the coast of Mexico, when a shark that lunged for the bait broke into the cage and the diver was able to escape uninjured.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**AMD FireMV** AMD FireMV: AMD FireMV, formerly ATI FireMV, is a brand name for graphics cards marketed as multi-display 2D video cards, with 3D capabilities the same as the low-end Radeon graphics products. It competes directly with Matrox professional video cards. FireMV cards are aimed at corporate environments that require several displays attached to a single computer. FireMV cards are offered with either dual GPUs, for a total of four display outputs via a VHDCI connector, or a single GPU, for a total of two display outputs via a DMS-59 connector. AMD FireMV: FireMV cards are available for PCI and PCI Express interfaces. Although these are marketed by ATI as mainly 2D cards, the FireMV 2250 cards support OpenGL 2.0, since they are based on the RV516 GPU found in the Radeon X1000 series released in 2005. The FireMV 2260 is the first video card to carry dual DisplayPort outputs in the workstation 2D graphics market, sporting DirectX 10.1 support.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**KCNC2** KCNC2: Potassium voltage-gated channel subfamily C member 2 is a protein that in humans is encoded by the KCNC2 gene. The protein encoded by this gene is a voltage-gated potassium channel subunit (Kv3.2). Expression pattern: Kv3.1 and Kv3.2 channels are prominently expressed in neurons that fire at high frequency. Kv3.2 channels are prominently expressed in the brain (fast-spiking GABAergic interneurons of the neocortex, hippocampus, and caudate nucleus; terminal fields of thalamocortical projections) and in retinal ganglion cells. Physiological role: Kv3.1/Kv3.2 conductance is necessary for, and kinetically optimized for, high-frequency action potential generation. Kv3.2 subunits, sometimes in heteromeric complexes with Kv3.1, are important for the high-frequency firing of fast-spiking GABAergic interneurons and retinal ganglion cells, and for GABA release via regulation of action potential duration in presynaptic terminals. Pharmacological properties: Kv3.2 currents in heterologous systems are highly sensitive to external tetraethylammonium (TEA) or 4-aminopyridine (4-AP) (IC50 values are 0.1 mM for both drugs). This can be useful in identifying native channels. Transcript variants: There are four transcript variants of the Kv3.2 gene: Kv3.2a, Kv3.2b, Kv3.2c, and Kv3.2d. Kv3.2 isoforms differ only in their C-terminal sequence.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**OpenBinder** OpenBinder: OpenBinder is a system for inter-process communication. It was developed at Be Inc. and then Palm, Inc., and was the basis for the Binder framework now used in the Android operating system developed by Google. OpenBinder allows processes to present interfaces which may be called by other threads. Each process maintains a thread pool which may be used to service such requests. OpenBinder takes care of reference counting, recursion back into the original thread, and the inter-process communication itself. On the Linux version of OpenBinder, the communication is achieved using ioctls on a given file descriptor, communicating with a kernel driver. OpenBinder: The kernel-side component of the Linux version of OpenBinder was merged into the Linux kernel mainline in kernel version 3.19, which was released on February 8, 2015.
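To make the ioctl-based transport concrete, here is a minimal sketch that queries the Binder kernel driver's protocol version from Python. It assumes a Linux kernel built with the binder driver (as merged in 3.19 or later), an accessible /dev/binder device node, and the kernel's `BINDER_VERSION = _IOWR('b', 9, struct binder_version)` definition, where the struct holds a single 32-bit integer; everything else here is an illustrative choice.

```python
import fcntl
import os
import struct

def _IOWR(type_char: str, nr: int, size: int) -> int:
    # Linux asm-generic ioctl encoding: dir(2) | size(14) | type(8) | nr(8),
    # with dir = 3 for read/write (_IOWR).
    return (3 << 30) | (size << 16) | (ord(type_char) << 8) | nr

# BINDER_VERSION = _IOWR('b', 9, struct binder_version { __s32 protocol_version; })
BINDER_VERSION = _IOWR('b', 9, 4)

fd = os.open("/dev/binder", os.O_RDWR)
try:
    buf = bytearray(4)                    # room for the returned __s32
    fcntl.ioctl(fd, BINDER_VERSION, buf)  # driver fills in its protocol version
    print("binder protocol version:", struct.unpack("i", buf)[0])
finally:
    os.close(fd)
```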
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Büchner–Curtius–Schlotterbeck reaction** Büchner–Curtius–Schlotterbeck reaction: The Buchner–Curtius–Schlotterbeck reaction is the reaction of aldehydes or ketones with aliphatic diazoalkanes to form homologated ketones. It was first described by Eduard Buchner and Theodor Curtius in 1885 and later by Fritz Schlotterbeck in 1907. Two German chemists also preceded Schlotterbeck in discovery of the reaction, Hans von Pechmann in 1895 and Viktor Meyer in 1905. The reaction has since been extended to the synthesis of β-keto esters from the condensation between aldehydes and diazo esters. The general reaction scheme is as follows: The reaction yields two possible carbonyl compounds (I and II) along with an epoxide (III). The ratio of the products is determined by the reactant used and the reaction conditions. Reaction mechanism: The general mechanism is shown below. The resonance arrow (1) shows a resonance contributor of the diazo compound with a lone pair of electrons on the carbon adjacent to the nitrogen. The diazo compound then performs a nucleophilic attack on the carbonyl-containing compound (nucleophilic addition), producing a tetrahedral intermediate (2). This intermediate decomposes by the evolution of nitrogen gas, forming the tertiary carbocation intermediate (3). Reaction mechanism: The reaction is then completed either by the reformation of the carbonyl through a 1,2-rearrangement or by the formation of the epoxide. There are two possible carbonyl products: one formed by migration of R1 (4) and the other by migration of R2 (5). The relative yield of each possible carbonyl is determined by the migratory preferences of the R-groups. The epoxide product is formed by an intramolecular addition reaction in which a lone pair from the oxygen attacks the carbocation (6). Reaction mechanism: This reaction is exothermic due to the stability of nitrogen gas and the carbonyl-containing compounds. This specific mechanism is supported by several observations. First, kinetic studies of reactions between diazomethane and various ketones have shown that the overall reaction follows second-order kinetics. Additionally, the reactivity of two series of ketones is in the orders Cl3CCOCH3 > CH3COCH3 > C6H5COCH3 and cyclohexanone > cyclopentanone > cycloheptanone > cyclooctanone. These orders of reactivity are the same as those observed for reactions that are well established as proceeding through nucleophilic attack on a carbonyl group. Scope and variation: The reaction was originally carried out in diethyl ether and routinely generated high yields due to the inherent irreversibility of the reaction caused by the formation of nitrogen gas. Though these reactions can be carried out at room temperature, the rate does increase at higher temperatures. Typically, the reaction is carried out at less than refluxing temperatures. The optimal reaction temperature is determined by the specific diazoalkane used. Reactions involving diazomethanes with alkyl or aryl substituents are exothermic at or below room temperature. Reactions involving diazomethanes with acyl or aroyl substituents require higher temperatures. The reaction has since been modified to proceed in the presence of Lewis acids and common organic solvents such as THF and dichloromethane. Reactions generally run at room temperature for about an hour, and the yield ranges from 70% to 80% based on the choice of Lewis acid and solvent.
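The second-order kinetics cited above correspond, in the usual reading, to a rate law that is first order in each reactant, consistent with a rate-determining nucleophilic attack of the diazo carbon on the carbonyl (the formula below is this standard interpretation, not an equation stated explicitly in the text):

```latex
\text{rate} = k\,[\text{ketone}]\,[\text{diazoalkane}]
```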
Scope and variation: Steric effects Steric effects of the alkyl substituents on the carbonyl reactant have been shown to affect both the rates and yields of the Büchner–Curtius–Schlotterbeck reaction. Table 1 shows the percent yield of the ketone and epoxide products as well as the relative rates of reaction for the reactions between several methyl alkyl ketones and diazomethane. The observed decrease in rate and increase in epoxide yield as the size of the alkyl group becomes larger indicates a steric effect. Electronic effects Ketones and aldehydes with electron-withdrawing substituents react more readily with diazoalkanes than those bearing electron-donating substituents (Table 2). In addition to accelerating the reaction, electron-withdrawing substituents typically increase the amount of epoxide produced (Table 2). Scope and variation: The effects of substituents on the diazoalkanes are reversed relative to the carbonyl reactants: electron-withdrawing substituents decrease the rate of reaction while electron-donating substituents accelerate it. For example, diazomethane is significantly more reactive than ethyl diazoacetate, though less reactive than its higher alkyl homologs (e.g. diazoethane). Reaction conditions may also affect the yields of the carbonyl product and the epoxide product. In the reactions of o-nitrobenzaldehyde, p-nitrobenzaldehyde, and phenylacetaldehyde with diazomethane, the ratio of epoxide to carbonyl is increased by the inclusion of methanol in the reaction mixture. The opposite influence has also been observed in the reaction of piperonal with diazomethane, which exhibits increased carbonyl yield in the presence of methanol. Scope and variation: Migratory preferences The ratio of the two possible carbonyl products (I and II) obtained is determined by the relative migratory abilities of the carbonyl substituents (R1 and R2). In general, the R-group most capable of stabilizing the partial positive charge formed during the rearrangement migrates preferentially. A prominent exception to this general rule is hydride shifting. The migratory preferences of the carbonyl R-groups can be heavily influenced by solvent and diazoalkane choice. For example, methanol has been shown to promote aryl migration. As shown below, if the reaction of piperonal (IV) with diazomethane is carried out in the absence of methanol, the ketone obtained through a hydride shift is the major product (V). If methanol is the solvent, an aryl shift occurs to form the aldehyde (VI), which cannot be isolated as it continues to react to form the ketone (VII) and the epoxide (VIII) products. Scope and variation: The diazoalkane employed can also determine the relative yields of products by influencing migratory preferences, as conveyed by the reactions of o-nitropiperonal with diazomethane and diazoethane. In the reaction between o-nitropiperonal (IX) and diazomethane, an aryl shift leads to production of the epoxide (X) in a 9-to-1 excess over the ketone product (XI). When diazoethane is substituted for diazomethane, a hydride shift produces the ketone (XII), the only isolable product. Examples in the literature: The Büchner–Curtius–Schlotterbeck reaction can be used to facilitate one-carbon ring expansions when the substrate ketone is cyclic. For instance, the reaction of cyclopentanone with diazomethane forms cyclohexanone (shown below).
The Büchner ring expansion reactions utilizing diazoalkanes have proven to be synthetically useful, as they can be used to form not only 5- and 6-membered rings, but also less stable 7- and 8-membered rings. Examples in the literature: An acyl-diazomethane can react with an aldehyde to form a β-diketone in the presence of a transition metal catalyst (SnCl2 in the example shown below). β-Diketones are common biological products, and as such, their synthesis is relevant to biochemical research. Furthermore, the acidic β-hydrogens of β-diketones are useful for broader synthetic purposes, as they can be removed by common bases. Examples in the literature: Acyl-diazomethanes can also add to esters to form β-keto esters, which are important for fatty acid synthesis. As mentioned above, the acidic β-hydrogens also have productive functionality. The Büchner–Curtius–Schlotterbeck reaction can also be used to insert a methylene bridge between a carbonyl carbon and a halogen of an acyl halide. This reaction allows conservation of the carbonyl and halide functionalities. It is possible to isolate nitrogen-containing compounds using the Büchner–Curtius–Schlotterbeck reaction. For example, an acyl-diazomethane can react with an aldehyde in the presence of a DBU catalyst to form isolable α-diazo-β-hydroxy esters (shown below).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**P2PRIV** P2PRIV: Peer-to-peer direct and anonymous distribution overlay (P2PRIV) was a conceptual anonymous peer-to-peer overlay network introduced at Warsaw University of Technology in 2007. P2PRIV hides the initiator of a communication by parallelizing network nodes that receive or send user data independently. This concept is contrary to the topologies of other anonymity networks, which employ serial communication as a common basis and hide the initiator in a cascade of network nodes forwarding user data consecutively. The main advantage claimed for P2PRIV is the possibility of high-speed anonymous data transfer, since data can be sent directly and independently in the distributed network. P2PRIV: The last update on the web site was in 2009. The official web site has been down since 2013. Description: P2PRIV separates anonymization from user data transport. Before sending data, signalization tokens are forwarded over classical anonymous cascades to form so-called cloning cascades (CC). Well-known anonymity techniques (i.e. mix networks and Crowds' random walk algorithm) are utilized to hide the initiator of the CC. Then, after a random interval of time, each CC member (i.e. the group of clones and the true initiator) communicates directly and independently with destination nodes. Finding the true initiator among the network nodes is hard even for an adversary able to control a significant part of the overlay network. Weaknesses: P2PRIV requires a fully distributed network with distributed information content to assure highly anonymous access to its resources. The utility of P2PRIV in client-server-like services (e.g., the World Wide Web) or in hybrid P2P topologies is problematic in its current form.
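The cascade-then-parallel idea can be illustrated with a toy simulation (purely hypothetical code, not the original P2PRIV implementation; the forwarding probability and node names are illustrative): a signalization token performs a Crowds-style random walk, each visited node becomes a clone, and then all clones plus the true initiator contact the destination directly after independent random delays, so the destination cannot tell which sender originated the request.

```python
import random

FORWARD_P = 0.7  # probability of extending the cloning cascade (illustrative)

def build_cloning_cascade(initiator, peers):
    """Crowds-style random walk: each hop recruits one more clone."""
    clones = [initiator]
    current = initiator
    while random.random() < FORWARD_P:
        current = random.choice([p for p in peers if p != current])
        clones.append(current)
    return clones

def run_transfer(initiator, peers, destination_log):
    # Every cascade member, clone or true initiator, sends directly and
    # independently after a random delay; the arrival order reveals nothing
    # about who originated the request.
    clones = build_cloning_cascade(initiator, peers)
    arrivals = sorted((random.random(), node) for node in set(clones))
    for _delay, node in arrivals:
        destination_log.append(node)

log = []
run_transfer("node-3", [f"node-{i}" for i in range(10)], log)
print("senders seen by destination:", log)  # initiator hidden among clones
```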
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ventral tegmental area** Ventral tegmental area: The ventral tegmental area (VTA) (tegmentum is Latin for covering), also known as the ventral tegmental area of Tsai, or simply ventral tegmentum, is a group of neurons located close to the midline on the floor of the midbrain. The VTA is the origin of the dopaminergic cell bodies of the mesocorticolimbic dopamine system and other dopamine pathways; it is widely implicated in the drug and natural reward circuitry of the brain. The VTA plays an important role in a number of processes, including reward cognition (motivational salience, associative learning, and positively-valenced emotions) and orgasm, among others, as well as several psychiatric disorders. Neurons in the VTA project to numerous areas of the brain, ranging from the prefrontal cortex to the caudal brainstem and several regions in between. Structure: Neurobiologists have often had great difficulty distinguishing the VTA in humans and other primate brains from the substantia nigra (SN) and surrounding nuclei. Originally, the ventral tegmental area was designated as a ‘nucleus’, but over time ‘area’ became the more appropriate term used because of the heterogeneous cytoarchitectonic features of the region and the lack of clear borders that separate it from adjacent regions. Because of the selective limbic-related afferents to the VTA, the cells of the VTA are given the designation A10 to differentiate them from surrounding cells. Structure: Location The ventral tegmental area is in the midbrain between several other major areas, some of which are described here. The mammillary bodies and the posterior hypothalamus, both included in the diencephalon, extend rostrally from the VTA. The red nucleus is situated laterally and oculomotor fibers are situated ventromedially to the VTA. The pons and the hindbrain lie caudally to the VTA. Finally, the substantia nigra is located laterally to the VTA. Structure: Subdivisions In 1987, Oades identified four primary nuclei in the VTA A10 group of cells: the nucleus paranigralis (Npn), the nucleus parabrachialis pigmentosus (Npbp), the nucleus interfascicularis (Nif), and the nucleus linearis (Nln) caudalis and rostralis. Presently, scientists divide the VTA up into four similar zones that are called the paranigral nucleus (PN), the parabrachial pigmented area (PBP), the parafasciculus retroflexus area (PFR), and the rostromedial tegmental nucleus (RMTg), which approximately adhere to the previous divisions. Some definitions of the VTA also include the midline nuclei (i.e. the interfascicular nucleus, rostral linear nucleus, and central linear nucleus). Structure: The PN and PBP are rich in dopaminergic cells, whereas the other two regions have low densities of these neurons. The PFR and RMTg contain a low density of tyrosine hydroxylase (TH)-positive cell bodies that are small in size and lightly stain; the RMTg is composed mostly of GABAergic cells. On the other hand, the PN and PBP consist mainly of medium to large sized TH-positive cell bodies that stain moderately. Structure: Inputs Almost all areas receiving projections from the VTA project back to it. 
Thus, the ventral tegmental area is reciprocally connected with a wide range of structures throughout the brain, suggesting that it has a role in the control of function in the phylogenetically newer and highly developed neocortex, as well as that of the phylogenetically older limbic areas. The VTA is a heterogeneous region consisting of a variety of neurons that are characterized by different neurochemical and neurophysiological properties. Therefore, glutamatergic and GABAergic inputs are neither exclusively inhibitory nor exclusively excitatory. The VTA receives glutamatergic afferents from the prefrontal cortex, pedunculopontine tegmental nucleus (PPTg), laterodorsal tegmental nucleus, subthalamic nucleus, bed nucleus of the stria terminalis, superior colliculus, periaqueductal gray, lateral habenula, dorsal raphe nucleus, and lateral hypothalamic and preoptic areas. These glutamatergic afferents play a key role in regulating VTA cell firing. When the glutamatergic neurons are activated, the firing rates of the dopamine neurons increase in the VTA and induce burst firing. Studies have shown that these glutamatergic actions in the VTA are critical to the effects of drugs of abuse. In contrast, the tail of the ventral tegmental area (tVTA, a.k.a. the RMTg) projects to the VTA with GABAergic afferents, functioning as a "master brake" for the VTA dopamine pathways. GABAergic inputs to the VTA also include the nucleus accumbens, ventral pallidum, dorsal raphe nucleus, lateral hypothalamus, periaqueductal gray, bed nucleus of the stria terminalis, and rostromedial tegmental nucleus (RMTg). The lateral habenula can also exert an inhibitory effect on dopaminergic neurons in the VTA by exciting RMTg GABAergic neurons, which is thought to play an important role in reward prediction errors. Subpallidal afferents into the VTA are mainly GABAergic and, thus, inhibitory. There is a substantial pathway from the subpallidal area to the VTA. When this pathway is disinhibited, an increase in dopamine release in the mesolimbic pathway amplifies locomotor activity. There are also cholinergic inputs to the VTA, although these are less studied than the glutamatergic and GABAergic inputs. Optogenetic studies in mice looking at cholinergic inputs from the pedunculopontine tegmental nucleus (PPTg) and the laterodorsal tegmental nucleus demonstrate that these circuits reinforce the discharge properties of VTA neurons, suggesting a modulatory influence on reward circuits. Structure: Outputs The two primary efferent fiber projections of the VTA are the mesocortical and the mesolimbic pathways, which correspond to the prefrontal cortex and nucleus accumbens, respectively. In addition, experiments in rodents have identified a mesohabenular pathway consisting of VTA neurons that do not release dopamine but rather glutamate and GABA. Other VTA projections, which utilize dopamine as their primary neurotransmitter, are listed below. Structure: Ventral tegmental area (VTA) projections:
VTA → Amygdala
VTA → Entorhinal cortex
VTA → Cingulate gyrus
VTA → Hippocampus
VTA → Nucleus accumbens
VTA → Olfactory bulb
VTA → Prefrontal cortex
Development Because they develop from common embryonic tissue and partly overlap in their projection fields, dopaminergic cell groups lack clear anatomical boundaries. During the development of the mammalian brain, both substantia nigra (SN) and VTA neurons initially project to the dorsolateral and ventromedial striatum.
However, at birth the SN dopaminergic neurons project exclusively into the dorsolateral striatum, and the VTA dopaminergic neurons project solely into the ventromedial striatum. This pruning of connections occurs through the elimination of the unnecessary collaterals. Function: As stated above, the VTA, in particular its dopamine neurons, serves several functions in the reward system, motivation, cognition, and drug addiction, and may be the focus of several psychiatric disorders. The VTA has also been shown to process various types of emotion output from the amygdala, where it may also play a role in avoidance and fear-conditioning. Electrophysiological recordings have demonstrated that VTA neurons respond to novel stimuli, unexpected rewards, and reward-predictive sensory cues. The firing pattern of these cells is consistent with the encoding of a reward expectancy error. Function: In 2006, MRI studies by Helen Fisher and her research team found and documented various emotional states relating to intense love that correlated with activity in the VTA, which may help explain the obsessive behaviors of rejected partners, since this circuitry is shared with the reward system. Nest sharing behavior is associated with increased V1aR expression in the VTA of newly paired zebra finches. However, V1aR expression was not related to female-directed song rates, which may indicate a selective role of vasotocin in the VTA on pair maintenance versus courtship behavior. Function: Presence of gap junctions The VTA has been shown to have a large network of GABAergic neurons that are interconnected via gap junctions. This network allows for electrical conduction, which is considerably faster than the chemical conduction of signals between synapses, though less spatially precise. Function: Neural composition The VTA, like the substantia nigra, is populated with melanin-pigmented dopaminergic neurons. Recent studies have suggested that dopaminergic neurons comprise 50-60% of all neurons in the VTA, which is contrary to previous evidence that noted 77% of neurons within the VTA to be dopaminergic. In addition, there is a sizable population of GABAergic neurons in the rostromedial tegmental nucleus (RMTg), a functionally distinct brain structure. These GABAergic neurons regulate the firing of their dopaminergic counterparts, which send projections throughout the brain to, but not limited to, the following regions: the prefrontal cortex, the nucleus accumbens, and the locus coeruleus. The VTA also contains a small percentage of excitatory glutamatergic neurons. Function: Limbic loop The “limbic loop” is very similar to the direct pathway motor loop of the basal ganglia. In both systems, there are major excitatory inputs from the cortex to the striatum (accumbens nucleus), the midbrain projects neuromodulatory dopamine neurons to the striatum, the striatum makes internuclear connections to the pallidum, and the pallidum has outputs to the thalamus, which projects to the cortex, thus completing the loop. The limbic loop is distinguished from the motor loop by the source and nature of the cortical input, the division of the striatum and pallidum that process the input, the source of the dopaminergic neurons from the midbrain, and the thalamic target of the pallidal output. Function: The limbic loop controls cognitive and affective functioning and the motor loop controls movement. Function: CA3 loop Linking context to reward is important for reward seeking.
In 2011, a group of researchers documented a CA3–VTA connection that uses the lateral septum as an intermediary. They used a pseudorabies virus (PRV) as a transsynaptic tracer, and injected it into the VTA. They found that unilateral injection into the VTA resulted in bilateral PRV labeling in CA3 beginning 48 hours after injection. Lesions of the caudodorsal lateral septum (cd-LS) before VTA PRV injection resulted in significantly fewer PRV-labeled neurons in CA3. Theta wave stimulation of CA3 resulted in increased firing rates for dopamine cells in the VTA, and decreased firing rates for GABA neurons in the VTA. The identity of VTA neurons was confirmed by Neurobiotin labeling of the recorded neuron, followed by histological staining for tyrosine hydroxylase (TH). Temporary inactivation of CA3 via GABA agonists prevented context-induced reinstatement of lever pressing for intravenous cocaine. The authors propose a functional circuit loop in which activation of glutamatergic cells in CA3 causes activation of GABAergic cells in cd-LS, which inhibits GABA interneurons in the VTA, releasing the dopamine cells from tonic inhibition and leading to an increased firing rate for the dopamine cells. Function: Reward system The dopamine reward circuitry in the human brain involves two projection systems from the ventral midbrain to the nucleus accumbens–olfactory tubercle complex. First, the posteromedial VTA and central linear raphe cells selectively project to the ventromedial striatum, which includes the medial olfactory tubercle and the medial NAC shell. Second, the lateral VTA projects largely to the ventrolateral striatum, which includes the NAC core, the medial NAC shell, and the lateral olfactory tubercle. These pathways are called the meso-ventromedial and the meso-ventrolateral striatal dopamine systems, respectively. The medial projection system is important in the regulation of arousal characterized by affect and drive and plays a different role in goal-directed behavior than the lateral projection system. Unlike the lateral part, the medial one is activated not by rewarding but by noxious stimuli. Therefore, the NAC shell and the posterior VTA are the primary areas involved in the reward system. Clinical significance: Disorders The dopaminergic neurons of the substantia nigra and the ventral tegmental area of the midbrain project to the dorsolateral caudate/putamen and to the ventromedially located nucleus accumbens, respectively, establishing the mesostriatal and the mesolimbic pathways. The close proximity of these two pathways causes them to be grouped together under dopaminergic projections. Several disorders result from the disruption of these two pathways: schizophrenia, Parkinson's disease, and attention deficit hyperactivity disorder (ADHD). Current research is examining the subtle differences between the neurons that are involved in these conditions and trying to find a way to selectively treat a specific dopamine projection. Clinical significance: Drug addiction The nucleus accumbens and the ventral tegmental area are the primary sites where addictive drugs act. The following are commonly considered to be addictive: cocaine, alcohol, opioids, nicotine, cannabinoids, amphetamine, and their analogs. These drugs alter the neuromodulatory influence of dopamine on the processing of reinforcement signals by prolonging the action of dopamine in the nucleus accumbens or by stimulating the activation of neurons there and also in the VTA.
The most common drugs of abuse stimulate the release of dopamine, which creates both their rewarding and psychomotor effects. Compulsive drug-taking behaviors are a result of the permanent functional changes in the mesolimbic dopamine system arising from repetitive dopamine stimulation. Molecular and cellular adaptations are responsible for a sensitized dopamine activity in the VTA and along the mesolimbic dopamine projection in response to drug abuse. In the VTA of addicted individuals, the activity of the dopamine-synthesizing enzyme tyrosine hydroxylase increases, as does the ability of these neurons to respond to excitatory inputs. The latter effect is secondary to increases in the activity of the transcription factor CREB and the up-regulation of GluR1, an important subunit of AMPA receptors for glutamate. These alterations in neural processing could account for the waning influence of adaptive emotional signals in the operation of decision-making faculties as drug-seeking and drug-taking behaviors become habitual and compulsive. Clinical significance: Experiments in rats have shown that they learn to press a lever for the administration of stimulant drugs into the posterior VTA more readily than into the anterior VTA. Other studies have shown that microinjections of dopaminergic drugs into the nucleus accumbens shell increase locomotor activity and exploratory behaviors, conditioned approach responses, and anticipatory sexual behaviors. The withdrawal phenomenon occurs because the deficit in reward functioning initiates a distress cycle wherein the drugs become necessary to restore the normal homeostatic state. Recent research has shown that even after the final stages of withdrawal have passed, drug-seeking behavior can be restored by exposure to the drug or drug-related stimuli. Comparative anatomy and evolution: All studies since 1964 have emphasized the impressive general similarity between the VTA of all mammals from rodents to humans. These studies have focused their efforts on rats, rabbits, dogs, cats, opossums, non-human primates, and humans. There have been slight differences noted, such as changes in the dorsal extent of the A10 cells. To be specific, the dorsal peak of A10 cells is more extensive in primates when compared to other mammals. Furthermore, the number of dopaminergic cells in the VTA increases with phylogenetic progression; for instance, the VTA of the mouse contains approximately 25,000 neurons, while the VTA of a 33-year-old man contains around 450,000 cell bodies.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DCTP diphosphatase** DCTP diphosphatase: In enzymology, a dCTP diphosphatase (EC 3.6.1.12) is an enzyme that catalyzes the chemical reaction dCTP + H2O ⇌ dCMP + diphosphate. Thus, the two substrates of this enzyme are dCTP and H2O, whereas its two products are dCMP and diphosphate. This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is dCTP nucleotidohydrolase. Other names in common use include deoxycytidine-triphosphatase, dCTPase, dCTP pyrophosphatase, deoxycytidine triphosphatase, deoxy-CTPase, and dCTPase. This enzyme participates in pyrimidine metabolism.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dress boot** Dress boot: Dress boots are short leather boots typically worn by men. Built like dress shoes, but with uppers covering the ankle, versions of these boots are used as an alternative to shoes in bad weather or rough outdoor situations, and as a traditional option for daytime formalwear. History: Until the Victorian period, long riding boots were common and dress boots were for more formal occasions, so patent leather was often used, as well as ordinary black calf. Gradually, these boots became more common for formal evening use, so that by the Edwardian era, patent boots were generally worn when there would be no dancing. Patent leather use during the day declined, and formal morning clothes soon incorporated either shoes or plain calf dress boots. In the evening, the wearing of both boots and court slippers similarly declined as shoes came to dominate, though slippers are still worn with white tie. History: As the use of riding boots declined with the advent of cars (automobiles), another use for these short boots developed as tougher alternatives to shoes for harsh weather or terrain, where hobnails would originally have been worn with sturdier versions of town boots. Now that shoes are so much more common than boots, and formal clothing is worn so infrequently, this is the most frequent use of dress boots. Form: Formal boots With formalwear, dress boots are now only worn during the day, and are usually black Oxford boots of Balmoral cut (in the English, not American, sense, i.e. the only seam descending to the welt is that of the toe-cap). The upper is usually softer, made of canvas or suede. Alternatively, the same Balmoral vamp is used with a button-fastened upper instead of the more modern system of shoe laces. The tolerance for fit with this kind of boot is less, so they are more expensive, though more traditional, and a button-hook on the end of a shoehorn may be needed to do up the buttons. The traditional use of buff for the uppers is rare. The boots sometimes have toe-caps, which can feature a brogued seam, a reference to their original informal use for business. Form: Outdoor boots Although constructed similarly to formal boots, casual or informal boots for harsher conditions sacrifice elegance for practicality, using double leather soles, or even rubber ones. The uppers are made of the same strong leather as the vamp, and tougher materials like cordovan may be used. Walking boots in this style can have open lacing, and broguing is used as on country shoes. The main colour is brown. Boots of this design were issued to the British and US armies in the 19th century: the US Civil War-era Jeff Davis boots had hobnails, and the British ammunition boots remained in service until World War II. More recently, boots of this style have seen a revival as part of the Neo-Edwardian fashion popular among British indie kids.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DDoS mitigation** DDoS mitigation: DDoS mitigation is a set of network management techniques and/or tools for resisting or mitigating the impact of distributed denial-of-service (DDoS) attacks on networks attached to the Internet, by protecting the target and relay networks. DDoS attacks are a constant threat to businesses and organizations, degrading service performance or shutting down a website entirely. DDoS mitigation works by identifying baseline conditions for network traffic by analyzing "traffic patterns", to allow threat detection and alerting. DDoS mitigation also requires identifying incoming traffic, to separate human traffic from human-like bots and hijacked web browsers. This process involves comparing signatures and examining different attributes of the traffic, including IP addresses, cookie variations, HTTP headers, and JavaScript fingerprints. DDoS mitigation: After the detection is made, the next process is filtering. Filtering can be done through anti-DDoS technology like connection tracking, IP reputation lists, deep packet inspection, blacklisting/whitelisting, or rate limiting. One technique is to pass network traffic addressed to a potential target network through high-capacity networks with "traffic scrubbing" filters. Manual DDoS mitigation is no longer recommended, due to the size of attacks often outstripping the human resources available in many firms/organizations. Other methods to prevent DDoS attacks can be implemented, such as on-premises and/or cloud-based solution providers. On-premises mitigation technology (most commonly a hardware device) is often placed in front of the network. This limits the maximum bandwidth available to what is provided by the Internet service provider. Common methods involve hybrid solutions, combining on-premises filtering with cloud-based solutions. Methods of attack: DDoS attacks are executed against websites and networks of selected victims. A number of vendors offer "DDoS-resistant" hosting services, mostly based on techniques similar to content delivery networks. Distribution avoids a single point of congestion and prevents the DDoS attack from concentrating on a single target. One technique of DDoS attacks is to use misconfigured third-party networks, allowing the amplification of spoofed UDP packets. Proper configuration of network equipment, enabling ingress filtering and egress filtering, as documented in BCP 38 and RFC 6959, prevents amplification and spoofing, thus reducing the number of relay networks available to attackers. Methods of mitigation (a rate-limiting sketch follows this list):
use of Client Puzzle Protocol or guided tour puzzle protocol
use of a content delivery network
blacklisting of IP addresses
use of an intrusion detection system and firewall
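Of the filtering techniques named above, rate limiting is the simplest to show in code. Below is a minimal token-bucket sketch in Python (illustrative only; production mitigation runs such logic in dedicated hardware or kernel paths, keyed by far richer signatures than the bare source IP used here, and the rate and burst values are arbitrary choices):

```python
import time
from collections import defaultdict

RATE = 10.0   # tokens replenished per second, per source (illustrative)
BURST = 20.0  # bucket capacity: maximum short-term burst allowed

buckets = defaultdict(lambda: {"tokens": BURST, "stamp": time.monotonic()})

def allow(source_ip: str) -> bool:
    """Token bucket per source: refill by elapsed time, spend 1 per packet."""
    b = buckets[source_ip]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["stamp"]) * RATE)
    b["stamp"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # over the limit: drop or challenge the packet

# A flood from one source is throttled once its burst allowance is spent.
dropped = sum(not allow("203.0.113.7") for _ in range(100))
print(f"dropped {dropped} of 100 back-to-back packets")
```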
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dept. of Computer Science, University of Delhi** Dept. of Computer Science, University of Delhi: The Department of Computer Science, University of Delhi is a department in the University of Delhi under the Faculty of Mathematical Science, set up in 1981. Courses: The department started the three-year Master of Computer Applications (MCA) program in 1982, which was among the first such programs in India. The department started the M.Sc. Computer Science course in 2004. Besides these, the department has research interests in Computer Science and offers a Doctor of Philosophy (Ph.D.) program. The university conducts a postgraduate Diploma in Computer Applications (PGDCA) program through its constituent colleges. Emphasis is laid not only on theoretical concepts but also on practical experience and industry interaction. A few classroom projects: MCA Apart from classroom teaching, students take up case studies, presentations and small projects. Following are some projects/assignments taken up by the students: Implementation of a Unix shell Implementation of a chat server. Simulation of machine language code and implementation of an assembler. Simulation of the basic file system on Linux. Simulation of Sliding Window Protocols Go-Back-N Protocol Selective Repeat Protocol. Simulation of a two-pass assembler. Projects designed, documented and coded using SDLC Share tracker system. Computerized health care system. Websites on tourism, online FIR, online book store, online examination, social networking, online shipping management system, digital library system. Research and implementation of cryptographic algorithms Design and implementation of a new approach for searching in encrypted data using Bloom filters. Analysis and implementation of security algorithms in Cloud Computing. Malware and keylogger design. Software and hardware implementation of a Smart Home System. Misuse, detection and prevention of advanced spamming techniques. Design and security analysis of chaotic encryption. Analysis of risks, techniques, and corporate usage of Web 2.0 technologies. Implementation of homomorphic encryption algorithms. Regional language encryption and translation. Implementation of elliptic curve cryptography. Design and implementation of self-synchronizing stream ciphers. M.Sc. Computer Science As part of the curriculum students give presentations, group projects and programming assignments. The following are some of the projects/assignments taken up by the students: Implementation of robot task assignment given resources using MATLAB. Jade programming for agent communication. Implementation of the DES encryption and decryption algorithm. Application of a genetic algorithm to the 8-queens problem. Implementation of the K-means, FP-Tree, BIRCH and DBSCAN algorithms using C++. Generating all strong association rules from a set of given frequent item sets of transactions. Implementation of a DBMS. Data preprocessing and KDD (Knowledge Discovery and Data mining) using WEKA and C4.5. Implementation of clustering techniques on the output of the fuzzy C-means algorithm as initial input using MATLAB. Simulation of a lexical analyzer and parser using C. Infrastructure: The students of the department are affiliated to two libraries. The Departmental Library is a reference library with over four thousand titles in the fields of Computer Science and IT and in related areas such as Electronics and Mathematics. The Central Science Library is one of the largest science libraries in India.
It was established in 1981, and has 220,000 volumes of books and periodicals. The website of CSL provides electronic subscriptions to 27,000 e-journals, including IEEE, ACM, and Springer journals and proceedings. Internet Connection: All the labs, offices and faculty rooms of the Department are connected to the internet through the university intranet. Internet connectivity is provided using 4 switches through the university intranet. A 24-port switch is used in the LAN, providing internet to all systems in the laboratory, classrooms, seminar room and committee room. Notable alumni: Kiran Sethi - VP, Deutsche Bank, USA Pradeep Mathur - VP, Capgemini, UK Gulshan Kumar - Director, Alcatel-Lucent, India Ranjan Dhar - Director, Silicon Graphics, India Manish Madan - VP, Perot Systems, TSI, India Sachin Wadhwa - Head Operations, Mastech InfoTrellis Inc, USA Kumaran Sasikanthan - Country Head, AllSight Software, India
**Torque tube** Torque tube: A torque tube system is a power transmission and braking technology that involves a stationary housing around the drive shaft, often used in automobiles with a front engine and rear drive. The torque tube consists of a large diameter stationary housing between the transmission and rear end that fully encloses a rotating tubular steel or small-diameter solid drive shaft (known colloquially in the U.S. as a "rope drive") that transmits the power of the engine to a regular or limited-slip differential. The purpose of a torque tube is to hold the rear end in place during acceleration and braking. Otherwise, the axle housing would suffer axle wrap, such that the front of the differential would lift up excessively during acceleration and sink down during braking. Its use is not as widespread in modern automobiles as is the Hotchkiss drive, which holds the rear end in place and prevents it from flipping up or down during acceleration and braking by anchoring the axle housings to the leaf springs using spring perches. Construction: The "torque" that is referred to in the name is not that of the driveshaft, along the axis of the car, but that applied by the wheels. The engineering problem that the torque tube solves is how to get the traction forces generated by the wheels to the car frame. The torque moving the wheels and axles in a forward direction is met with an "equal and opposite" reaction of the axle housing and differential, making the differential want to spin in a reverse direction, in the same way that a cyclist "pops a wheelie", lifting the bicycle in the air in the opposite direction from the turn of the wheel. The essential problem is how to keep the differential from rotating during acceleration and braking. The torque tube solves that problem by coupling the differential housing to the transmission housing, and therefore propels the car forward by pushing up on the engine/transmission and then through the engine mounts to the car frame, with the reverse happening during braking. In contrast, the Hotchkiss drive transmits the traction forces to the car frame by using suspension components such as leaf springs or trailing arms. A type of ball and socket joint called a "torque ball" is used at one end of the torque tube to allow relative motion between the axle and transmission due to suspension travel. Later American Motors Rambler models (1962 through 1966) used a flange and cushion mount in place of the ball and socket. Since the torque tube does not constrain the car's body to the axle in the lateral (side-to-side) direction, a panhard rod is often used for this purpose. The combination of the panhard rod and the torque tube allows the easy implementation of soft coil springs in the rear to give good ride quality, as in Buicks after 1937. Before 1937, Buicks used leaf springs, so the panhard rod was not used, though the torque tube allowed a cantilever spring suspension, which gives a softer ride than a center-mount axle on the leaf spring, as required by the Hotchkiss setup. Construction: In addition to transmitting traction forces, the torque tube is hollow and contains the rotating driveshaft. Inside the hollow torque ball is the universal joint of the driveshaft that allows relative motion between the two ends of the driveshaft. In most applications the drive shaft uses a single universal joint, which has the disadvantage that it causes speed fluctuations in the driveshaft when the shaft is not straight.
The Hotchkiss drive uses two universal joints, which has the effect of canceling the speed fluctuations and gives a constant speed even when the shaft is no longer straight. V8-powered models of the 1963-1966 AMC Rambler used a double-Cardan constant velocity joint to eliminate driveshaft fluctuations, though six-cylinder and earlier V8 models used only one standard universal joint. The torque tube design is typically heavier and securely ties the rear end together, thus providing for a rigid rear end and assuring good alignment under all conditions. However, because of the greater unsprung weight of the torque tube and radius rods, there may be a "little hopping around of the rear end when cornering fast or on washboard roads". Application: Examples of the torque tube include the American cars of the Ford brand up through 1948, including over 19,000,000 Model Ts. Ford used the less expensive transverse springs that could not take forward thrust. For many of those years, Chevrolet used the torque tube, while Buick used it starting in 1906 (in the model D). The torque tube also allowed Buick, beginning in 1938, to use coil springs for a softer ride than traditional leaf springs, which can use a Hotchkiss drive, but coil springs cannot. Buick's use of a torque tube and coil springs became a Buick "engineering trademark", until it was dropped with the 1961 model year full-sized models. The Nash 600 model adopted torque-tube drive in 1941 without an enclosed joint, but utilized a horizontal yoke at the front end of the torque tube that was "supported by rubber biscuits at each side." After the merger of Nash and Hudson in 1954, American Motors Corporation (AMC) continued to use the coil spring and torque tube rear suspension design on their large-sized cars (Rambler Classic and Ambassador) from the 1956 through the 1966 model year. The enclosed driveshaft made for more complicated gear swaps and hampered hot rodders. The discontinued torque-tube drive was replaced by a completely new open driveshaft and four-link axle-location system. The 1961 Pontiac Tempest was introduced as a new model, featuring an inline 4 coupled to a transaxle via a torque tube, giving it a perfect 50-50 front-rear weight balance. Application: The Peugeot 403 and 404 models used a torque tube. The Peugeot 504, and Peugeot 505 estate/station wagons, as well as most export-market sedans, also had torque tubes, while domestic and European-market sedan models had a transaxle and individual rear suspension. Application: The Chevrolet Chevette (1976-1988) and the similar Pontiac T-1000 used a torque tube and center bearing. This design was unlike any other Chevrolet model, intended "to isolate impacts to the rear wheels, cut down on road noise, and reduce engine vibration ... also allows a reduction in the height of the drive shaft and tunnel." The continuing limited production of the Avanti switched to a new chassis in 1986 that utilized a torque tube along with an independent rear suspension. The Mercedes SLS has a torque tube, but only to align the transaxle with the engine. The Chevrolet Corvette has used a torque tube since the 1996 introduction of the C5 version in the 1997 model year.
**Connective tissue nevus** Connective tissue nevus: A connective tissue nevus is a skin lesion which may be present at birth or appear within the first few years of life. It is elevated, soft to firm in consistency, varying in size from 0.5 to several centimeters in diameter, and may manifest as grouped, linear, or irregularly-distributed lesions.
**Heirloom Project** Heirloom Project: The Heirloom Project is a collection of traditional Unix utilities. Most of them are derived from original Unix source code released as open source by Caldera and Sun. The project has the following components: The Heirloom Toolchest: awk, cpio, grep, tar, pax, etc. The Heirloom Bourne Shell sh The Heirloom Documentation Tools: nroff, troff, dpost, etc. Heirloom Project: The Heirloom Development Tools: lex, yacc, m4, and SCCS Heirloom mailx The Heirloom Packaging Tools: pkgadd, pkgmk, etc. Although in general the intention of the project is to provide versions of Unix programs whose behavior mimics that of the classic versions, some improvements have been made. In particular, many of the Heirloom programs have been adapted to handle UTF-8 Unicode. Most programs have both a classic version and a POSIX-conformant variant.
**1,1'-Dilithioferrocene** 1,1'-Dilithioferrocene: 1,1'-Dilithioferrocene is the organoiron compound with the formula Fe(C5H4Li)2. It is exclusively generated and isolated as a solvate, using either ether or tertiary amine ligands bound to the lithium centers. Regardless of the solvate, dilithioferrocene is commonly used to prepare derivatives of ferrocene. Synthesis and reactions: Treatment of ferrocene with butyllithium gives 1,1'-dilithioferrocene, regardless of the stoichiometry (monolithioferrocene requires special conditions for its preparation). Typically the lithiation reaction is conducted in the presence of tetramethylethylenediamine (tmeda). The adduct [Fe(C5H4Li)2]3(tmeda)2 has been crystallized from such solutions. Recrystallization of this adduct from thf gives [Fe(C5H4Li)2]3(thf)6. 1,1'-Dilithioferrocene reacts with a variety of electrophiles to afford disubstituted derivatives of ferrocene. These electrophiles include S8 (to give 1,1'-ferrocenetrisulfide), chlorophosphines, and chlorosilanes. Synthesis and reactions: The diphosphine ligand 1,1'-bis(diphenylphosphino)ferrocene (dppf) is prepared by treating dilithioferrocene with chlorodiphenylphosphine. Monolithioferrocene: The reaction of ferrocene with one equivalent of butyllithium mainly affords dilithioferrocene. Monolithioferrocene can be obtained using tert-butyllithium.
**DK'Tronics** DK'Tronics: DK'Tronics Ltd (stylised as dk'tronics) was a British software and hardware company active during the 1980s. It primarily made peripherals for the ZX Spectrum and Amstrad CPC but also released video games for the ZX81, ZX Spectrum, Commodore 64, VIC-20, BBC Micro, Memotech MTX, MSX and Amstrad platforms. History: The company's first product was a 16 KB expansion pack for the ZX80, released just prior to the launch of the ZX81. At this time the company consisted only of David Heelas, working part-time out of his interest in electronics. When the ZX81 was launched, he went full-time manufacturing, packaging and posting from his home – and by the end of 1981 he had four employees. Hardware production expanded to include new keyboards for the ZX81 and for the newly released ZX Spectrum. History: By 1984, DK'Tronics had around 50 personnel, with Heelas as managing director. He was also looking into the possibility of becoming a computer manufacturer, specifically with a low-cost processor for the leisure market. It was planned to have an integrated screen and music keyboard. DK'Tronics published games between 1982 and 1985, and included works from programmers such as Don Priestley, who became a director of the company in 1983. David Heelas was known to be critical of the hype attempted by other software companies in the gaming press and took pride in the professional position adopted by DK'Tronics. Games: Meteoroids (1982) 3D Tanx (1982) Dictator (1983) Spawn of Evil (1983) Maziacs (1983) Jumbly (1983) Zig Zag (1984) Minder (1985) Popeye (1985) Benny Hill's Madcap Chase (1985)
**Research balloon** Research balloon: Research balloons are balloons that are used for scientific research. They are usually unmanned, filled with a lighter-than-air gas like helium, and fly at high altitudes. Meteorology, atmospheric research, astronomy, and military research may be conducted from a research balloon. Weather balloons are a type of research balloon. Research balloons usually study a single aspect of science, such as air pollution, air temperature, or wind currents, although sometimes several experiments or pieces of equipment are flown together. Research balloon: Other than weather balloons, few research balloons are launched each year. This is driven by the large cost of the balloon, of the instrument (which is usually custom made), and of the launch. Because of the altitude reached by most research balloons, the air is too thin and too cold for humans to survive; therefore most research balloons are unmanned and operated remotely. There have been some balloons equipped with pressurized cabins, beginning with professor Auguste Piccard in the 1930s. Research balloon: Research balloons are not only used on Earth. With the help of a research balloon, the upper atmosphere of Venus was examined by the Vega program.
**Dewcell** Dewcell: Dewcells (also dewcels or dew cells) are instruments used for determining the dew point. They consist of a small heating element surrounded by a solution of lithium chloride. As the LiCl absorbs moisture from the air, conduction across the heating element increases, the current through it rises, and the resulting heat evaporates moisture from the salt solution. At a certain temperature the amount of moisture absorbed by the salt solution equals the amount evaporated (equilibrium). Dewcell: Inside the dewcell core, a thermistor composite (or other temperature measurement device) changes electrical resistance with the temperature created by the heating. A front-end processor provides a reference voltage, measures the output of the network, and calculates the dew point.
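As a rough illustration of the temperature-measurement step, the following Python sketch converts a thermistor resistance reading to a temperature using the standard beta-parameter approximation. The reference values (R0 = 10 kΩ at 25 °C, B = 3950 K) are generic datasheet-style assumptions, not values from the article, and a real dewcell front end would additionally apply a device-specific calibration to map the measured LiCl equilibrium temperature to a dew point:

```python
import math

def thermistor_temperature(resistance_ohm: float,
                           r0_ohm: float = 10_000.0,   # resistance at T0 (assumed)
                           t0_kelvin: float = 298.15,  # 25 degrees C reference
                           beta: float = 3950.0) -> float:
    """Convert a thermistor resistance reading to temperature (degrees C)
    using the beta-parameter approximation:
        1/T = 1/T0 + (1/B) * ln(R/R0)
    """
    inv_t = 1.0 / t0_kelvin + math.log(resistance_ohm / r0_ohm) / beta
    return 1.0 / inv_t - 273.15

print(thermistor_temperature(10_000.0))  # 25.0 at the reference point
print(thermistor_temperature(5_000.0))   # warmer than the reference
```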
**Tracing (software)** Tracing (software): In software engineering, tracing involves a specialized use of logging to record information about a program's execution. This information is typically used by programmers for debugging purposes, and additionally, depending on the type and detail of information contained in a trace log, by experienced system administrators or technical-support personnel and by software monitoring tools to diagnose common problems with software. Tracing is a cross-cutting concern. Tracing (software): There is not always a clear distinction between tracing and other forms of logging, except that the term tracing is almost never applied to logging that is a functional requirement of a program (therefore excluding logging of data from an external source, such as data acquisition in a high-energy physics experiment, and write-ahead logging). Logs that record program usage (such as a server log) or operating-system events primarily of interest to a system administrator (see for example Event Viewer) fall into a terminological gray area. Tracing (software): This article deals primarily with tracing for debugging or diagnostic purposes. Event logging versus tracing: Difficulties in making a clear distinction between event logging and software tracing arise from the fact that some of the same technologies are used for both, and further because many of the criteria that distinguish between the two are continuous rather than discrete. The following distinctions, important but by no means precise or universal, are used by developers to select technologies for each purpose, and guide the separate development of new technologies in each area: Event logging: Event logging provides system administrators with information useful for diagnostics and auditing. The different classes of events that will be logged, as well as what details will appear in the event messages, are often considered early in the development cycle. Many event logging technologies allow or even require each class of event to be assigned a unique "code", which is used by the event logging software or a separate viewer (e.g., Event Viewer) to format and output a human-readable message. This facilitates localization and allows system administrators to more easily obtain information on problems that occur. Event logging: Because event logging is used to log high-level information (often failure information), performance of the logging implementation is often less important. A special concern, preventing duplicate events from being recorded "too often", is addressed through event throttling. Software tracing: Software tracing provides developers with information useful for debugging. This information is used both during development cycles and after the release of the software. Unlike event logging, software tracing usually does not have the concept of a "class" of event or an "event code". Other reasons why event-logging solutions based on event codes are inappropriate for software tracing include: Because software tracing is low-level, there are often many more types of messages that would need to be defined, many of which would only be used at one place in the code. The event-code paradigm introduces significant development overhead for these "one-shot" messages. Software tracing: The types of messages that are logged are often less stable through the development cycle than for event logging.
Because the tracing output is intended to be consumed by the developer, the messages don't need to be localized. Keeping tracing messages separate from other resources that need to be localized (such as event messages) is therefore important. There are messages that should never be seen. Software tracing: Tracing messages should be kept in the code, because they can add to the readability of the code. This is not always possible or feasible with event-logging solutions. Another important consideration for software tracing is performance. Because software tracing is low-level, the possible volume of trace messages is much higher. To address performance concerns, it often must be possible to turn off software tracing, either at compile-time or run-time. Software tracing: Other special concerns: In proprietary software, tracing data may include sensitive information about the product's source code. If tracing is enabled or disabled at run-time, many methods of tracing require the inclusion of a significant amount of additional data in the binary, which can indirectly hurt performance even when tracing is disabled. If tracing is enabled or disabled at compile-time, getting trace data for a problem on a customer machine depends on the customer being willing and able to install a special, tracing-enabled version of the software and then duplicating the problem. Many uses of tracing have very stringent robustness requirements. This applies both to the robustness of the trace output itself and to the requirement that the use-case being traced not be disrupted. In operating systems, tracing is sometimes useful in situations (such as booting) where some of the technologies used to provide event logging may not be available. In embedded software, tracing requires special techniques. Techniques: Software tracing: Tracing macros Output to debugger Aspect-oriented programming and related instrumentation techniques Windows software trace preprocessor (aka WPP) FreeBSD and SmartOS tracing with DTrace – traces the kernel and the userland Linux kernel tracing with ftrace Linux system-level and user-level tracing with kernel markers and LTTng Linux application tracing with UST – part of the same project as LTTng Linux C/C++ application tracing with cwrap Tracing with GNU Debugger's trace command Event logging: syslog (see article for specific implementations) Appropriate for both: Instruction set simulation
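As a concrete illustration of the ideas above (low-level entry/exit tracing that can be turned off at run time to avoid the performance cost), here is a minimal Python sketch; the MYAPP_TRACE environment variable and the function names are invented for the example:

```python
import functools
import logging
import os

# Run-time switch: tracing can be disabled entirely, addressing the
# performance concerns noted above (here via an environment variable).
TRACE_ENABLED = os.environ.get("MYAPP_TRACE") == "1"
logging.basicConfig(level=logging.DEBUG, format="%(message)s")

def trace(func):
    """Log entry, exit, arguments and return value of a function."""
    if not TRACE_ENABLED:
        return func  # zero overhead when tracing is off

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.debug("ENTER %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.debug("EXIT  %s -> %r", func.__name__, result)
        return result
    return wrapper

@trace
def divide(a, b):
    return a / b

divide(10, 2)  # traced only when MYAPP_TRACE=1 is set in the environment
```

Note how the trace calls live next to the code they describe, matching the point above that tracing messages can add to the readability of the code.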
**Fundamentals of Physics** Fundamentals of Physics: Fundamentals of Physics is a calculus-based physics textbook by David Halliday, Robert Resnick, and Jearl Walker. The textbook is currently in its 12th edition (published October 2021). The current version is a revised version of the original 1960 textbook Physics for Students of Science and Engineering by Halliday and Resnick, which was published in two parts (Part I containing Chapters 1-25 and covering mechanics and thermodynamics; Part II containing Chapters 26-48 and covering electromagnetism, optics, and introducing quantum physics). A 1966 revision of the first edition of Part I changed the title of the textbook to Physics. It is widely used in colleges as part of undergraduate physics courses, and has been well known to science and engineering students for decades as "the gold standard" of freshman-level physics texts. In 2002, the American Physical Society named the work the most outstanding introductory physics text of the 20th century. Fundamentals of Physics: The first edition of the book to bear the title Fundamentals of Physics, first published in 1970, was revised from the original text by Farrell Edwards and John J. Merrill. (Editions for sale outside the USA have the title Principles of Physics.) Walker has been the revising author since 1990. In the more recent editions of the textbook, beginning with the fifth edition, Walker has included "checkpoint" questions. These are conceptual ranking-task questions that help the student before embarking on numerical calculations. The textbook covers most of the basic topics in physics: Mechanics Waves Thermodynamics Electromagnetism Optics Special Relativity The extended edition also contains introductions to topics such as quantum mechanics, atomic theory, solid-state physics, nuclear physics and cosmology. A solutions manual and a study guide are also available. In popular culture: A copy of Fundamentals of Physics (3rd edition) appears on the bookshelf in Leonard and Sheldon's apartment in The Big Bang Theory.
**Hand acupuncture** Hand acupuncture: Koryo hand acupuncture is a modern system of acupuncture, created by Yu Tae-u in the 1970s, in which the hand represents the entire body and is needled or stimulated during treatment. Koryo hand acupuncture is popular among the general population as a form of self-medication in Korea, and has adherents in Japan and North America; it is also popular among overseas Koreans. Korean hand acupuncture is different from American hand reflexology, another form of alternative medicine. One of the main differences between the two forms of alternative therapies is that they each use a different hand microsystem, which is the idea that specific areas of the hand correspond to specific areas of the body. Korean hand acupuncturists believe the entire body can be mapped on each hand, whereas their Western counterparts believe each hand represents only one side of the body.
**LayerWalker** LayerWalker: LayerWalker Technology, Inc. was a fabless integrated circuit design company that announced a network storage system on a chip (SoC). Its products targeted digital home, small business and consumer electronics markets. In 2007, LayerWalker introduced the miniSAN product, which provided ATA over Ethernet (AoE) server functions and management capabilities. Client software and drivers for Windows and Linux operating systems were offered. LayerWalker had offices in Taipei. It was founded in 2005 and had a web site through 2012.
**Feces** Feces: Feces (or faeces; singular: faex) are the solid or semi-solid remains of food that was not digested in the small intestine, and has been broken down by bacteria in the large intestine. Feces contain a relatively small amount of metabolic waste products such as bacterially altered bilirubin, and dead epithelial cells from the lining of the gut. Feces are discharged through the anus or cloaca during defecation. Feces: Feces can be used as fertilizer or soil conditioner in agriculture. They can also be burned as fuel or dried and used for construction. Some medicinal uses have been found. In the case of human feces, fecal transplants or fecal bacteriotherapy are in use. Urine and feces together are called excreta. Characteristics: The distinctive odor of feces is due to skatole and thiols (sulfur-containing compounds), as well as amines and carboxylic acids. Skatole is produced from tryptophan via indoleacetic acid; decarboxylation of the latter gives skatole. The perceived bad odor of feces has been hypothesized to be a deterrent for humans, as consuming or touching it may result in sickness or infection. Characteristics: Physiology Feces are discharged through the anus or cloaca during defecation. This process requires pressures that may reach 100 millimetres of mercury (3.9 inHg) (13.3 kPa) in humans and 450 millimetres of mercury (18 inHg) (60 kPa) in penguins. The forces required to expel the feces are generated through muscular contractions and a build-up of gases inside the gut, prompting the sphincter to relieve the pressure and release the feces. Ecology: After an animal has digested eaten material, the remains of that material are discharged from its body as waste. Although it is lower in energy than the food from which it is derived, feces may retain a large amount of energy, often 50% of that of the original food. This means that of all food eaten, a significant amount of energy remains for the decomposers of ecosystems. Ecology: Many organisms feed on feces, from bacteria to fungi to insects such as dung beetles, which can sense odors from long distances. Some may specialize in feces, while others may eat other foods. Feces serve not only as a basic food, but also as a supplement to the usual diet of some animals. This process is known as coprophagia, and occurs in various animal species such as young elephants eating the feces of their mothers to gain essential gut flora, or by other animals such as dogs, rabbits, and monkeys. Ecology: Feces and urine, which reflect ultraviolet light, are important to raptors such as kestrels, who can see the near ultraviolet and thus find their prey by their middens and territorial markers. Seeds also may be found in feces. Animals who eat fruit are known as frugivores. An advantage for a plant in having fruit is that animals will eat the fruit and unknowingly disperse the seed in doing so. This mode of seed dispersal is highly successful, as seeds dispersed around the base of a plant are unlikely to succeed and often are subject to heavy predation. Provided the seed can withstand the pathway through the digestive system, it is not only likely to be far away from the parent plant, but is even provided with its own fertilizer. Ecology: Organisms that subsist on dead organic matter or detritus are known as detritivores, and play an important role in ecosystems by recycling organic matter back into a simpler form that plants and other autotrophs may absorb once again. This cycling of matter is known as the biogeochemical cycle.
To maintain nutrients in soil it is therefore important that feces return to the area from which they came; this is not always the case in human society, where food may be transported from rural areas to urban populations and the feces then disposed of into a river or sea. Human feces: Depending on the individual and the circumstances, human beings may defecate several times a day, every day, or once every two or three days. Extensive hardening of the feces that interrupts this routine for several days or more is called constipation. Human feces: The appearance of human fecal matter varies according to diet and health. Normally it is semisolid, with a mucus coating. A combination of bile and bilirubin, which comes from dead red blood cells, gives feces the typical brown color. After the meconium, the first stool expelled, a newborn's feces contains only bile, which gives it a yellow-green color. Breastfed babies expel soft, pale yellowish, and not quite malodorous matter; but once the baby begins to eat, and the body starts expelling bilirubin from dead red blood cells, its matter acquires the familiar brown color. At different times in their life, human beings will expel feces of different colors and textures. A stool that passes rapidly through the intestines will look greenish; lack of bilirubin will make the stool look like clay. Uses of animal feces: Fertilizer The feces of animals, e.g. guano and manure, often are used as fertilizer. Energy Dry animal dung, such as that of camel, bison and cattle, is burned as fuel in many countries. Animals such as the giant panda and zebra possess gut bacteria capable of producing biofuel. The bacterium in question, Brocadia anammoxidans, can be used to synthesize the rocket fuel hydrazine. Uses of animal feces: Coprolites and paleofeces A coprolite is fossilized feces and is classified as a trace fossil. In paleontology they give evidence about the diet of an animal. They were first described by William Buckland in 1829. Prior to this, they were known as "fossil fir cones" and "bezoar stones". They serve a valuable purpose in paleontology because they provide direct evidence of the predation and diet of extinct organisms. Coprolites may range in size from a few millimetres to more than 60 centimetres. Uses of animal feces: Palaeofeces are ancient feces, often found as part of archaeological excavations or surveys. Intact paleofeces of ancient people may be found in caves in arid climates and in other locations with suitable preservation conditions. These are studied to determine the diet and health of the people who produced them through the analysis of seeds, small bones, and parasite eggs found inside. Feces may contain information about the person excreting the material as well as information about the material. They also may be analyzed chemically for more in-depth information on the individual who excreted them, using lipid analysis and ancient DNA analysis. The success rate of usable DNA extraction is relatively high in paleofeces, making it more reliable than skeletal DNA retrieval. The reason this analysis is possible at all is that the digestive system is not entirely efficient, in the sense that not everything that passes through the digestive system is destroyed. Not all of the surviving material is recognizable, but some of it is.
Generally, this material is the best indicator archaeologists can use to determine ancient diets, as no other part of the archaeological record is so direct an indicator. A process that preserves feces in a way that they may be analyzed later is the Maillard reaction. This reaction creates a casing of sugar that preserves the feces from the elements. To extract and analyze the information contained within, researchers generally have to freeze the feces and grind them into powder for analysis. Uses of animal feces: Other uses Animal dung occasionally is used as a cement to make adobe mudbrick huts, or even in throwing sports, especially with cow and camel dung. Kopi luwak (pronounced [ˈkopi ˈlu.aʔ]), or "civet coffee", is coffee made from coffee berries that have been eaten by and passed through the digestive tract of the Asian palm civet (Paradoxurus hermaphroditus). Giant pandas provide fertilizer for the world's most expensive green tea. In Malaysia, tea is made from the droppings of stick insects fed on guava leaves. Uses of animal feces: In northern Thailand, elephants are used to digest coffee beans in order to make Black Ivory coffee, which is among the world's most expensive coffees. Paper is also made from elephant dung in Thailand. Haathi Chaap is a brand of paper made from elephant dung. Uses of animal feces: Dog feces were used in the tanning process of leather during the Victorian era. Collected dog feces, known as "pure", "puer", or "pewer", were mixed with water to form a substance known as "bate", because proteolytic enzymes in the dog feces helped to relax the fibrous structure of the hide before the final stages of tanning. Dog feces collectors were known as pure finders. Elephants, hippos, koalas and pandas are born with sterile intestines, and require bacteria obtained from eating the feces of their mothers to digest vegetation. Uses of animal feces: In India, cow dung and cow urine are major ingredients of the traditional Hindu drink Panchagavya. Politician Shankarbhai Vegad stated that they can cure cancer. In the Middle East, cow dung is consumed for a variety of reasons, such as curing dysentery, a belief in healing properties, or as a food staple. Terminology: "Feces" is the scientific terminology, while the term "stool" is also commonly used in medical contexts. Outside of scientific contexts, these terms are less common, with the most common layman's term being "poop" or "poo". The term "shit" is also in common use, although it is widely considered vulgar or offensive. There are many other terms, see below. Etymology The word faeces is the plural of the Latin word faex meaning "dregs". In most English-language usage, there is no singular form, making the word a plurale tantum; out of various major dictionaries, only one enters variation from plural agreement. Synonyms "Feces" is used more in biology and medicine than in other fields (reflecting science's tradition of classical Latin and Neo-Latin). In hunting and tracking, terms such as dung, scat, spoor, and droppings normally are used to refer to non-human animal feces. In husbandry and farming, manure is common. Stool is a common term in reference to human feces. For example, in medicine, to diagnose the presence or absence of a medical condition, a stool sample sometimes is requested for testing purposes. Terminology: The term bowel movement(s) (with each movement a defecation event) is also common in health care. There are many synonyms in informal registers for feces, just like there are for urine.
Many are euphemistic, colloquial, or both; some are profane (such as shit), whereas most belong chiefly to child-directed speech (such as poo or the palindromic word poop) or to crude humor (such as crap, dump, load, and turd). Terminology: Feces of animals The feces of animals often have special names (some of them are slang), for example: Non-human animals As bulk material – dung Individually – droppings Cattle Bulk material – cow dung Individual droppings – cow pats, meadow muffins, etc. Terminology: Deer (and formerly other quarry animals) – fewmets Wild carnivores – scat Otter – spraint Birds (individual) – droppings (also include urine as white crystals of uric acid) Seabirds or bats (large accumulations) – guano Herbivorous insects, such as caterpillars and leaf beetles – frass Earthworms, lugworms etc. – worm castings (feces extruded at ground surface) Feces when used as fertilizer (usually mixed with animal bedding and urine) – manure Horses – horse manure, roadapple (before motor vehicles became common, horse droppings were a big part of the rubbish that communities needed to clean off roads) Society and culture: Feelings of disgust In all human cultures, feces elicit varying degrees of disgust in adults. Children under two years typically have no disgust response to it, suggesting it is culturally derived. Disgust toward feces appears to be strongest in cultures where flush toilets make olfactory contact with human feces minimal. Disgust is experienced primarily in relation to the sense of taste (either perceived or imagined) and, secondarily, to anything that causes a similar feeling by sense of smell, touch, or vision. Society and culture: Social media There is a Pile of Poo emoji represented in Unicode as U+1F4A9 💩 PILE OF POO, called unchi or unchi-kun in Japan. Jokes Poop is the center of toilet humor, and is commonly of interest to young children and teenagers.
**Latent Dirichlet allocation** Latent Dirichlet allocation: In natural language processing, Latent Dirichlet Allocation (LDA) is a Bayesian network (and, therefore, a generative statistical model) that explains a set of observations through unobserved groups, where each group explains why some parts of the data are similar. LDA is an example of a Bayesian topic model. In this setting, observations (e.g., words) are collected into documents, and each word's presence is attributable to one of the document's topics. Each document will contain a small number of topics. History: In the context of population genetics, LDA was proposed by J. K. Pritchard, M. Stephens and P. Donnelly in 2000. LDA was applied in machine learning by David Blei, Andrew Ng and Michael I. Jordan in 2003. Overview: Evolutionary biology and bio-medicine In evolutionary biology and bio-medicine, the model is used to detect the presence of structured genetic variation in a group of individuals. The model assumes that alleles carried by individuals under study have origin in various extant or past populations. The model and various inference algorithms allow scientists to estimate the allele frequencies in those source populations and the origin of alleles carried by individuals under study. The source populations can be interpreted ex-post in terms of various evolutionary scenarios. In association studies, detecting the presence of genetic structure is considered a necessary preliminary step to avoid confounding. Overview: Clinical psychology, mental health, and social science In clinical psychology research, LDA has been used to identify common themes of self-images experienced by young people in social situations. Other social scientists have used LDA to examine large sets of topical data from discussions on social media (e.g., tweets about prescription drugs). Musicology In the context of computational musicology, LDA has been used to discover tonal structures in different corpora. Overview: Machine learning One application of LDA in machine learning - specifically, topic discovery, a subproblem in natural language processing - is to discover topics in a collection of documents, and then automatically classify any individual document within the collection in terms of how "relevant" it is to each of the discovered topics. A topic is considered to be a set of terms (i.e., individual words or phrases) that, taken together, suggest a shared theme. Overview: For example, in a document collection related to pet animals, the terms dog, spaniel, beagle, golden retriever, puppy, bark, and woof would suggest a DOG_related theme, while the terms cat, siamese, Maine coon, tabby, manx, meow, purr, and kitten would suggest a CAT_related theme. There may be many more topics in the collection - e.g., related to diet, grooming, healthcare, behavior, etc. that we do not discuss for simplicity's sake. (Very common, so-called stop words in a language - e.g., "the", "an", "that", "are", "is", etc. - would not discriminate between topics and are usually filtered out by pre-processing before LDA is performed. Pre-processing also converts terms to their "root" lexical forms - e.g., "barks", "barking", and "barked" would be converted to "bark".)
If the document collection is sufficiently large, LDA will discover such sets of terms (i.e., topics) based upon the co-occurrence of individual terms, though the task of assigning a meaningful label to an individual topic (i.e., that all the terms are DOG_related) is up to the user, and often requires specialized knowledge (e.g., for collections of technical documents). The LDA approach assumes that: The semantic content of a document is composed by combining one or more terms from one or more topics. Overview: Certain terms are ambiguous, belonging to more than one topic, with different probability. (For example, the term training can apply to both dogs and cats, but is more likely to refer to dogs, which are used as work animals or participate in obedience or skill competitions.) However, in a document, the accompanying presence of specific neighboring terms (which belong to only one topic) will disambiguate their usage. Overview: Most documents will contain only a relatively small number of topics. In the collection, e.g., individual topics will occur with differing frequencies. That is, they have a probability distribution, so that a given document is more likely to contain some topics than others. Within a topic, certain terms will be used much more frequently than others. In other words, the terms within a topic will also have their own probability distribution. When LDA machine learning is employed, both sets of probabilities are computed during the training phase, using Bayesian methods and an Expectation Maximization algorithm. LDA is a generalization of the older probabilistic latent semantic analysis (pLSA); the pLSA model is equivalent to LDA under a uniform Dirichlet prior distribution. Overview: pLSA relies on only the first two assumptions above and does not care about the remainder. While both methods are similar in principle and require the user to specify the number of topics to be discovered before the start of training (as with K-means clustering), LDA has the following advantages over pLSA: LDA yields better disambiguation of words and a more precise assignment of documents to topics. Overview: Computing probabilities allows a "generative" process by which a collection of new "synthetic documents" can be generated that would closely reflect the statistical characteristics of the original collection. Unlike LDA, pLSA is vulnerable to overfitting, especially as the size of the corpus increases. The LDA algorithm is more readily amenable to scaling up for large data sets using the MapReduce approach on a computing cluster.
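The dog/cat example above can be reproduced with an off-the-shelf implementation. The following sketch uses scikit-learn's LatentDirichletAllocation on a tiny invented corpus; the corpus, the choice of two topics, and the random seed are assumptions for illustration only:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "dog puppy bark beagle spaniel bark",
    "cat kitten meow purr tabby",
    "dog bark golden retriever puppy",
    "cat siamese purr meow kitten",
]

# Document-term matrix of raw counts (LDA models word counts, not tf-idf).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Two topics, hoping to recover the DOG_related and CAT_related themes.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic mixtures (theta)

vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):  # per-topic word weights (phi)
    top = weights.argsort()[::-1][:4]
    print(f"topic {k}:", [vocab[i] for i in top])
print(doc_topics.round(2))
```

As the text notes, the model only produces the term clusters; deciding that one cluster "means" dogs and the other cats is left to the user.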
The variable names are defined as follows: M denotes the number of documents; N is the number of words in a given document (document i has $N_i$ words); α is the parameter of the Dirichlet prior on the per-document topic distributions; β is the parameter of the Dirichlet prior on the per-topic word distribution; $\theta_i$ is the topic distribution for document i; $\varphi_k$ is the word distribution for topic k; $z_{ij}$ is the topic for the j-th word in document i; and $w_{ij}$ is the specific word. The fact that W is grayed out means that the words $w_{ij}$ are the only observable variables; the other variables are latent. Model: As proposed in the original paper, a sparse Dirichlet prior can be used to model the topic-word distribution, following the intuition that the probability distribution over words in a topic is skewed, so that only a small set of words have high probability. The resulting model is the most widely applied variant of LDA today. The plate notation for this model is shown on the right, where K denotes the number of topics and $\varphi_1,\dots,\varphi_K$ are V-dimensional vectors storing the parameters of the Dirichlet-distributed topic-word distributions (V is the number of words in the vocabulary). Model: It is helpful to think of the entities represented by θ and φ as matrices created by decomposing the original document-word matrix that represents the corpus of documents being modeled. In this view, θ consists of rows defined by documents and columns defined by topics, while φ consists of rows defined by topics and columns defined by words. Thus, $\varphi_1,\dots,\varphi_K$ refers to a set of rows, or vectors, each of which is a distribution over words, and $\theta_1,\dots,\theta_M$ refers to a set of rows, each of which is a distribution over topics. Model: Generative process To actually infer the topics in a corpus, we imagine a generative process whereby the documents are created, so that we may infer, or reverse engineer, it. We imagine the generative process as follows. Documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over all the words. LDA assumes the following generative process for a corpus D consisting of M documents each of length $N_i$: 1. Choose $\theta_i \sim \operatorname{Dir}(\alpha)$, where $i\in\{1,\dots,M\}$ and $\operatorname{Dir}(\alpha)$ is a Dirichlet distribution with a symmetric parameter α which typically is sparse (α < 1). 2. Choose $\varphi_k \sim \operatorname{Dir}(\beta)$, where $k\in\{1,\dots,K\}$ and β typically is sparse. 3. For each of the word positions i, j, where $i\in\{1,\dots,M\}$ and $j\in\{1,\dots,N_i\}$: (a) Choose a topic $z_{i,j} \sim \operatorname{Multinomial}(\theta_i)$. Model: (b) Choose a word $w_{i,j} \sim \operatorname{Multinomial}(\varphi_{z_{i,j}})$. (Note that the multinomial distribution here refers to the multinomial with only one trial, which is also known as the categorical distribution.) The lengths $N_i$ are treated as independent of all the other data-generating variables ($w$ and $z$). The subscript is often dropped, as in the plate diagrams shown here. Definition A formal description of LDA is as follows. The random variables are: $\theta_d \sim \operatorname{Dirichlet}(\alpha)$ for each document $d$; $\varphi_k \sim \operatorname{Dirichlet}(\beta)$ for each topic $k$; $z_{d,w} \sim \operatorname{Categorical}(\theta_d)$ for each word position in document $d$; and $w_{d,w} \sim \operatorname{Categorical}_V(\varphi_{z_{d,w}})$. Inference: Learning the various distributions (the set of topics, their associated word probabilities, the topic of each word, and the particular topic mixture of each document) is a problem of statistical inference. Monte Carlo simulation The original paper by Pritchard et al. used approximation of the posterior distribution by Monte Carlo simulation. Alternative proposals of inference techniques include Gibbs sampling.
Variational Bayes The original ML paper used a variational Bayes approximation of the posterior distribution. Likelihood maximization A direct optimization of the likelihood with a block relaxation algorithm proves to be a fast alternative to MCMC. Unknown number of populations/topics In practice, the optimal number of populations or topics is not known beforehand. It can be estimated by approximation of the posterior distribution with reversible-jump Markov chain Monte Carlo. Inference: Alternative approaches Alternative approaches include expectation propagation. Recent research has been focused on speeding up the inference of latent Dirichlet allocation to support the capture of a massive number of topics in a large number of documents. The update equation of the collapsed Gibbs sampler mentioned in the earlier section has a natural sparsity within it that can be taken advantage of. Intuitively, since each document only contains a subset of topics $K_d$, and a word also only appears in a subset of topics $K_w$, the above update equation could be rewritten to take advantage of this sparsity: $$p(Z_{d,n}=k) \propto \frac{\alpha\beta}{C_k^{\neg n}+V\beta} + \frac{C_k^d\,\beta}{C_k^{\neg n}+V\beta} + \frac{C_k^w\,(\alpha+C_k^d)}{C_k^{\neg n}+V\beta}.$$ In this equation, we have three terms, out of which two are sparse and the other is small. We call these terms $a$, $b$ and $c$ respectively. Now, if we normalize each term by summing over all the topics, we get: $$A=\sum_{k=1}^K \frac{\alpha\beta}{C_k^{\neg n}+V\beta}, \qquad B=\sum_{k=1}^K \frac{C_k^d\,\beta}{C_k^{\neg n}+V\beta}, \qquad C=\sum_{k=1}^K \frac{C_k^w\,(\alpha+C_k^d)}{C_k^{\neg n}+V\beta}.$$ Here, we can see that $B$ is a summation over the topics that appear in document $d$, and $C$ is also a sparse summation over the topics that the word $w$ is assigned to across the whole corpus. $A$, on the other hand, is dense, but because of the small values of $\alpha$ and $\beta$, its value is very small compared to the two other terms. Inference: Now, while sampling a topic, if we sample a random variable uniformly from $s\sim U(0,\,A+B+C)$, we can check which bucket our sample lands in. Since $A$ is small, we are very unlikely to fall into this bucket; however, if we do fall into this bucket, sampling a topic takes $O(K)$ time (same as the original collapsed Gibbs sampler). However, if we fall into the other two buckets, we only need to check a subset of topics if we keep a record of the sparse topics. A topic can be sampled from the $B$ bucket in $O(K_d)$ time, and a topic can be sampled from the $C$ bucket in $O(K_w)$ time, where $K_d$ and $K_w$ denote the number of topics assigned to the current document and to the current word type, respectively. Inference: Notice that after sampling each topic, updating these buckets takes only basic $O(1)$ arithmetic operations. Aspects of computational details Following is the derivation of the equations for collapsed Gibbs sampling, which means the $\varphi$s and $\theta$s will be integrated out. For simplicity, in this derivation the documents are all assumed to have the same length $N$; the derivation is equally valid if the document lengths vary. According to the model, the total probability of the model is: $$P(\boldsymbol W,\boldsymbol Z,\boldsymbol\theta,\boldsymbol\varphi;\alpha,\beta) = \prod_{i=1}^K P(\varphi_i;\beta) \prod_{j=1}^M P(\theta_j;\alpha) \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,P(W_{j,t}\mid\varphi_{Z_{j,t}}),$$ where the bold-font variables denote the vector version of the variables. First, $\boldsymbol\varphi$ and $\boldsymbol\theta$ need to be integrated out: $$P(\boldsymbol Z,\boldsymbol W;\alpha,\beta) = \int_{\boldsymbol\theta}\int_{\boldsymbol\varphi} P(\boldsymbol W,\boldsymbol Z,\boldsymbol\theta,\boldsymbol\varphi;\alpha,\beta)\,d\boldsymbol\varphi\,d\boldsymbol\theta = \int_{\boldsymbol\varphi} \prod_{i=1}^K P(\varphi_i;\beta) \prod_{j=1}^M\prod_{t=1}^N P(W_{j,t}\mid\varphi_{Z_{j,t}})\,d\boldsymbol\varphi \int_{\boldsymbol\theta} \prod_{j=1}^M P(\theta_j;\alpha) \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,d\boldsymbol\theta.$$ All the $\theta$s are independent of each other, and the same holds for all the $\varphi$s, so we can treat each $\theta$ and each $\varphi$ separately. We now focus only on the $\theta$ part: $$\int_{\boldsymbol\theta} \prod_{j=1}^M P(\theta_j;\alpha) \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,d\boldsymbol\theta = \prod_{j=1}^M \int_{\theta_j} P(\theta_j;\alpha) \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,d\theta_j.$$ We can further focus on only one $\theta_j$: $$\int_{\theta_j} P(\theta_j;\alpha) \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,d\theta_j.$$
Actually, it is the hidden part of the model for the jth document. Now we replace the probabilities in the above equation by the true distribution expressions to write out the explicit equation: $$\int_{\theta_j} P(\theta_j;\alpha) \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,d\theta_j = \int_{\theta_j} \frac{\Gamma\!\left(\sum_{i=1}^K \alpha_i\right)}{\prod_{i=1}^K \Gamma(\alpha_i)} \prod_{i=1}^K \theta_{j,i}^{\alpha_i-1} \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,d\theta_j.$$ Inference: Let $n_{j,r}^i$ be the number of word tokens in the $j$th document with the same word symbol (the $r$th word in the vocabulary) assigned to the $i$th topic. So $n_{j,r}^i$ is three-dimensional. If any of the three dimensions is not limited to a specific value, we use a parenthesized point $(\cdot)$ to denote it. For example, $n_{j,(\cdot)}^i$ denotes the number of word tokens in the $j$th document assigned to the $i$th topic. Thus, the rightmost part of the above equation can be rewritten as: $$\prod_{t=1}^N P(Z_{j,t}\mid\theta_j) = \prod_{i=1}^K \theta_{j,i}^{n_{j,(\cdot)}^i}.$$ Inference: So the $\theta_j$ integration formula can be changed to: $$\int_{\theta_j} \frac{\Gamma\!\left(\sum_{i=1}^K \alpha_i\right)}{\prod_{i=1}^K \Gamma(\alpha_i)} \prod_{i=1}^K \theta_{j,i}^{n_{j,(\cdot)}^i+\alpha_i-1}\,d\theta_j.$$ The equation inside the integration has the same form as the Dirichlet distribution, whose density integrates to 1. Thus, $$\int_{\theta_j} P(\theta_j;\alpha) \prod_{t=1}^N P(Z_{j,t}\mid\theta_j)\,d\theta_j = \frac{\Gamma\!\left(\sum_{i=1}^K \alpha_i\right)}{\prod_{i=1}^K \Gamma(\alpha_i)} \cdot \frac{\prod_{i=1}^K \Gamma\!\left(n_{j,(\cdot)}^i+\alpha_i\right)}{\Gamma\!\left(\sum_{i=1}^K n_{j,(\cdot)}^i+\alpha_i\right)}.$$ Now we turn our attention to the $\varphi$ part. The derivation of the $\varphi$ part is very similar to the $\theta$ part; only the result is listed here: $$\int_{\boldsymbol\varphi} \prod_{i=1}^K P(\varphi_i;\beta) \prod_{j=1}^M\prod_{t=1}^N P(W_{j,t}\mid\varphi_{Z_{j,t}})\,d\boldsymbol\varphi = \prod_{i=1}^K \frac{\Gamma\!\left(\sum_{r=1}^V \beta_r\right)}{\prod_{r=1}^V \Gamma(\beta_r)} \cdot \frac{\prod_{r=1}^V \Gamma\!\left(n_{(\cdot),r}^i+\beta_r\right)}{\Gamma\!\left(\sum_{r=1}^V n_{(\cdot),r}^i+\beta_r\right)}.$$ For clarity, here we write down the final equation with both $\boldsymbol\varphi$ and $\boldsymbol\theta$ integrated out: $$P(\boldsymbol Z,\boldsymbol W;\alpha,\beta) = \prod_{j=1}^M \frac{\Gamma\!\left(\sum_{i=1}^K \alpha_i\right)}{\prod_{i=1}^K \Gamma(\alpha_i)} \frac{\prod_{i=1}^K \Gamma\!\left(n_{j,(\cdot)}^i+\alpha_i\right)}{\Gamma\!\left(\sum_{i=1}^K n_{j,(\cdot)}^i+\alpha_i\right)} \times \prod_{i=1}^K \frac{\Gamma\!\left(\sum_{r=1}^V \beta_r\right)}{\prod_{r=1}^V \Gamma(\beta_r)} \frac{\prod_{r=1}^V \Gamma\!\left(n_{(\cdot),r}^i+\beta_r\right)}{\Gamma\!\left(\sum_{r=1}^V n_{(\cdot),r}^i+\beta_r\right)}.$$ Inference: The goal of Gibbs sampling here is to approximate the distribution of $P(\boldsymbol Z\mid\boldsymbol W;\alpha,\beta)$. Since $P(\boldsymbol W;\alpha,\beta)$ is invariable for any choice of $\boldsymbol Z$, Gibbs sampling equations can be derived from $P(\boldsymbol Z,\boldsymbol W;\alpha,\beta)$ directly. The key point is to derive the following conditional probability: $$P(Z_{(m,n)}\mid\boldsymbol Z_{-(m,n)},\boldsymbol W;\alpha,\beta) = \frac{P(Z_{(m,n)},\boldsymbol Z_{-(m,n)},\boldsymbol W;\alpha,\beta)}{P(\boldsymbol Z_{-(m,n)},\boldsymbol W;\alpha,\beta)},$$ where $Z_{(m,n)}$ denotes the hidden topic variable of the $n$th word token in the $m$th document, and we further assume that its word symbol is the $v$th word in the vocabulary; $\boldsymbol Z_{-(m,n)}$ denotes all the $Z$s but $Z_{(m,n)}$. Note that Gibbs sampling needs only a value to be sampled for $Z_{(m,n)}$, so we do not need the exact value of the conditional probability but only the ratios among the probabilities that $Z_{(m,n)}$ can take. The factors for documents $j\neq m$ and for words $r\neq v$ do not depend on $Z_{(m,n)}$ and cancel, and the denominator $\Gamma\!\left(\sum_{i=1}^K n_{m,(\cdot)}^i+\alpha_i\right)$ is constant because the length of document $m$ is fixed, so the above equation simplifies to: $$P(Z_{(m,n)}=v\mid\boldsymbol Z_{-(m,n)},\boldsymbol W;\alpha,\beta) \propto \prod_{i=1}^K \Gamma\!\left(n_{m,(\cdot)}^i+\alpha_i\right) \prod_{i=1}^K \frac{\Gamma\!\left(n_{(\cdot),v}^i+\beta_v\right)}{\Gamma\!\left(\sum_{r=1}^V n_{(\cdot),r}^i+\beta_r\right)}.$$ Inference: Finally, let $n_{j,r}^{i,-(m,n)}$ have the same meaning as $n_{j,r}^i$ but with the token $Z_{(m,n)}$ excluded. The above equation can be further simplified by leveraging the property of the gamma function, $\Gamma(x+1)=x\,\Gamma(x)$.
We first split out the factors belonging to the candidate topic $k$: for that topic the counts include the current token, so $\Gamma\!\left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k+1\right) = \left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k\right)\Gamma\!\left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k\right)$, and similarly for the $\beta$ terms. The remaining products then run over all $i$ with the token excluded and no longer depend on $k$, so that $k$-independent factor can be dropped, leaving: $$P(Z_{(m,n)}=k\mid\boldsymbol Z_{-(m,n)},\boldsymbol W;\alpha,\beta) \propto \left(n_{m,(\cdot)}^{k,-(m,n)}+\alpha_k\right)\,\frac{n_{(\cdot),v}^{k,-(m,n)}+\beta_v}{\sum_{r=1}^V n_{(\cdot),r}^{k,-(m,n)}+\beta_r}.$$ Note that the same formula is derived in the article on the Dirichlet-multinomial distribution, as part of a more general discussion of integrating Dirichlet distribution priors out of a Bayesian network. Related problems: Related models Topic modeling is a classic solution to the problem of information retrieval using linked data and semantic web technology. Related models and techniques are, among others, latent semantic indexing, independent component analysis, probabilistic latent semantic indexing, non-negative matrix factorization, and the Gamma-Poisson distribution. Related problems: The LDA model is highly modular and can therefore be easily extended. The main field of interest is modeling relations between topics. This is achieved by using another distribution on the simplex instead of the Dirichlet. The Correlated Topic Model follows this approach, inducing a correlation structure between topics by using the logistic normal distribution instead of the Dirichlet. Another extension is the hierarchical LDA (hLDA), where topics are joined together in a hierarchy by using the nested Chinese restaurant process, whose structure is learnt from data. LDA can also be extended to a corpus in which a document includes two types of information (e.g., words and names), as in the LDA-dual model. Related problems: Nonparametric extensions of LDA include the hierarchical Dirichlet process mixture model, which allows the number of topics to be unbounded and learnt from data. Related problems: As noted earlier, pLSA is similar to LDA. The LDA model is essentially the Bayesian version of the pLSA model. The Bayesian formulation tends to perform better on small datasets because Bayesian methods can avoid overfitting the data. For very large datasets, the results of the two models tend to converge. One difference is that pLSA uses a variable $d$ to represent a document in the training set. So in pLSA, when presented with a document the model has not seen before, we fix $\Pr(w\mid z)$, the probability of words under topics, to be that learned from the training set and use the same EM algorithm to infer $\Pr(z\mid d)$, the topic distribution under $d$. Blei argues that this step is cheating because you are essentially refitting the model to the new data. Related problems: Spatial models In evolutionary biology, it is often natural to assume that the geographic locations of the individuals observed bring some information about their ancestry. This is the rationale of various models for geo-referenced genetic data. Variations on LDA have been used to automatically put natural images into categories, such as "bedroom" or "forest", by treating an image as a document, and small patches of the image as words; one of the variations is called spatial latent Dirichlet allocation.
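To tie the derivation together, here is a minimal collapsed Gibbs sampler implementing the final update rule above with symmetric priors; the toy corpus of word ids and all hyperparameter values are assumptions for illustration, and a practical sampler would add the sparsity optimizations discussed earlier:

```python
import numpy as np

def collapsed_gibbs(docs, K, V, alpha=0.5, beta=0.1, iters=200, seed=0):
    """Collapsed Gibbs sampler for LDA, using the final update rule:
       p(z = k) is proportional to
       (n_dk + alpha) * (n_kv + beta) / (n_k + V * beta)."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))   # topic counts per document
    n_kv = np.zeros((K, V))           # word counts per topic
    n_k = np.zeros(K)                 # total words per topic
    z = [[0] * len(d) for d in docs]

    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = int(rng.integers(K))
            z[d][n] = k
            n_dk[d, k] += 1; n_kv[k, w] += 1; n_k[k] += 1

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]
                # Exclude the current token from the counts (the -(m,n) terms) ...
                n_dk[d, k] -= 1; n_kv[k, w] -= 1; n_k[k] -= 1
                # ... evaluate the conditional for every topic, and resample.
                p = (n_dk[d] + alpha) * (n_kv[:, w] + beta) / (n_k + V * beta)
                k = int(rng.choice(K, p=p / p.sum()))
                z[d][n] = k
                n_dk[d, k] += 1; n_kv[k, w] += 1; n_k[k] += 1
    return z, n_dk, n_kv

docs = [[0, 1, 0, 2], [3, 4, 5, 4], [0, 2, 1, 1]]  # word ids of a toy corpus
z, n_dk, n_kv = collapsed_gibbs(docs, K=2, V=6)
print(n_dk)  # document-topic counts after sampling
```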
**Rotational correlation time** Rotational correlation time: Rotational correlation time ( τc ) is the average time it takes for a molecule to rotate one radian. In solution, rotational correlation times are on the order of picoseconds. For example, τc ≈ 1.7 ps for water, and about 100 ps for a pyrroline nitroxyl radical in a DMSO-water mixture. Rotational correlation times are employed in the measurement of microviscosity (viscosity at the molecular level) and in protein characterization. Rotational correlation time: Rotational correlation times may be measured by rotational (microwave), dielectric, and nuclear magnetic resonance (NMR) spectroscopy. Rotational correlation times of probe molecules in media have been measured by fluorescence lifetime or, for radicals, from the linewidths of electron spin resonance spectra.
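As a point of reference (an addition, not stated in the source above): for a roughly spherical particle in a continuum solvent, τc is commonly estimated from the Stokes–Einstein–Debye relation, which links it to the solvent viscosity η, the hydrodynamic radius r, and the temperature T:

$$\tau_c = \frac{4\pi\eta r^{3}}{3 k_B T} = \frac{\eta V_h}{k_B T},$$

where $V_h$ is the hydrodynamic volume of the particle. Taking η ≈ 1 mPa·s and r ≈ 0.14 nm for water at 298 K gives τc ≈ 3 ps, consistent in order of magnitude with the value quoted above; this dependence on η is what makes τc measurements a probe of local (micro)viscosity.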
**CIMOSA** CIMOSA: CIMOSA, standing for "Computer Integrated Manufacturing Open System Architecture", is an enterprise modeling framework which aims to support the enterprise integration of machines, computers and people. The framework is based on the system life-cycle concept, and offers a modelling language, methodology and supporting technology in support of these goals. It was developed in the 1990s by the AMICE Consortium in an EU project. A non-profit organization, the CIMOSA Association, was later established to keep ownership of the CIMOSA specification, to promote it and to support its further evolution. Overview: The original aim of CIMOSA (1992) was "to elaborate an open system architecture for CIM and to define a set of concepts and rules to facilitate the building of future CIM systems". One of the main ideas of CIMOSA is the categorization of manufacturing operations into: Generic functions: generic parts of every enterprise, independent of organisation structure or business area. Overview: Specific (partial and particular) functions: specific to individual enterprises. The development of CIMOSA has ultimately resulted in two key items: Modeling Framework: This framework supports "all phases of the CIM system life-cycle from requirements definition, through design specification, implementation description and execution of the daily enterprise operation". Overview: Integrating Infrastructure: This infrastructure provides "specific information technology services for the execution of the Particular Implementation Model", and has proven to be vendor-independent and portable. The framework furthermore offers an "event-driven, process-based modeling approach with the goal to cover essential enterprise aspects in one integrated model. The main aspects are the functional, behavioral, resource, information and organizational aspect". CIMOSA can be applied in process simulation and analysis. Standardized CIMOSA models "can also be used on line in the manufacturing enterprise for scheduling, dispatching, monitoring and providing process information". One of the standards based on CIMOSA is the Generalised Enterprise Reference Architecture and Methodology (GERAM). Building blocks: The main focus of CIMOSA has been to construct a framework for enterprise modelling, a reference architecture, an enterprise modelling language, and an integrating infrastructure for model enactment, supported by a common terminology. A close liaison with European and international standardization organisations was established to stimulate the standardization process for enterprise integration. CIMOSA aims at integrating enterprise operations by means of efficient information exchange within the enterprise. CIMOSA models enterprises using four perspectives: the function view describes the functional structure required to satisfy the objectives of an enterprise and the related control structures; the information view describes the information required by each function; the resource view describes the resources and their relations to functional and control structures; and the organization view describes the responsibilities assigned to individuals for functional and control structures (a small illustrative sketch of these four views appears at the end of this article). AMICE Consortium: The AMICE Consortium was a European organization of major companies concerned with computer-integrated manufacturing (CIM). It was initiated in 1985 and dissolved in 1995, and eventually included users, vendors, consulting companies, and academia.
Among the participating companies were IBM, Hewlett-Packard, Digital Equipment Corporation (DEC), Siemens, Fiat, and Daimler-Benz. The AMICE Consortium was initiated as a European Strategic Program on Research in Information Technology (ESPRIT) project to bring together stakeholders in the development of CIM and to develop new standards for CIM systems. This led to the development of CIMOSA, which defined "a comprehensive set of constructs sufficient to describe all aspects of manufacturing systems." It also established the CIMOSA Association. AMICE Consortium: Publications The AMICE Consortium published several books and papers. A selection: 1989. Open System Architecture for CIM, Research Report of ESPRIT Project 688, Vol. 1, Springer-Verlag. 1991. Open System Architecture, CIMOSA, AD 1.0, Architecture Description, ESPRIT Consortium AMICE, Brussels, Belgium. 1992. ESPRIT Project 5288, Milestone M-2, AD2.0, 2, Architecture Description, document RO443/1, ESPRIT Consortium AMICE, Brussels, Belgium. 1993. CIMOSA: Open System Architecture for CIM, Springer. AMICE Consortium: CIMOSA Association At the start of the 1990s the CIMOSA Association (COA) was founded as a non-profit organisation by the AMICE Consortium, aiming to promote enterprise engineering and integration (EE&I) based on CIMOSA. It extended its goals in the new millennium towards "upcoming new enterprise paradigms of extended, virtual and agile enterprises, which cause new requirements on organisational concepts and supporting technologies. Enhanced decision support and operation monitoring and control are some of the needs of today and tomorrow. Capturing knowledge and using it across organisational boundaries will be a major challenge in the new types of businesses. This real-time knowledge needed to support the establishment, deployment and discontinuation of the inter and intra organisational relations". From the start, CIMOSA has been an active supporter of national, European and international standardization of enterprise integration. In 2010 the CIMOSA Association closed due to "loss of membership according to people retirements".
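Returning to the four modeling views described above, here is a purely illustrative Python sketch (the class and field names are assumptions of mine, not part of the CIMOSA specification) of how a single process element might carry all four views within one integrated, event-driven model:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionView:
    """Functional/control structure: what the process does and when."""
    name: str
    triggering_event: str                    # CIMOSA modeling is event-driven
    sub_functions: list = field(default_factory=list)

@dataclass
class ProcessElement:
    """One element of an integrated enterprise model, sliced four ways."""
    function: FunctionView                   # function view
    required_information: list               # information view
    assigned_resources: list                 # resource view
    responsible_roles: list                  # organization view

# Hypothetical example of a modelled enterprise process:
order_handling = ProcessElement(
    function=FunctionView("Handle customer order", "order received"),
    required_information=["customer record", "stock level"],
    assigned_resources=["ERP system", "warehouse cell"],
    responsible_roles=["sales clerk"],
)
```

The point of the sketch is only that the four views are orthogonal descriptions of the same element, which is what lets a CIMOSA model serve simulation, scheduling, and monitoring from one source.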
**Orders of magnitude (illuminance)** Orders of magnitude (illuminance): As visual perception varies logarithmically, it is helpful to have an appreciation of both illuminance and luminance by orders of magnitude. Illuminance: To help compare different orders of magnitude, the following list describes various sources of illuminance, measured in lux (lumens per square metre). Luminance: This section lists examples of luminances, measured in candelas per square metre and grouped by order of magnitude.
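As a small illustration of the order-of-magnitude framing (an addition, not part of the original lists): any illuminance can be bucketed by its base-10 logarithm, and typical full-moon illumination (roughly 0.2 lx) sits about six orders of magnitude below direct sunlight (roughly 100,000 lx).

```python
import math

def order_of_magnitude(lux: float) -> int:
    """Return the base-10 order of magnitude of an illuminance in lux."""
    return math.floor(math.log10(lux))

print(order_of_magnitude(0.2))       # -1  (approximate full-moon illumination)
print(order_of_magnitude(100_000))   #  5  (approximate direct sunlight)
```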
**12-O-Tetradecanoylphorbol-13-acetate** 12-O-Tetradecanoylphorbol-13-acetate: 12-O-Tetradecanoylphorbol-13-acetate (TPA), also commonly known as tetradecanoylphorbol acetate, tetradecanoyl phorbol acetate, and phorbol 12-myristate 13-acetate (PMA), is a diester of phorbol. It is a potent tumor promoter often employed in biomedical research to activate the signal transduction enzyme protein kinase C (PKC). The effects of TPA on PKC result from its similarity to diacylglycerol, one of the natural activators of classic PKC isoforms. TPA is a small-molecule drug. 12-O-Tetradecanoylphorbol-13-acetate: In ROS biology, superoxide was identified as the major reactive oxygen species induced by TPA/PMA, but not by ionomycin, in mouse macrophages. Thus, TPA/PMA has been routinely used as an inducer of endogenous superoxide production. TPA is also being studied as a drug in the treatment of hematologic cancer. TPA has a specific use in cancer diagnostics as a B-cell-specific mitogen in cytogenetic testing. Cells must divide in a cytogenetic test so that their chromosomes can be viewed. TPA is used to stimulate division of B-cells during cytogenetic diagnosis of B-cell cancers such as chronic lymphocytic leukemia. TPA is also commonly used together with ionomycin to stimulate T-cell activation, proliferation, and cytokine production, and is used in protocols for intracellular staining of these cytokines. TPA induces KSHV reactivation in PEL cell cultures via stimulation of the mitogen-activated protein kinase (MAPK)/extracellular signal-regulated kinase (ERK) pathway. The pathway involves the activation of the immediate-early viral protein RTA, which contributes to the activation of the lytic cycle. TPA was first found in the croton plant, a shrub found in Southeast Asia, exposure to which provokes a poison ivy-like rash. It underwent a phase 1 clinical trial.
**RAL colour standard** RAL colour standard: RAL is a colour management system used in Europe that is created and administrated by the German RAL gGmbH (RAL non-profit LLC), a subsidiary of the German RAL Institute. In colloquial speech, RAL refers to the RAL Classic system, mainly used for varnish and powder coating, but now for plastics as well. Approved RAL products are provided with a hologram to make unauthorised versions difficult to produce. Imitations may show a different hue and colour when observed under various light sources. RAL colour space system: RAL Classic In 1927, the German group Reichs-Ausschuß für Lieferbedingungen (Imperial Committee for Delivery and Quality Assurance) created a collection of forty colours under the name "RAL 840". Prior to that date, manufacturers and customers had to exchange samples to describe a tint, whereas from then on they could rely on numbers. In the 1930s, the numbers were changed uniformly to four digits and the collection was renamed "RAL 840 R" (R for revised); by around 1940 the four-digit system was in general use. Army camouflage colours were always recognizable by a "7" or "8" as the first digit until 1944. With tints constantly added to the collection, it was revised again in 1961 and changed to "RAL 840-HR", which consists of 210 colours and is in use to this day. In the 1960s, the colours were given supplemental names to avoid confusion in case of transposed digits. At the international furnishing fair imm Cologne, 13-19 January 2020, two new colours were presented in the Classic collection: RAL 2017 RAL orange and RAL 9012 Cleanroom White. "RAL 840-HR" covered only matte paint, so the 1980s saw the introduction of "RAL 841-GL" for glossy surfaces, limited to 193 colours. A main criterion for colours in the RAL Classic collection is that they be of "paramount interest". Therefore, most of the colours in it are used on warning and traffic signs or are dedicated to government agencies and public services (for example: RAL 1004 - Swiss Postal Service, RAL 1021 - Austrian Postal Service, RAL 1032 - German Postal Service). The first digit indicates the shade of the colour. RAL F9 This collection, which follows the naming of RAL Classic, was introduced in 1984. It is now made up of ten colours (RAL 1039-F9 Sand beige, RAL 1040-F9 Clay beige, RAL 6031-F9 Bronze green, RAL 6040-F9 Light olive, RAL 7050-F9 Camouflage grey, RAL 8027-F9 Leather brown, RAL 8031-F9 Sand brown, RAL 9021-F9 Tar black and RAL 6031-F9 HR Bronze green semi-matt), used by the Bundeswehr for military camouflage coating. RAL colour space system: RAL Design In 1993 a new colour matching system was introduced, tailored to the needs of architects, designers and advertisers. It started with 1,688 colours and was revised to 1,625 colours and again to 1,825 colours. The colours of RAL Classic and RAL Design do not intersect. RAL colour space system: Contrary to the preceding systems, RAL Design features no names and its numbering follows a scheme based on the CIELAB colour space, specifically cylindrical CIEHLC. Each colour is represented by seven digits, grouped in a triple and two pairs, representing hue (000-360 degrees, the angle in the CIELAB colour wheel), lightness (same as in L*a*b*) and chroma (relative saturation). The three numeric components of almost all RAL Design colours are multiples of 5, and the majority are divisible by 10.
RAL colour space system: Conversion from RAL Design number tuple to CIELAB A RAL Design tuple H L C (hue angle H in degrees, lightness L, chroma C) converts to CIELAB coordinates as

$$L^{*} = L, \qquad a^{*} = C\,\cos(h_{ab}), \qquad b^{*} = C\,\sin(h_{ab}),$$

where $h_{ab}$ is the hue angle H. RAL 210 50 15 converts to L* = 50, a* = −12.99, b* = −7.5, for instance. RAL Effect RAL Effect comprises 420 solid colours and seventy metallic colours. It is the first collection from RAL to be based on waterborne paint systems. RAL Digital RAL Digital is software that allows designers to navigate the RAL colour space.
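The conversion is simple enough to script. A minimal sketch (the function name is illustrative, not from any RAL software):

```python
import math

def ral_design_to_lab(hue_deg: float, lightness: float, chroma: float):
    """Convert a RAL Design tuple H L C to CIELAB (L*, a*, b*)."""
    h = math.radians(hue_deg)
    return lightness, chroma * math.cos(h), chroma * math.sin(h)

# RAL 210 50 15 -> (50, -12.99, -7.5), matching the worked example above.
print(ral_design_to_lab(210, 50, 15))
```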
**Look Nevada** Look Nevada: Look's Nevada, released in 1950, was the first recognizably modern alpine ski binding. The Nevada was only the toe portion of the binding, and was used with a conventional cable binding for the heel. An updated version was introduced in 1962 with a new step-in heel binding, the Grand Prix. These basic mechanisms formed the basis for LOOK bindings for over 40 years, changing mainly in name and construction materials. The Nevada toe pattern is almost universal among bindings today. History: Background In the immediate post-World War II era, most downhill ski bindings were of the "Kandahar" pattern of cable binding. This consisted of a metal cup at the front of the ski that kept the boot centred, with a leather strap buckled over the toe to hold it down into the cup. A long metal cable or spring ran around the back of the boot, over a flange protruding from the heel. The strap held the boot forward, keeping the toe in the cup and under the strap. The system was designed to keep the toe firmly in place while allowing the heel to rise off the ski. This allowed for a smooth cross-country striding motion. For downhill use, the cable was clipped down near the heel to keep the boot in firmer contact with the ski and allow some level of lateral control. The major problem with these bindings is that they did not release in the case of an accident. In particular, if the forward tip of the ski rotated to the side, the force was transmitted through the length of the ski to the boot, forming a huge moment arm. Even small forces could produce torques able to break the ankle or knee, and spiral fractures of the calf were common. This was not as much of an issue in cross-country skiing, where the heel was relatively free to move, but in downhill use with the cable clipped down it was a serious concern. In the 1950s it was estimated that a skier had a 1% chance of suffering an injury on any given day, and that 10% of skiers would suffer a fracture over a single season. In the immediate post-war era there were a few halting attempts to address this problem. However, most suffered from the problem that leather ski boots wore down quickly, so the mounting point between the binding and boot was subject to constant change. Some designs addressed this by having the user screw metal fixings onto the boot sole to provide a more solid mounting point, but these would only fit a single style of binding. In any event, they required constant adjustment and were often complex. Richard Spademan, inventor of the Spademan binding, would later dismiss them all, stating "Bindings were trash." Beyl's plate French sporting goods manufacturer Jean Beyl made one of the first attempts to solve the twisting fall problem. His design pivoted around a bearing under the foot, in order to ensure that torque did not build up to dangerous levels. Although it did not release the boot from the ski, it did release the force from the boot. To mount it, the boot was fastened to a metal plate, which was in turn cut into the upper surface of the ski in a mortise joint about a centimetre deep. The system was difficult to install, weakened the ski, and was also heavy. History: Beyl wanted a sexy name for the company, and took one from a US photo magazine. Look was formed in Nevers, France in 1948. The system saw limited sales, but was in use on the French ski team by 1950. Nevada Beyl was a perfectionist and was unhappy with his plate design.
What he wanted was a lightweight solution that was easier to mount, yet retained the ability to absorb lateral forces. History: His next design used a C-shaped piece that fit over the toe of the boot, eliminating the plate. The toe of a contemporary ski boot was essentially hemispherical, so the single clip kept the boot centred and, by riding over the top of it, pressed it down onto the ski. Under a pure sideways release scenario, the clip would rotate to allow the boot to exit. However, if the force was also to the front, as in the case of the ski tip catching a tree root or the front of a mogul, the pivot would be too close to the line of motion to allow easy rotation. To address this, the entire body of the binding was also pivoted. In this sort of release scenario, the binding as a whole would rotate, and eventually the force would be far enough to the side that the clip would be forced sideways. History: As before, Beyl wanted a US-sounding name for his new binding, and selected "Nevada". The binding was released in 1950, along with a Nevada-branded cable binding of conventional design. The Nevada toe was the first modern ski binding that worked safely with any unmodified boot, eschewing attempts to attach to the sole or use add-on plates or clips. The basic two-pivot design has become universal, and is used with only minor modifications to this day. History: Nevada II and Grand Prix Through the 1950s, Look's only real competitor in Europe was Marker, who introduced their Duplex design in 1952. The Duplex improved on the Nevada by using two clips to hold the toe, rather than a single cup. By locating the clips at the corners of the binding, even falls that created purely forward pressure would cause it to open: the clips would be forced outwards and create sideways forces against the main pivot under the binding. In 1953 it was replaced in production by the Simplex, which used a single cup like the Nevada, but retained the action of the Duplex and allowed release in a straight-forward fall. Look and Marker competed for the majority of the European market through the 1950s and into the early 1960s. In 1962 Look dramatically updated their line with the introduction of the Nevada II. The new design used a single pivot point under the binding as before, but replaced the rotating cup with two longer fingers. The action was similar to that of the original Marker Duplex design. However, the use of the two fingers had two effects: it allowed a much wider range of boot shapes to be accommodated, and it allowed the boot to travel much further within the binding before it would release. Other bindings with shorter travel were subject to "pre-release", where a short, sharp force would pop the binding open even when the movement would not have been enough to cause damage to the leg. The longer travel allowed the Nevada II to be safely used at much lower tension settings, improving the chances of it releasing when needed while still preventing pre-release. History: At the same time, Look introduced their Grand Prix heel binding. This was essentially one half of a Nevada system, turned sideways so it released vertically instead of to the sides. With cable bindings, the heel was normally held down by looping the cable over the heel or by cupping it in a semi-circular indentation on the back of the heel.
To fit these different attachment styles, and because boots had no standardized size or shape, the rotating portion of the Grand Prix was mounted on a bracket that lifted it above the heel flange, allowing the user to adjust its height. The actual binding point was a bronze roller sized to be similar to a standard cable; it could clip on top of the heel, or fit into the indentation cut into the heel of some boots. History: The Grand Prix offered step-in convenience; to put the binding on, the skier inserted their toe under the Nevada II, then stepped down at the heel. The sole of the boot would catch a small plate or rod extending from the bottom of the binding, rotating it until it flipped up to lie vertically behind the skier's leg. During this motion the roller would catch the sole of the boot and lock it into position. Like the Nevada toe, a strong force rotating the boot, this time forward, would cause the binding to release. History: Further improvements As the value of low-friction devices in aiding boot release became clear in the late 1960s, Look modified the Nevada II into the Nevada T to take advantage of the Teflon pads that were becoming common in the industry. In addition to a pad on top of the ski under the toe, Look also added a second, smaller pad where the very front of the boot pressed under the Nevada's toe clips. The pad was shaped to force the boot sideways in the event of a forward fall, further adding to the forces trying to release the toe. History: This basic Grand Prix system was later improved with the addition of a rotating platform under the heel of the boot, known as the "turntable", which stopped the boot from jamming on the heel release's arms when the toe was releasing to the side. These improvements were released as the Look Nevada N17 in the late 1960s. The name now referred to both the toe and heel release as a pair, and the separate Grand Prix name was dropped. The N17 was replaced by the similar N57 and N77 from the mid-1970s, which were improved in a number of minor ways, notably the option of a ski brake just behind the toe binding. History: The Nevada patents ran out in 1976, and similar models with long-travel toes quickly appeared from other binding manufacturers, starting with Salomon. These replaced earlier designs, which generally used a single cup-shaped piece that fitted over the entire toe flange (as opposed to the toe itself, as in the original Nevada). These had the disadvantage of requiring careful adjustment to fit the height of the toe flange, and could be affected if snow on the bottom of the heel lifted the toe upward. Today the Nevada-style "two finger toe" is universal among modern bindings. History: The N77, in turn, gave rise to the 89 and 99, a series of bindings for different skill levels, collectively referred to as the Look Pivot. The Pivot also introduced a button directly in front of the toe under the binding arms. When the boot slid forward along the ski, it would press on the button, which released tension in the binding and made it much easier to release. This was a further step in the series of design changes improving the forward release capabilities of the toe. The ultimate evolution was the XM version, which also allowed the toe piece to rotate directly upward, as in a backward fall. History: Current models Various models of the Pivot were Look's primary offering into the 1990s. When Look was purchased in 1994 by Rossignol, they re-branded the Pivot under their own name.
Look-branded versions re-appeared in 2009. Throughout its long history, Look's only other major binding design was the Look Integral, which was aimed at ski rental shops.
**Herbig Ae/Be star** Herbig Ae/Be star: A Herbig Ae/Be star (HAeBe) is a pre-main-sequence star – a young (<10 Myr) star of spectral type A or B. These stars are still embedded in gas-dust envelopes and are sometimes accompanied by circumstellar disks. Hydrogen and calcium emission lines are observed in their spectra. They are 2-8 solar mass (M☉) objects, still in the star-formation (gravitational contraction) stage and approaching the main sequence (i.e. they are not yet burning hydrogen in their centers). Description: In the Hertzsprung–Russell diagram, Herbig Ae/Be stars are located to the right of the main sequence. They are named after the American astronomer George Herbig, who first distinguished them from other stars in 1960. Description: The original Herbig criteria were: spectral type earlier than F0 (in order to exclude T Tauri stars); Balmer emission lines in the stellar spectrum (in order to be similar to T Tauri stars); projected location within the boundaries of a dark interstellar cloud (in order to select genuinely young stars near their birthplaces); and illumination of a nearby bright reflection nebula (in order to guarantee a physical link with a star-formation region). There are now several known isolated Herbig Ae/Be stars (i.e. not connected with dark clouds or nebulae). Thus the most reliable criteria now are: spectral type earlier than F0; Balmer emission lines in the stellar spectrum; and infrared radiation excess (in comparison with normal stars) due to circumstellar dust (in order to distinguish them from classical Be stars, whose infrared excess is due to free-free emission). Sometimes Herbig Ae/Be stars show significant brightness variability, believed to be due to clumps (protoplanets and planetesimals) in the circumstellar disk. In the lowest brightness stage the radiation from the star becomes bluer and linearly polarized (when a clump obscures the direct starlight, the relative contribution of light scattered from the disk increases – the same effect that makes our sky blue). Description: Analogs of Herbig Ae/Be stars in the smaller mass range (<2 M☉) – pre-main-sequence stars of spectral types F, G, K and M – are called T Tauri stars. More massive (>8 M☉) stars are not observed in the pre-main-sequence stage, because they evolve very quickly: by the time they become visible (i.e. when the surrounding circumstellar gas and dust cloud has dispersed), hydrogen is already burning in their centers and they are main-sequence objects. Planets: Planets around Herbig Ae/Be stars include: HD 95086 b around an A-type star, and HD 100546 b around a B-type star.
**Lithuanian Braille** Lithuanian Braille: Lithuanian Braille is the braille alphabet of the Lithuanian language. Alphabet: The alphabet is as follows: most of the print letters with accents are derived in Lithuanian Braille by adding dot 6 to the base letter; those whose base letter already contains dot 6 are derived through inversion instead (cf. Czech Braille, Esperanto Braille). Ū uses the international convention for a second u. Ž is unusual, but perhaps forms a set with s, š, z (cf. Hungarian Braille). Several of these conventions are used in Polish Braille.
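As an aside on mechanics (an illustration, not part of the alphabet table above): in the Unicode braille block, dots 1-6 map to the low six bits of the code point offset from U+2800, so "adding dot 6" to a base cell is a single bit operation. A minimal sketch; which Lithuanian letter a given result represents should be checked against the actual alphabet table:

```python
def add_dot6(cell: str) -> str:
    """Set dot 6 (bit 5 of the Unicode braille pattern) on a cell."""
    return chr(ord(cell) | 0b100000)

# The base cell for 'c' (dots 1,4) gains dot 6:
print(add_dot6("\u2809"))  # U+2809 (dots 1,4) -> U+2829 (dots 1,4,6)
```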
**Telly (home entertainment server)** Telly (home entertainment server): The Telly home entertainment server is a range of computer systems designed to store, manage, and access all forms of digital media in the home. Based on Interact-TV's Linux Media Center software, it provides user-managed libraries for music, photos, and all forms of video, from recorded television programming to DVDs. Telly (home entertainment server): Expandable hard drive configurations accommodate growing libraries of home entertainment content and provide an alternative to desktop and laptop PCs for entertainment content. Networked configurations distribute content shared from all units throughout a network and allow recording at each location. Content on Telly systems appears to both Windows and Mac PCs as local networked volumes and can be accessed over the network. The Telly server web site provides management of and access to music, photos, and video. Telly (home entertainment server): Telly home entertainment servers use a trackball-driven user interface and are offered with full high-definition television (HDTV) outputs, built-in digital video recorder (DVR) capabilities, and a variety of other accessories. As home entertainment servers, Telly systems differ from traditional media center systems in that they are designed from inception to be configured and operated from a TV-based menu and, as true servers, permit integrated file sharing and secure, volume-managed, expandable storage. Key Features: Telly servers are available in a range of models, from small set-top-sized units to rack-mount systems. Each system contains a Linux OS and motherboard and one or more hard drives. All models may additionally have tuners and CD/DVD burners for both importing and archiving the owner's content. Key Features: Video Functionality Users can watch or save DVDs in a Telly server video library for full-resolution, progressive-scan DVD playback. Facilities are provided for watching live, delayed or recorded TV, home videos, and Internet-downloaded video, all of which are stored in the Telly Video Library. Video Library contents can be viewed and sorted by cover art navigation or in-depth program information. Music DVDs (such as recorded concerts) in the Video Library can also be copied as music tracks into the Music Library; this permits mixing tracks from a music DVD with tracks from the Music Library. Key Features: HD Telly systems upsample the native resolution of recorded TV and regular DVDs from 480i to 720p. Key Features: When users stop playback of any video, a Resume button allows any networked Telly server to pick up where it left off. Telly home entertainment servers support all common non-DRM video formats including AVI, DivX, MPEG-1, MPEG-2 and MPEG-4. Networking allows users to move media between Telly systems, PCs, or portable media players. Video files can be imported to Telly from other PCs on the network. Internet video sources such as YouTube can be played on Telly servers. Music Functionality Jukebox capabilities play single songs or queue any music collection, genre, artist, or album. Users can sort the Jukebox as traditional lists or by album cover art and build collections for playback.
Key Features: Playlists of selected music can be played at any time or burned to a CD. Users can view music animations during playback and interleave music DVD tracks from the Video Library. Users can set the music quality when storing CDs with the built-in CD player, including the option of lossless audio encoding (a format that maintains the original CD quality while still creating files that are compressed to save disc space). Support is provided for dragging and dropping music collections from PCs and Macs to the Telly server's Import Music folder in most popular non-DRM formats, including MP3, AAC, and FLAC. Telly systems provide a built-in web server allowing any computer to access the Music Library to manage tracks or edit album details. Telly servers can stream music to any browser, and Telly systems support internet radio playout. Photo Functionality Telly servers scale all photos to fit the connected TV screen and support full 1080 HDTV resolution for high-quality display of Photo Library contents. Telly Photo Library contents can be browsed on the TV screen as thumbnails, or as automatically or manually advanced slide shows in predetermined or random order. The built-in web site provides the ability to create, add to, and edit Photo Library contents. Networking Functionality Telly servers can be networked to grow as users' digital media libraries grow; most systems provide 10/100/1000Base-T Ethernet connectivity and can be outfitted with wireless or power-line networking capability. Key Features: Each Telly server can act as a standalone system to manage and access digital media. Each system provides a set of video and audio outputs to connect directly to a TV (either standard definition or high definition) and an audio system. The analog stereo and digital audio outputs can be connected to a TV's audio input or to an audio receiver with surround sound. Built-in network sharing provides access to, and copying of, digital media between Windows or Mac computers and Telly servers using any common web browser and network folder sharing, including tools like My Network Places on Windows and Bonjour on Macs. All that entertainment content can be made available to other rooms of the house by adding a TellyVizion playback unit to the home network over any Ethernet connection between the Telly server and the TellyVizion. Key Features: Systems can be configured with a central Telly server and a TellyVizion as discussed above, or by adding a second Telly server, which provides the following advantages: two servers double (or more, depending on the Telly server configuration) the amount of storage for digital media.
Key Features: Content on each Telly server can be personalized to exclude shows you don't want to share with your kids (like The Sopranos) or shows you don't care to watch (like The Wiggles); you can keep the content of each server separate by turning off the sharing capability, and content on individual servers can be isolated to prevent accidental deletion. As time passes and tastes change, users can always turn sharing back on to access the content from any Telly server on the home network. With large digital media libraries, users can further extend storage by adding a TellyRAID system, providing any amount of storage plus the added protection of RAID in case a hard disk drive fails; TellyRAID connects to one master Telly server, which then provides the digital media stored on the TellyRAID to other TellyVizions or Telly servers throughout the house. TeleMinder: TellyMinder (under development) The TellyMinder is a small desktop computer designed to fit into any entertainment center and display pre-defined messages on a television screen. TellyMinder has a web-based interface that allows the sender (loved ones, medical professionals, etc.) to enter messages (reminders, detailed instructions, or video clips) and then schedule these messages to be displayed at a specified time. Messages can include reminders to take a specific medication at a given time, appointment confirmations, or a simple check-in to see how someone is doing. TeleMinder: Once the message is displayed, the recipient (such as an elderly person or shut-in) can confirm receipt of the message. If the message receipt is not triggered within a specified timeframe, the sender receives an e-mail alert notifying them of possible problems so the recipient can be contacted directly. TeleMinder: The TellyMinder will also have a web-cam option so the sender can check in on the recipient and get visual feedback on the recipient's status. Future options include the ability to connect monitoring equipment via TellyMinder's wireless networking so that vital information including heart rate, blood pressure, and oxygen levels can be monitored in real time. This would also allow TellyMinder to trigger emergency services if the vital signs fall to unsafe levels. TeleMinder: TellyMinder leverages significant intellectual property (IP) rights in the following areas: a user interface library that combines broadcast-quality graphics, animation, and streaming media; an intelligent networked media management system; an enhanced television synchronization implementation over a TCP/IP network; and store-and-forward streaming of MPEG video over TCP/IP networks. TellyMinder uses a synchronization mechanism based on XML that allows content delivered over the Internet to be coordinated with the display of broadcast television. This dramatically improves response time for the user, removing the objectionable download delays which can occur when information is requested over the Internet. TeleMinder: Software Architecture The following are the core components of the TellyMinder software platform: tmiGui – This layer of software developed for TellyMinder allows for the design of user interfaces that are television- and entertainment-centric. Building on the technical team's extensive knowledge of broadcast graphics design, a sophisticated user interface toolkit has been developed.
tmiApplicationManager – This software layer developed for TellyMinder provides the management features required by applications, such as where windows should be located and how applications should start and stop. Networked Media Management – TellyMinder has an integrated small and fast database and a newly designed, sophisticated but low-maintenance media management system. Linux Operating System – Linux is the operating system that TellyMinder is currently built upon, although all of TellyMinder's current application programming interfaces (APIs) have been designed so they can be ported to alternate operating systems if market dynamics make this necessary. History: Corporate Info Interact-TV, Inc. is a Delaware C corporation, founded in 2000 by a group of television professionals and originally headquartered in Westminster, Colorado. Interact-TV, Inc. is currently a public company quoted on the OTC Markets under the symbol ITVI, and headquartered in Delaware. Interact-TV was formed to develop software products that centralize both the entertainment and information experience. Interact-TV products blend digital media, broadband, and home networking, and bring them to the end user through a television interface. Interact-TV products focus on the increasing availability of broadband access to the home and a pervasive demand for higher-value entertainment. Its initial product, the Telly MC1000 Digital Entertainment Center, began shipping in 2002 as the first fully customizable and expandable digital entertainment system. This was the first integrated system to allow users to access most forms of digital entertainment, including broadband Internet, cable and satellite television, digital audio and video entertainment, and digital home networking. History: The Telly home entertainment server product line was continually enhanced and includes a complete line of servers, available through a network of dealers. The Interact-TV Telly product line established a prestigious business-to-business customer base, including a significant partnership in November 2005 with Turner Broadcasting System (TBS) for over 500 Telly product units and special software work for video-on-demand (VOD) trials and services. History: In 2009 Interact-TV, Inc. acquired Viscount Records, Inc. and Viscount Media Trust from the Medley family, in a deal structured by investment banker Stan Medley. Shortly thereafter, Interact-TV, Inc. transferred all of its Telly properties and operations (except for the TellyMinder system, which was and is still under development) to JDV Solutions Inc., a spinoff of Interact-TV Inc. Since late 2009, Telly systems (except for the TellyMinder) have been sold and serviced by JDV Solutions. History: Since 2010 Interact-TV, Inc. has shifted its focus more to the production side of entertainment with the formation of Pocket Kid Records and a web channel in 2010. Pocket Kid Records has operated successfully with the band Dead Sara under contract, and has had less success with the development of its web-channel and TellyMinder properties. Home Entertainment Server Market Overview: Interact-TV introduced their first Telly server in 2001; it was the first commercially available system designed from the ground up as a general-purpose home entertainment server. Built on top of a tuned Linux OS, these servers run efficiently on energy-efficient CPUs. Many models are fanless for near-silent operation in home entertainment environments or even bedrooms.
Windows Media Center Edition is built on top of a productivity-centered operating system burdened by inefficient layers of legacy user interface. This approach confronts users with configuration and operational requirements ill suited to an entertainment-oriented experience. Media Center Edition systems require high-performance computing platforms, which are usually power-hungry and extensively fan-cooled. Home Entertainment Server Market Overview: Apple TV is a solution which utilizes a "media adapter" approach, placing a TV-enabled device on the home network to bridge the connection from desktop systems and to provide a conduit for content delivery from the iTunes Store. A similar "media adapter" approach is also available for Media Center Edition systems from a variety of manufacturers (e.g., Linksys, Logitech, and others). Such adaptations arise primarily from the difficulty of creating small, quiet computers for lean-back environments that can run full entertainment software on top of lean-forward, productivity-oriented operating systems. Operational Requirements: Telly home entertainment servers require an Internet connection for many features. Telly systems network over Ethernet to other computers and work with most cable set-top boxes and both Dish and DirecTV satellite receivers via a simple infrared interface.
**Private library** Private library: Private libraries are libraries that are privately owned and are usually intended for the use of a small number of people, or even a single person. As with public libraries, some people use bookplates – stamps, stickers or embossing – to show ownership of the items. Some people sell their private libraries to established institutions such as the Library of Congress, or, as is often the case, bequeath them after death. Much less often, a private library is maintained intact long after the death of the owner. History: The earliest libraries belonged to temples or administrative bodies, resembled modern archives, and were usually restricted to nobility, aristocracy, scholars, or theologians. Examples of the earliest known private libraries include one found in Ugarit (dated to around 1200 BC) and the Library of Ashurbanipal in Nineveh (near modern Mosul, Iraq), dating back to the 7th century BC. History: Mesopotamia Mesopotamia was home to a great number of private libraries, many with extensive collections of over 400 tablets. The nucleus of these private libraries was primarily texts which the proprietors had transcribed themselves from the time they acquired their education in the art of the scribe. As insignificant as these libraries may seem, they established the basis for the collection of the Library of Ashurbanipal. History: Egypt While private libraries in ancient Egypt were not common, they did exist to some extent. One of the problems in identifying potential individual libraries is that it is often difficult to distinguish between a personal library and one associated with a temple. However, many personal libraries survived over time, and are perhaps more numerous than traditionally assumed. Several private tombs have yielded copious texts whose content is scholarly in nature. In addition, extensive clusters of papyrus scrolls have been unearthed in association with domestic settings, confirming that some type of library endured there. The Middle Kingdom Period (2055-1650 BC) offers the best clues to the presence of private libraries in ancient Egypt. History: For example, one sepulcher contained a chest with books on bureaucratic relations, hymns, and incantations. In total, the cache revealed a 20-volume library. A rather large collection from the Thirteenth Dynasty suggests a library belonging to a doctor or necromancer. In addition to general texts on assorted literature, there is a profusion of discourses on medicine and magic. A private library of considerable size is attributed to Kenherkhepshef, a scribe. This library comprised nearly 50 manuscripts, accommodating a collection of disparate subjects from correspondence missives to astrological material such as incantations and dream interpretations. This particular library spanned many generations, being passed from one family member to the next, which gives an impression of the significance the library held. A manuscript known as the Westcar Papyrus from this same period alludes to an individual whose residence includes space for a private library. The text of the manuscript is a fanciful narrative; however, it shows that ordinary citizens were literate and accumulated books for their own use. One Middle Kingdom tomb, associated with a healer and lector priest, contained over 20 books, one of which was the now-famous Tale of the Eloquent Peasant.
Finally, a private library in a New Kingdom tomb at the site of Deir el Medina housed books on medicine as well as on love poetry and wisdom literature. History: Ancient Greece Around 600 BC, library and archival collections flourished in ancient Greece. Within the next three centuries the culture of the written word there rose to a pinnacle. Although public libraries available to all citizens were established in some cities, such as Athens, most citizens could not read. However, private book collections owned by the elite and leading citizens were growing, along with the glorious homes and structures used to store them. Private libraries were built not only by the wealthy, but also by professionals who needed information close at hand, including doctors and scholars. Notable scholarly figures like Euripides, Herodotus, Thucydides, and even Plato had their own private libraries with large collections. One of the most notable figures in ancient Greece with his own private library was Aristotle. Establishing his personal collection as a library at the Lyceum, Aristotle allowed his students and fellow scholars to use it. After his death, his collection grew to include the work of Theophrastus and student research. The collection is thought to have been scattered by Neleus after Theophrastus's own death. While most of the collection was supposedly brought to Rome and Constantinople, other pieces were sold to the Library of Alexandria, only to be destroyed later with that library. History: Ancient China There were numerous private libraries in ancient China. These institutions were called "book collection houses" in Chinese, a term widely used from the Song Dynasty onward. Under the influence of small-farmer consciousness, the patriarchal system, the scarcity of books, and other factors, a "book hiding" mentality was then dominant. Not all private libraries in ancient China were unavailable to the public: some owners made their collections open, mostly to young men who were studying for the civil service examinations, and these became known as "academy" libraries. History: Ancient Rome The earliest libraries to appear in Rome were of the private type and were most often procured as spoils of war. For example, when the Roman general Aemilius defeated the Macedonian king Perseus in 168 BC, the only plunder he wished to possess was the king's private library. Likewise, in 86 BC, the Roman general Sulla appropriated the library of the infamous Greek bibliophile and kleptobibliophile Apellicon of Teos. Finally, around 73 BC, Lucullus removed and brought back to Rome the private library of King Mithridates VI of the Pontus region. Nearly every house of nobility had a library, and virtually every one was split into two rooms: one for Latin texts and one for Greek texts. Rome may very well have been the birthplace of specialized libraries, with evidence of early medical and legal libraries. In Rome one can also see the beginnings of book preservation: one author proposed that a library is better situated facing the rising sun in the east, in order to ensure that it does not succumb to bookworms and decomposition. Some examples of Roman-period private libraries include the Villa of the Papyri, the House of Menander, the House of Augustus, and the Domus Aurea. In the 5th century BC, on the island of Cos outside the city of Pergamum, a medical school complex with a library was built in the sanctuary of Asclepius.
This is the first medical school known to have existed, and consequently its library can be credited as the first specialized library. History: Small private libraries called bibliothecae were responsible for advancing the larger public libraries of the Roman world. The design of these libraries was rather a novelty, and became the archetype of later institutions, in particular the libraries of imperial estates. The form of private libraries during the late Republic and early Empire imitated Greek architectural characteristics. The library itself was a repository of diminutive proportions whose purpose was to accommodate books. The books were supported on wooden shelving units or were kept in cupboards set against the walls. Rooms annexed to the library were used primarily as reading rooms. The configuration of these libraries was rectangular, and they are considered more of a niche than a separate room because they were always extensions of other structures. Acquiring books for personal use in order to cultivate oneself was all the rage in the Roman world, partially galvanized by monarchs who were often prolific writers. The satirist Martial notes that it was quite accepted for the houses of the Roman elite to harbor a library. One reason for the abundance of private libraries was the reinforcement of enlightenment and the perpetuation of literary traditions. It was also not uncommon for an individual to assemble a library in order to ingratiate himself with an emperor; the writer Lucian of Samosata denounced one such individual who exploited his library to cajole the emperor. The emperor Augustus admired the works of authors and was a prolific author himself. He encouraged the advancement of the library as an institution by keeping a private library of his own. The library was among the first to incorporate Greek and Hellenistic architectural features. The shape of the library was in the recognizable rectangular style. This library marked the establishment of a binary collection, with individual rooms supporting the literatures of Greek and Roman writers respectively. Both the philologist Aulus Gellius and the emperor Marcus Aurelius acknowledged the existence of a private library housed in the Domus Tiberiana. While Aurelius makes a passing reference to a bibliothecarius, or palace librarian, Gellius commented on how he and the author Sulpicius Apollinaris engaged in erudite discussion within the library. The Roman sovereign Hadrian had a fondness for all types of literature; his private sanctuary, the Villa Adriana, had its own library. Like the private library of Augustus, Hadrian's collection comprised paired Greek and Latin holdings. It is difficult to ascertain how many manuscripts these libraries held; however, one assessment speculates that a single wooden cabinet may have held at least 1,500 scrolls. During the tenure of Nero, an affluent residence was not complete without a library; in fact, libraries were as important as baths. The third-century biographer Capitolinus remarks on a private library owned by the emperor Gordian II. Apparently, the original owner of this library was the father of the scholar and polymath Quintus Serenus Sammonicus, of whom Gordian was a student. Upon the death of Sammonicus in 212 AD, the library of some 62,000 manuscripts was entrusted to Gordian. It is not clear what happened to this library, but it has been suggested that it was absorbed by the libraries of the Palatine, the Pantheon, or the Ulpian library.
It is also conceivable that it was dispersed during the upheavals of the third century. History: Renaissance Europe The Renaissance brought with it a renewed interest in conserving the new ideas being put forth by the great thinkers of the day. Kings throughout Europe created libraries, some of which have become the national libraries of today. In addition, wealthy individuals began establishing and developing their own private libraries. History: The National Library of France (French: Bibliothèque nationale de France) in Paris began in 1367 as the Royal Library of King Charles V. In Florence, Italy, Cosimo de' Medici had a private library which formed the basis of the Laurentian Library. The Vatican Library was also established in the 15th century; Pope Nicholas V helped to renew it by donating hundreds of personal manuscripts to the collection. The creation and expansion of universities prompted the gifting of private libraries to university libraries. One notable donation was made by Humphrey, Duke of Gloucester to Oxford University in the early 15th century. History: Colonial North America Private libraries were a characteristic of the first colonists to North America, rather than a peculiarity. For example, 27 libraries are known to have existed in Plymouth Colony alone between 1634 and 1683. Books and the idea of establishing libraries in the new world had always been a strong conviction of the early settlers. William Brewster was one of the many passengers on board the Mayflower on its maiden voyage to America who transported his library, which consisted of nearly 400 volumes. Even as early as 1607, such libraries were flourishing in English-settled Jamestown. The Virginia colony leader John Smith described a private library owned by the Reverend Good Master Hunt which was incinerated in a fire that destroyed much of the town. An analogous finding from 1720 to 1770 in Maryland records that over half of the population had at least a Bible in their libraries; in Virginia, there were close to a thousand private libraries, each with a typical assemblage of 20 books. The distinguished military administrator Miles Standish owned 50 books, while the governor of Connecticut, John Winthrop the Younger, carried 1,000 books with him on his voyage to the recently established territories in 1631. George Washington's proclivity for reading and collecting books was also well known. Washington's personal library was originally housed at his estate at Mount Vernon, Virginia. The library consisted of 1,200 volumes, and a catalog of the titles included in it was created before his death in 1799. During the mid-nineteenth century, nearly all of the former collection was purchased by the Massachusetts book and manuscript merchant Henry Stevens. Stevens subsequently decided to sell the collection to the British Museum in London; however, interested parties from Boston and Cambridge, Massachusetts procured the collection and bequeathed it to its current home, the Boston Athenæum. Washington's library encompassed books in many disciplines such as economics, geography, history, and religion. Some of his most beloved volumes were those that pertained to agriculture, since he was an avid farmer.
One work that he embraced dearly was the play Cato, a Tragedy, written in 1712 by the English playwright Joseph Addison, because he felt a connection between the main character Cato and his own constant battle with totalitarianism. In addition to these subject areas, the library accommodated diaries, travel writings, and over 100 federal correspondence letters. Like Washington, Thomas Jefferson was a prolific collector of books and a voracious reader. He owned three libraries over the course of his lifetime. The first was maintained from ages 14 to 26 (1757-1770) at his birthplace of Shadwell, Virginia, about five miles west of Monticello. It consisted of 40 volumes that he inherited from his father. Since his father had been a surveyor, the library contained a plethora of maps and topographical monographs, though Jefferson added quite a few volumes to the library from his studies. By 1770, Jefferson had acquired over 300 volumes, worth an estimated 200 pounds. During the period of the American Revolution in the 1780s, Jefferson amassed a collection of books that numbered in the thousands. This collection became his library at his home in Monticello. Over 2,000 books were purchased during the time he spent in France in the late 1780s. Because Jefferson was fluent in French and Latin, the library contained numerous books in these languages, as well as 15 others. The collection was abundant in books on law, philosophy, and history, but it accommodated volumes on many subject areas such as cooking, gardening, and more exotic avocations like beekeeping. Unlike some of his contemporaries, Jefferson traveled very little; as such, the library became his best travel guide. Even though the library went through multiple stages throughout his lifetime, by 1814 he was known to have the single greatest private library in the United States. When the Library of Congress was consumed by fire, Jefferson persuaded the library to purchase his collection of between nine and ten thousand books in order to compensate for the lost collection. Congress accepted a portion of Jefferson's library (6,487 volumes) in 1815 for the cost of $23,950 (equivalent to $382,939 in 2022). The figure was obtained by calculating the number of books along with their dimensions, though Jefferson insisted that he would agree to any price. He remarked, "I do not know that it contains any branch of science which Congress would wish to exclude from this collection". December 1851 brought a second fire to the Library of Congress, which destroyed over 60% of the collection acquired from Jefferson. Jefferson had assembled a succeeding library of several thousand volumes; this second library was put up for auction and sold in 1829 in order to alleviate his indebtedness. Though Jefferson is recognized most for the breadth of his library, its most astounding characteristic is how it was cataloged. While most libraries during this period in American history classified their holdings alphabetically, he chose to catalog his collection by subject. His method of classification was based on a modified version of Lord Bacon's table of science: memory, which included history; reason, which included philosophy; and imagination, which included the fine arts. Jefferson often disregarded his own classification scheme and shelved books according to their size. The most recognizable individuals in colonial North America were proprietors of substantial personal libraries.
John Adams, for example, owned more than 3,000 volumes, which were entrusted to the Boston Public Library in 1893. He was not only a bibliophile but an amateur librarian; he maintained his collection fastidiously and even opened his library to the public. The statesman James Logan was a contemporary of Benjamin Franklin, with whom he developed a relationship over a shared passion for books. According to Logan, there was nothing more important than the acquisition of knowledge. His appetite for learning led to the establishment of a private library of nearly 3,000 titles, acknowledged as one of the largest in colonial America. In 1745, Logan converted his private library into a public library, the first structure in America to be recognized as a library for the public. Benjamin Franklin, who was instrumental in establishing the first subscription library in North America, was the owner of a private library of considerable proportions. This collection is not well documented, though a contemporary of Franklin, Manasseh Cutler, observed the library firsthand. Cutler noted, "It is a very large chamber and high studded. The walls were covered with book shelves filled with books; besides there are four large alcoves, extending two-thirds of the length of the chamber, filled in the same manner. I presume this is the largest and by far the best, private library in America". No catalog of the treasures held in Franklin's library survives; however, his will contained a register which included some 4,726 titles. History: Modern era Private libraries in the hands of individuals have become more numerous with the introduction of paperback books. Some nonprofit organizations maintain special libraries, which are often made available to librarians and researchers. Law firms and hospitals often maintain either a law or medical library for staff use. Additionally, corporations maintain libraries that specialize in collections pertaining to research specific to the areas of concern to that organization. Scientific establishments within academia and industry have libraries to support scientists and researchers. These libraries may not be open to the public. Library (domestic room): The word library also refers to a room in a private house in which books are kept. Generally, it is a relatively large room that is open to all family members and household guests, in contrast to a study, which also often contains a collection of books but is usually a private space intended to be used by one person. Famous private libraries: Queen Elizabeth II's library in Windsor Castle Tianyi Pavilion – the oldest private library in Asia; located in Ningbo, Zhejiang, China Library of Sir Thomas Browne, disposed of by auction in 1711 Bibliotheca Lindesiana Hakim Syed Zillur Rahman Library, located in Aligarh, India Library of the History of Human Imagination, Jay Walker's private library in Ridgefield, CT The Library of Rudolf Steiner George Washington Vanderbilt's library in Biltmore Estate
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Marvel Cinematic Universe tie-in comics** Marvel Cinematic Universe tie-in comics: The Marvel Cinematic Universe tie-in comic books are limited series or one-shot comics published by Marvel Comics that tie into the films and television series of the Marvel Cinematic Universe (MCU). The comics are written and illustrated by a variety of individuals, and each one consists of 1 to 4 issues. They are intended to tell additional stories about existing characters, or to make connections between MCU projects, without necessarily expanding the universe or introducing new concepts or characters. Marvel Cinematic Universe tie-in comics: The first MCU tie-in comics to be published were Iron Man: Fast Friends, The Incredible Hulk: The Fury Files, and Nick Fury: Spies Like Us, all in 2008. They were followed by an adaptation of Iron Man in 2010, along with Iron Man 2: Fist of Iron (2010), Iron Man 2: Public Identity (2010), Iron Man 2: Agents of S.H.I.E.L.D. (2010), Captain America: First Vengeance (2011), Captain America & Thor: Avengers (2011), The Avengers Prelude: Fury's Big Week (2012), The Avengers Initiative (2012), The Avengers Prelude: Black Widow Strikes (2012), and an adaptation of Iron Man 2 (2012). Comic tie-ins for Marvel's television series began in 2014 with Agents of S.H.I.E.L.D.: The Chase, followed by Jessica Jones (2015). Marvel Cinematic Universe tie-in comics: Marvel changed its approach to film tie-in material in 2012, retroactively dividing the tie-in comics into those that exist within the MCU continuity, and those that are merely inspired by the films and television series. Since then, Iron Man 3 Prelude (2013), Thor: The Dark World Prelude (2013), Captain America: The Winter Soldier Infinite Comic (2014), Guardians of the Galaxy Infinite Comic (2014), Guardians of the Galaxy Prelude (2014), Avengers: Age of Ultron Prelude – This Scepter'd Isle (2015), Ant-Man Prelude (2015), Ant-Man – Scott Lang: Small Time (2015), Captain America: Civil War Prelude Infinite Comic (2016), Doctor Strange Prelude (2016), Doctor Strange Prelude Infinite Comic (2016), Black Panther Prelude (2017), and Avengers: Infinity War Prelude (2018) have been released in the former category, along with film adaptations of Thor (2013), Captain America: The First Avenger (2013), The Avengers (2014–15), Iron Man 3 and Captain America: The Winter Soldier (2015–16), Guardians of the Galaxy (2017), Captain America: Civil War (2017), The Incredible Hulk (2017), Thor: The Dark World (2017), Ant-Man (2018), and Avengers: Infinity War (2018–19). Development: In 2008, Marvel released Iron Man: Fast Friends, a comic prequel to Iron Man, for which writer Paul Tobin was given a broad outline and some "temporal staging" so as to allow the comic to tie into the film. Later that year, The Incredible Hulk: Fury Files, which serves as a prequel to The Incredible Hulk, was released, detailing an encounter between the Hulk and Nick Fury, characters who had not yet been seen together in the MCU. 
Writer Frank Tieri noted that the tie-in comics "provide Marvel with the opportunity to do a lot of different things" that other media do not, including the exploration of non-superhero genres and the reintroduction of older characters.Alejandro Arbona, the Marvel editor tasked with overseeing the 2010 tie-in comics Iron Man 2: Public Identity and Iron Man 2: Agents of S.H.I.E.L.D., explained that Marvel "want to show readers more of that world, that connective tissue between all the movies, and a little bit more of how the characters interact", so the publishing side worked with Brad Winderbaum, Jeremy Latcham, and Will Corona Pilgrim at Marvel Studios to decide which concepts should be carried over from the Marvel Comics Universe to the Marvel Cinematic Universe, what to show in the tie-in comics and what to leave for the films, and how to "make these stories as strong as possible" from their experience making the films.For Marvel's The Avengers in 2012, Marvel's senior vice-president of sales David Gabriel described a "more focused" approach to tie-ins than previously, with the intention to reach fans of 'all walks of life'. This was echoed by Rich Thomas, global editorial director at Disney Publishing, who wanted the Avengers program to be "all things to all people. Just like the film, from the youngest reader...to the Marvel enthusiast." Since then, many of the tie-ins have had the red 'Avengers' stamp on the cover. Pilgrim, the creative director of research and development at Marvel Studios, confirmed that the previously released Public Identity, Agents of S.H.I.E.L.D., and First Vengeance, were all official MCU stories, with the other previously released tie-in comics considered to be inspired by the MCU only. Development: Comic writer Fred Van Lente stated in 2013 that he had proposed a regular comic series set within the MCU to Marvel, but they wished to keep all possibilities open for potential film and television development. He said that this was also the reason why Marvel does not want writers to introduce new elements to the MCU through tie-in comics. In March 2014, Pilgrim confirmed that the MCU Infinite Comics were officially canon. That July, the MCU tie-in comics expanded to television tie-ins with the release of Agents of S.H.I.E.L.D.: The Chase, a comic inspired by the first season of Agents of S.H.I.E.L.D. In November, Marvel Comics Editor-in-Chief Axel Alonso avoided a question of whether an ongoing comic series could be set within the MCU, but did note that Marvel Comics would "always be working on books set in the Cinematic Universe...the collected editions of those comics end up being some of the best sellers of the year".In February 2015, Pilgrim clarified that the canon tie-ins "are considered official MCU canon stories" set in the same universe as the films and television series, whereas the other, "inspired by" tie-ins are "more about having another fun adventure with the Avengers....where we get to show off all the characters from the film in costume and in comic form", but not affect the official MCU continuity. 
Marvel Cinematic Universe-set titles: Adaptations Several comics adapting the story of the films have been released: Iron Man: I Am Iron Man!, an adaptation of Iron Man; Marvel's Iron Man 2, an adaptation of Iron Man 2; Marvel's Thor, an adaptation of Thor; Marvel's Captain America: The First Avenger, an adaptation of Captain America: The First Avenger; Marvel's The Avengers, an adaptation of The Avengers; Marvel's Captain America: Civil War Prelude, an adaptation of both Iron Man 3 and Captain America: The Winter Soldier; Marvel's Guardians of the Galaxy Vol. 2 Prelude, an adaptation of Guardians of the Galaxy; Spider-Man: Homecoming Prelude, an adaptation of Captain America: Civil War; Marvel's Thor: Ragnarok Prelude, an adaptation of both The Incredible Hulk and Thor: The Dark World; Marvel's Ant-Man and the Wasp Prelude, an adaptation of Ant-Man; Marvel's Avengers: Endgame Prelude, an adaptation of Avengers: Infinity War; and Spider-Man: Far From Home Prelude, an adaptation of Spider-Man: Homecoming. In January 2015, Pilgrim explained the process for adapting films into tie-in comics, noting that scripts and other behind-the-scenes material are referenced in addition to the actual films. Because of this, the adaptations sometimes have new scenes, which Marvel "felt strongly enough" to include as canon, even though they were never filmed. Examples include an interaction between Jasper Sitwell and Nick Fury in Iron Man: I Am Iron Man!, the "Boys Flight Out" sequence from Iron Man 2, where Tony Stark invites James Rhodes to wear the Iron Man Mark II armor, and an additional interaction set during Captain America: Civil War between Peter Parker and his Aunt May in Spider-Man: Homecoming Prelude. Marvel Cinematic Universe-set titles: Iron Man 2: Public Identity (2010) Decades ago, Howard Stark worked with Anton Vanko to build the first arc reactor, but when Howard realized Vanko's greedy goals, he had him arrested and deported before, at the urging of Obadiah Stane, returning to the business of arms dealing that made him so successful in the past. In the present, Stark's son Tony uses the arc reactor technology to power his Iron Man armor, and after revealing this identity to the world, becomes a public hero. U.S. General Thaddeus Ross commissions Stark's business rival Justin Hammer to build a single-pilot vehicle to replace Iron Man, who has been interfering with and causing trouble for the military. When testing the new vehicle, the pilot crash-lands in unfriendly territory, under attack from the Congolese army. Tony saves the airman but, to the chagrin of Ross, refuses to return the Congolese soldiers' fire. Director Nick Fury and Agent Phil Coulson of S.H.I.E.L.D. later review Stark's actions. Public Identity was written by Joe Casey and Justin Theroux, with art by Barry Kitson, and the three issues were published on April 28, May 5, and May 12, 2010, respectively. Theroux, who wrote the screenplay for Iron Man 2, was able to show Casey "a fairly finalized script" before the two began work on the tie-in. In May 2010, Alejandro Arbona, who oversaw the creation of the comic, explained that the story of the tie-ins had to come "organically from the stuff that happened in Iron Man —what would happen to a man who'd just revealed his super hero identity to the world?—and it had to move us toward the stuff we knew was going to be important in Iron Man 2".
In September 2010, Casey noted that writing an MCU tie-in comic required working within the boundaries of the movie continuity and Marvel Studios' plan, as well as writing the characters as portrayed in the films (the "movie version of Stark has a specific attitude" that Casey tried to put in the comic, for example). The comic, set after Iron Man but before Iron Man 2 and The Incredible Hulk, was conceived to explain the post-credits scenes of those two films: in the former, Tony Stark meets Nick Fury for the first time, while in the latter he speaks with General Ross as though they already know each other. Arbona said that "when Tony Stark spoke to General Ross at the end of The Incredible Hulk, you could tell they already knew each other; they spoke about some shared history they'd had together. Well in Iron Man 2: Public Identity you see when they first meet, and what that shared history is." This comic explains that the AI J.A.R.V.I.S. was created in memory of Edwin Jarvis, introduced here as Howard Stark's butler. A younger version of the character appears in Agent Carter, portrayed by James D'Arcy. Marvel Cinematic Universe-set titles: Iron Man 2: Agents of S.H.I.E.L.D. (2010) Nick Fury plants a S.H.I.E.L.D. agent aboard a Ten Rings-controlled vessel in the Gulf of Aden in an effort to get a live account of Iron Man in action, while Tony Stark is keeping an eye on S.H.I.E.L.D. himself. Phil Coulson monitors the first field operation of a S.H.I.E.L.D. recruit who is tasked with taking down a Ten Rings terrorist cell on American soil, which Coulson later reveals was a setup, a common S.H.I.E.L.D. test for new recruits. Natasha Romanoff infiltrates Stark Industries under the alias of Natalie Rushman, using her spy skills to quickly work her way up the company until she meets with Stark personally. Agents of S.H.I.E.L.D. was written by Joe Casey and released on September 1, 2010. It consists of three eight-page stories, each spotlighting a different character: Nick Fury in "Who Made Who", with art by Tim Green; Phil Coulson in "Just Off the Farm", with art by Felix Ruiz; and Natasha Romanoff in "Proximity", with art by Matt Camp. Casey said of the short stories, "The movie studio is very aware of what they're doing, so they paid close attention...It's not exactly the Marvel Universe I grew up with, [but] it's not like the Ultimate universe either. It's a brand new thing, with its own rules and its own continuity." The end of "Proximity" depicts Romanoff, working undercover in the legal department of Stark Industries, meeting with Tony Stark in his home gym. This is also the scene in Iron Man 2 where the character is introduced, the story therefore acting as a direct prequel / parallel story to the film. Marvel Cinematic Universe-set titles: Captain America: First Vengeance (2011) In 1944, as he attacks a Hydra base in the Nazi-occupied Danish Straits, Steve Rogers recalls parts of his life that got him to this point: his mother's blessing to become a soldier; his first meeting with Bucky Barnes, who protected him from bullies as a boy; the day after Pearl Harbor was bombed, when Bucky and Steve decided to enlist; Bucky training Steve to pass the physical; and the point when Bucky passed the physical but Steve failed. Johann Schmidt watches Rogers fight, and recalls how he got to this point: meeting Adolf Hitler; capturing Arnim Zola to research creating super soldiers; and finding Abraham Erskine and blackmailing him into continuing his super soldier research under the employ of the Nazis.
Howard Stark, via radio, assists Rogers, remembering when he was recruited by Colonel Phillips to join the Strategic Scientific Reserve, and when he helped Peggy Carter rescue Erskine from Schmidt. Dum Dum Dugan and the Howling Commandos arrive to assist Rogers, and they remember their formation—thrust together in a Hydra work camp, forming close friendships after a failed escape attempt. After they defeat the Hydra soldiers, Rogers destroys Schmidt's main weapon at the base, an Asgardian artifact. First Vengeance was written by Fred Van Lente, with art for the first half of the comic by Neil Edwards, and by Luke Ross for the rest. The first of eight digital issues was released on February 6, 2011, with the other seven subsequently released on February 16, March 2, March 23, June 8, July 6, July 13, and July 20, respectively. The comic was also published in four issues on May 4, May 18, June 15, and June 29, 2011, respectively. On what the comic covers, Captain America: The First Avenger co-producer Stephen Broussard explained that there were "lots of little side stories" that they found fascinating but didn't fit into the story of the film, so the comic allows those to be told. These side stories include some backstory to the film, some action running parallel to the film, and some hints at "things to come", and are "all sort of jumbled up". Van Lente read the film's script, and had his own comic scripts looked over by Quesada and Broussard, to keep the comic in line with the film. Some of Van Lente's initial ideas, conceived when he felt he could do anything because of the comic medium, had to be changed because they did not fit the realistic world of the MCU, a process Van Lente described as "making comics on a budget". Van Lente stated, "What's neat about the MCU, just like in the comics' universe, is the interconnections between various movies, particularly Iron Man 2 and Thor. You'll start to see those coming out in First Vengeance." On the differences between the MCU and original versions of the characters, Van Lente noted that when Joe Simon and Jack Kirby originally created Captain America, they "didn't have 20/20 hindsight to see how things would fit together with the Marvel characters to follow. With the MCU, we're able to make those connections and heighten all of the characters' weight." Marvel's The Avengers Prelude: Fury's Big Week (2012) Fury's Big Week was written by Christopher Yost and Eric Pearson, and was released digitally in eight issues, on February 5 (tying into the wider The Avengers marketing campaign, which released a new trailer on that day), February 14, February 21, February 28, March 5, March 12, March 19, and March 26, 2012, respectively. The comic was published in hard copy as four issues on March 7, March 21, April 4, and April 18, 2012, respectively. The comic retells the events of The Incredible Hulk, Iron Man 2, Thor, and Captain America: The First Avenger from the point-of-view of S.H.I.E.L.D., with extra scenes added to weave them all together. Marvel Cinematic Universe-set titles: Marvel's The Avengers: Black Widow Strikes (2012) Natasha Romanoff is taken by surprise in Russia when her target is killed by a "fan" of hers named Sofia. Breaking contact with her S.H.I.E.L.D. superiors, Romanoff begins a competition with Sofia for the mantle of the "Black Widow", discovering that Sofia's employer is selling the parts for a new missile to the Ten Rings.
Romanoff tracks Sofia to a missile launch site targeting North Korea, where she stalls long enough for S.H.I.E.L.D. to intervene. Sofia is killed in the ensuing chaos. Written by Fred Van Lente, with art by Neil Edwards, the comic (which is set in Romanoff's native Russia) first appeared in copies of the Maxim Russia magazine, before being released as a traditional, three-issue comic series on May 2, May 16, and June 6, 2012, respectively. The decision was made to give Romanoff her own comic ahead of The Avengers because Marvel felt that she "remains the most enigmatic of the Avengers" despite appearing in previous films. Van Lente explained the relationship between Black Widow Strikes and the films by saying, "This is on the same scale as the Marvel movies. We're working closely with Marvel Studios to have it integrate seamlessly with the Marvel Cinematic Universe. Once this is over, they're welcome to use any of my ideas here for a movie." Van Lente took inspiration "from Joss Whedon's script for Marvel's the Avengers, which I was lucky enough to read. He does a good job making her inclusion on the team perfectly believable." Black Widow Strikes is set between Iron Man 2 and The Avengers, and deals with "some loose ends from Iron Man 2, namely some bootleg Stark technology that Justin Hammer made". The idea of the "Black Widow" being more of a mantle than the codename of a single agent is further explored in Agent Carter, where an early Black Widow training program is introduced, and in Avengers: Age of Ultron, where some of Romanoff's training is shown through flashbacks. Marvel Cinematic Universe-set titles: Marvel's Iron Man 3 Prelude (2013) Tony Stark becomes occupied as he begins construction of Stark Tower in New York, so James Rhodes picks up where Iron Man left off in the fight against the Ten Rings terrorist organization. After 10 months of skirmishes across the world, Rhodey is ambushed in Hong Kong by Ten Rings agents with Hammer Technology, including a nuclear-powered tank. Rhodes manages to get the tank out of the city before the insurgents detonate its power source, and returns to America in time to find the aftermath of the Battle of New York. Later, Tony reveals to Rhodes his plans for an Iron Legion, and a Ten Rings operative reports to his master, The Mandarin, informing him that they have full scans of the Stark Technology in Rhodey's suit. Iron Man 3 Prelude was written by Christos Gage, with art by Steve Kurth, with the first issue released on January 2, 2013, and the second on February 6. Marvel Cinematic Universe-set titles: Marvel's Thor: The Dark World Prelude (2013) In the year following the destruction of the Bifrost (the Asgardian means of transportation throughout the cosmos), both the nine realms and nearby planets have fallen into chaos, while astrophysicist Jane Foster has failed in her attempts to re-open a wormhole in New Mexico, an action requiring the Bifrost to be functional. When the presumed dead Loki attacks Earth and steals the Tesseract, Odin uses the secret, dangerous power of Dark Energy to send Thor to intervene. Following the Battle of New York, Thor returns to Asgard with Loki and the Tesseract.
Loki is imprisoned in the dungeons to serve a life sentence, while Thor uses the Tesseract's power to repair the Bifrost and allow Asgard to bring order back to the nine realms and beyond. Thor: The Dark World Prelude was written by Craig Kyle and Christopher Yost, with the first issue released on June 5, 2013, with art by Scot Eaton, and the second on July 10 with art by Ron Lim. Marvel Cinematic Universe-set titles: Marvel's Captain America: The Winter Soldier Infinite Comic (2014) Following the Battle of New York, Steve Rogers is working with Natasha Romanoff and Brock Rumlow for S.H.I.E.L.D. When a terrorist group steals the Zodiac virus from S.H.I.E.L.D., the team track them to Willis Tower in Chicago, where they take them out and recover the virus. Peter David wrote Captain America: The Winter Soldier Infinite Comic, with art by Rock-He Kim. The comic was published on January 28, 2014, and sees David set up key themes for Captain America: The Winter Soldier: "Black Widow and Rumlow...They're okay with doing what they're told. Cap, however, is way more suspicious and wants a clearer idea of what's going on, and is annoyed that S.H.I.E.L.D. isn't big on being forthcoming." On Cap's relationships with his new partners Black Widow and Brock Rumlow, David said: "I think he sees her as a valued ally, but [he] tends to be suspicious of the outfit that she works for. He trusts her as someone to have his back in a fight, but I think also believes that if S.H.I.E.L.D. told her to put a knife in his back, she would do so without hesitation, and that can be problematic. Rumlow, meantime, is an eager partner, but Cap doesn't trust him at all. First, there's the suspicion aspect. And second, I think Cap is still gun-shy because the last time he had a partner, Bucky wound up dying—or at least so he believes—and he's not anxious for history to repeat itself." The Zodiac virus featured heavily in the comic was first introduced to the MCU in the Marvel One-Shot Agent Carter, where it is recovered by the titular character for the SSR, the precursor to S.H.I.E.L.D. Marvel Cinematic Universe-set titles: Marvel's Guardians of the Galaxy Infinite Comic – Dangerous Prey (2014) With one Infinity Stone, the Aether, in his possession, the Collector actively searches for the other five, and when he discovers the Orb to be on a desolate planet, he hires the assassin Gamora to retrieve it. Dan Abnett and Andy Lanning, who revived the Guardians of the Galaxy series in 2008, agreed to write the tie-in comic preludes to the film Guardians of the Galaxy as a favor to director James Gunn. The Infinite Comic was released on April 1, 2014, and features art by Andrea De Vito. Marvel Cinematic Universe-set titles: Marvel's Guardians of the Galaxy Prelude (2014) Issue 1 sees Nebula, while searching for the Orb, remembering growing up with Gamora and their father, Thanos, as well as her recent experiences with Ronan the Accuser. Issue 2 follows Rocket and Groot as bounty hunters in the lead-up to Guardians of the Galaxy. This film prelude was also written by Dan Abnett and Andy Lanning, with Lanning writing the plots based on brainstormed ideas, and Abnett writing the final scripts. With art by Wellinton Alves, the issues were published on April 2 and May 28, 2014, respectively. The stories were intended to flesh out the history of the film characters based on the details in Gunn's script, while Abnett also attempted to bridge the feeling of the film with that of the original comics.
Speaking on working with familiar characters, but now the film versions of them, Abnett said, "[It's] Fun, but a little odd. The characters of the Prelude comics had to fit very cleanly to the movie versions so there wasn't quite the same opportunity for madcap invention...I had to make sure the tone fitted precisely." Lanning said, "There's a definite distinction between the Marvel comic universe and the Marvel Cinematic universe...[but] these characters are not so distant to their comic counterparts as to be unrecognizable, they are more like an alternate version, similar to what the Ultimate Universe did." Marvel's Avengers: Age of Ultron Prelude – This Scepter'd Isle (2015) Following the Battle of New York, a disgruntled S.H.I.E.L.D. scientist studying Loki's scepter and its connection to the Tesseract is recruited by Hydra to steal the scepter. With it in the possession of high-ranking Hydra member Baron Wolfgang von Strucker, Hydra begins experimenting on the scepter, eventually using it to unlock special abilities within two volunteers, the twins Pietro and Wanda Maximoff. Released on February 3, 2015, this comic is written by Pilgrim and illustrated by Wellinton Alves. Marvel Cinematic Universe-set titles: Marvel's Ant-Man Prelude (2015) During the Cold War, Howard Stark demands the Pym Particle suit, created to shrink its wearer, from S.H.I.E.L.D. scientist Hank Pym so that it can be used to take down radicals in Soviet-occupied Germany who are reverse-engineering Hydra technology; Pym insists on carrying out the mission himself to keep his invention out of the wrong hands. Crossing the Berlin Wall and infiltrating the enemy compound, Pym discovers the plans for Hydra memory suppression technology, and makes his way to a secret lab where the reverse-engineered machine is being tested. Taking out the radicals using the suit's size-changing properties, Pym destroys the technology so that no one can use it. Pym then decides to continue taking missions for S.H.I.E.L.D. Ant-Man Prelude was written by Pilgrim and illustrated by Miguel Sepulveda, with the two issues released on February 4 and March 4, 2015, respectively. The comic was conceived to show Pym using the suit and to explain the quandary that comes with having his technology, to lead up to the film. Though the comic's exact year is never specified, Pilgrim explained that it is set in the mid-1980s, when Mikhail Gorbachev served as Soviet leader. Pilgrim described the comic as a "spy action story" with some "thriller elements". Pilgrim wanted the comic to feature the Berlin Wall prominently following a childhood experience with an exhibit recreating part of it. Hydra's interest in and experimentation with memory suppression technology was previously explored in the film Captain America: The Winter Soldier and the television series Agents of S.H.I.E.L.D. and Agent Carter. Marvel Cinematic Universe-set titles: Marvel's Ant-Man – Scott Lang: Small Time (2015) After discovering that his employer is illegally overcharging its customers, and failing to expose the crime thanks to his boss's influence in the media, Scott Lang decides to steal the company's money and return it to the customers. During the robbery, Lang gets carried away and attempts to steal his boss's personal property, including his car. After accidentally crashing the car, Lang is caught and eventually imprisoned. An infinite comic written by Pilgrim, with art by Wellinton Alves and Daniel Govar, Scott Lang: Small Time was released on March 3, 2015.
Marvel Cinematic Universe-set titles: Marvel's Jessica Jones (2015) After being beaten by Daredevil, Turk Barrett is visited in the hospital by private investigator Jessica Jones, who has been hired by the mother of one of Barrett's children to get money from him to support their child. Prior to the release of Marvel's Jessica Jones television series, a Jessica Jones one-shot was released digitally on October 7, 2015, written by Brian Michael Bendis with art by Michael Gaydos, the original creators of the character. David Mack also returns from the original comic Alias as the cover artist. Print versions of the comic were exclusively handed out at New York Comic Con 2015. Bendis explained that the one-shot "is in the Marvel TV universe and it celebrates the new show". The comic explores "the connective tissue that will build between the series", by having Daredevil appear in a Jessica Jones story. Marvel Cinematic Universe-set titles: Marvel's Captain America: Civil War Prelude Infinite Comic – Crossroads (2016) Following the defeat of Hydra at the Triskelion, Bucky Barnes / Winter Soldier hunts down and kills his remaining Hydra handlers before going on the run. Months later, Brock Rumlow awakens from a coma and learns of Hydra's defeat and the death of his leader, Alexander Pierce, deciding to head out on his own. After the Battle of Sokovia years later, Captain America is balancing his personal search for Barnes with his duties as leader of the new Avengers. The former leads him and the team to Nigeria, where instead of Barnes they find Rumlow, now going by 'Crossbones'. Written by Pilgrim, "Crossroads" was released on February 10, 2016. The comic is told from the perspective of three different characters, Bucky Barnes / Winter Soldier, Brock Rumlow / Crossbones, and Steve Rogers / Captain America, with art for each perspective provided by Lee Ferguson, Goran Sudžuka, and Guillermo Mogorron, respectively. Marvel Cinematic Universe-set titles: Marvel's Doctor Strange Prelude (2016) The Masters of the Mystic Arts – Kaecilius, Wong, Tina Minoru and Daniel Drumm – pursue a woman as she attacks several landmarks around London with a stolen dark sceptre, a powerful artifact. The arrogant Kaecilius attempts to subdue the woman himself, but it takes the combined power of the masters to overcome the sceptre's magic. After reclaiming it, the Masters store the relic in their Sanctum Sanctorum with others like it. Another Master, Mordo, and their teacher, the Ancient One, face a Chinese group who have discovered the powerful Arrow of Apollon and plan to use it to gain power over innocent civilians. They also successfully retrieve the artifact, and it is placed in the Sanctum as well. Written by Pilgrim, with art by Jorge Fornés, the two issues of the Doctor Strange prelude were released on July 6 and August 24, 2016, respectively. Marvel Cinematic Universe-set titles: Marvel's Doctor Strange Prelude Infinite Comic – The Zealot (2016) This Prelude Infinite Comic, centered on Kaecilius, was released on September 7, 2016.
Marvel Cinematic Universe-set titles: Marvel's Black Panther Prelude (2017) This comic tells the story of how T'Challa first became the Black Panther, protector of the isolationist country Wakanda, a story not yet told in the MCU. The comic is set nearly a decade before the film Black Panther, around the end of the first Iron Man film, and reveals that T'Challa has been acting as the Black Panther since then, meaning he had been that hero for almost a decade before his film introduction in Captain America: Civil War. Marvel Cinematic Universe-set titles: Marvel's Avengers: Infinity War Prelude (2018) The first issue details the whereabouts of all the Avengers after the events of Civil War and Thor: Ragnarok. The second issue gives an overview of the whereabouts of all the Infinity Stones shown in the MCU thus far, leading into Avengers: Infinity War. Marvel's Captain Marvel Prelude (2018) The one-shot comic, called "The Peacekeepers", follows Nick Fury and Maria Hill between the events of Age of Ultron and Infinity War. Marvel's Black Widow Prelude (2020) This comic retells Natasha Romanoff's history, including events from the films Iron Man 2, The Avengers, Captain America: The Winter Soldier, Avengers: Age of Ultron, and Captain America: Civil War, as told by Councilwoman Hawley and Secretary Thaddeus Ross. Eternals: The 500 Year War (2022) This infinite comic is a series of flashbacks, connected to Eternals (2021), beginning in 1520. Recurring characters: This table includes characters who have appeared in multiple MCU tie-in comics, headlining at least one. Reception: Sales The following table lists the known retail sales figures of the collected editions. Reception: Critical response Jesse Schedeen of IGN gave Public Identity a score of 7.2 out of 10, calling it "far more successful than [previous comic tie-ins]", but criticizing the inconsistent artwork and connections to the wider universe that he found to be irrelevant to the comic's plot. Chad Nevett of Comic Book Resources gave a less positive review, assigning 2 out of 5 stars to the comic, and stating that "The plot has potential and the characters' voices are spot-on with the movie versions", however, "the lack of likenesses to the actors is a little off-putting, the art also suffers from inconsistency... It's a solid comic that's overwhelmed by small problems." Nevett, again for Comic Book Resources, gave Agents of S.H.I.E.L.D. a much more positive review, giving it 4 out of 5 stars, and particularly praising the artwork, while also appreciating the consistent voice throughout the three stories from writer Casey. At IGN, Schedeen scored First Vengeance 7 out of 10, calling the issues "enjoyable methods of passing the time until [Captain America: The First Avenger releases]".
David Hawkins, writing for What Culture, gave the series 3 stars out of 5, calling it "a marketing ploy that has moments of tremendous merit", and singled out Luke Ross' artwork for the third issue as particularly praiseworthy. Fury's Big Week received praise from CJ Wheeler of Den of Geek, who thought the tone of the series was "a perfect fit for the Marvel Cinematic Universe so far and will get you stoked for what's to come." Matthew Peterson of Major Spoilers gave Black Widow Strikes a score of 2.5 stars out of 5, and the verdict "An average tale with no major missteps" – he thought that "Writer Van Lente did all that he could to make this feel cinematic, driven and squarely in the Marvel movie universe", but was somewhat disappointed in the final result, and criticized the shifting art styles, before concluding that "In many ways, it's the ultimate example of a movie tie-in, designed to please fans of both the graphic and live-action Widow without irritation or unwanted questions." James Hunt of Comic Book Resources found the Iron Man 3 Prelude to be "dull and disappointing", calling the building of stakes around untouchable characters, like War Machine, a mistake, and criticizing the "rushed or phoned in" artwork. Noel Thorne, writing for What Culture, also found the prelude disappointing, calling it a "cheaply put-together cash-grab" and "not even remotely entertaining", and lamented the lack of actual connections with Iron Man 3. For IGN, Schedeen gave Issue #1 of the Thor: The Dark World Prelude a score of 6 out of 10, stating that it "fails to offer a cohesive story or enough compelling material to justify a purchase", and Issue #2 a score of 5 out of 10, calling it "a dull, pointless lead-in to the next Thor movie". He criticized the focus of the series for serving as an explanation for "nitpicky questions" rather than being an actual lead-in to the story of the film, and he found the artwork to be "awkward", "flat and dull". "Jay" at Comic Frontline had similar feelings, scoring the comic 3 stars out of 5, stating that, though he liked the issues and "thought they were solid", he felt they were more like "deleted scenes from the Avengers than a Thor prequel", and they "could have been trimmed down into one issue", rather than being "stretched out to fit two issues just to drain fan's pockets". He was more positive about the artwork, however, calling the work of Scot Eaton and Ron Lim "beautiful", and praising the work of inkers Andrew Hennessy and Rick Magyar for making the different artwork appear consistent across issues. Ian Gowan of ComicSpectrum gave the Captain America: The Winter Soldier Infinite Comic a rating of 3.5 out of 5, calling it "fun but also [a] little light on story" and stating that it increased his excitement for the movie. He found the artwork "serviceable", but stated that the "colored pencil artwork doesn't work...as well in digital comic as it does in a print comic." Doug Zawisza of Comic Book Resources scored the Guardians of the Galaxy Infinite Comic 3.5 stars out of 5, stating that "While it doesn't openly spoil anything from the upcoming movie, it does flaunt the cards being held in the movie's hand quite a bit." He found the artwork to be "better than pedestrian", but felt the Infinite Comic format was not used to its full potential.
A columnist for Cosmic Book News gave an almost entirely positive review of the comic, criticizing only its short length, and highlighting the return of writers Dan Abnett and Andy Lanning to the Guardians of the Galaxy characters as a particularly praiseworthy aspect of the issues. Zawisza, again for Comic Book Resources, had similar feelings about the Guardians of the Galaxy Prelude, giving it 3.5 stars out of 5 as well, believing it was in "the same spirit as the volume of "Guardians of the Galaxy" that inspired the film's cast of characters", which led him to say that he wanted more, as long as Abnett and Lanning returned as writers. He had high praise for the artwork as well, calling it "very well drawn, showcasing Wellinton Alves' ability to craft worlds and create distinguishable characters". Schedeen scored Ant-Man Prelude #1 a 6.8 out of 10, indicating an 'okay' comic, stating that "once you ignore the movie connection and just treat this as an Ant-man comic, it doesn't have a great deal to offer", noting that Sepulveda's artwork is inconsistent, and criticizing the lack of similarity between the characters and their live-action counterparts. Zawisza scored the issue 3 stars out of 5, saying that Pilgrim "keeps everyone safe", Sepulveda's artwork is "serviceable", and that "this isn't ground-breaking comics, but it doesn't have to be. It's DVD-style bonus material for the most dedicated fans of the Marvel Cinematic Universe." For the second issue, Schedeen reiterated his previous sentiments. Calling it "unfulfilling", he scored the issue a 5.8 out of 10. Marvel Cinematic Universe-inspired titles: These comics are simply inspired by the films, and are not considered part of the MCU continuity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paver (vehicle)** Paver (vehicle): A paver (road paver finisher, asphalt finisher, road paving machine) is a piece of construction equipment used to lay asphalt concrete or Portland cement concrete on roads, bridges, parking lots and other such places. It lays the material flat and provides minor compaction. This is typically followed by final compaction by a road roller. History: The asphalt paver was developed by the Barber Greene Company, which originally manufactured material handling systems. In 1929 the Chicago Testing Laboratory approached them to use their material loaders to construct asphalt roads. This did not result in a partnership, but Barber Greene did develop a machine, based on the concrete pavers of the day, that mixed and placed the material in a single process. This setup did not prove as effective as desired, so the processes were separated, and the modern paver was on its way. In 1933 the independent floating screed was invented, and when combined with the tamper bar it provided uniform material density and thickness. Barber Greene filed a patent for a "Machine for and process of laying roads" on 10 April 1936 and received U.S. Patent 2,138,828 on 6 December 1938. The main features of the Barber Greene paver have been incorporated into most pavers since, although improvements have been made to the control of the machine. Operation: The asphalt is added from a dump truck or a material transfer unit into the paver's hopper. The conveyor then carries the asphalt from the hopper to the auger. The auger places a stockpile of material in front of the screed. The screed takes the stockpile of material, spreads it over the width of the road, and provides initial compaction. The paver should provide a smooth, uniform surface behind the screed. In order to provide a smooth surface, a free-floating screed is used. It is towed at the end of a long arm, which reduces the effect of the base's topography on the final surface. The height of the screed is controlled by a number of factors, including the attack angle of the screed, the weight and vibration of the screed, the material head, and the towing force. To conform to the elevation changes for the final grade of the road, modern pavers use automatic screed controls, which generally control the screed's angle of attack from information gathered by a grade sensor. Additional controls are used to correct the slope, crown or superelevation of the finished pavement. In order to provide a smooth surface, the paver should proceed at a constant speed and have a consistent stockpile of material in front of the screed. An increase in the material stockpile or in paver speed will cause the screed to rise, resulting in more asphalt being placed, therefore a thicker mat of asphalt and an uneven final surface. Alternatively, a decrease in material or a drop in speed will cause the screed to fall and the mat to be thinner. The need for constant speed and material supply is one of the reasons for using a material transfer unit in combination with a paver. A material transfer unit allows for constant material feed to the paver without contact, providing a better end surface. When a dump truck is used to fill the hopper of the paver, it can make contact with the paver or cause it to change speed and affect the screed height. Portland cement concrete paving: Large freeways are often paved with Portland cement concrete, and this is done using a slipform paver.
Trucks dump loads of ready-mix concrete in heaps in front of the machine, and the slipform paver then spreads the concrete out and levels it off using a screed.
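The relationship between material supply, paving speed and mat thickness described in the Operation section can be made concrete with a steady-state mass balance: averaged over time, the material delivered to the screed must equal the material laid down, so thickness is fixed by feed rate, speed, mat width and mix density, while the floating-screed behaviour described above governs the transient deviations around that average. A minimal sketch in Python (the function name and the numbers are illustrative, not from the source):

```python
def mat_thickness_m(feed_tonnes_per_h: float, speed_m_per_min: float,
                    width_m: float, density_t_per_m3: float) -> float:
    """Steady-state mat thickness from a mass balance:
    feed rate = speed * width * thickness * density (tonnes per hour)."""
    speed_m_per_h = speed_m_per_min * 60.0
    return feed_tonnes_per_h / (speed_m_per_h * width_m * density_t_per_m3)

# Illustrative values: 300 t/h of mix, a 3.5 m wide screed, mix density ~2.3 t/m3.
for speed in (4.0, 6.0, 8.0):  # paving speed, m/min
    thickness = mat_thickness_m(300.0, speed, 3.5, 2.3)
    print(f"{speed:4.1f} m/min -> {thickness * 1000:6.1f} mm")
# Doubling the speed at the same feed halves the average mat thickness, which is
# why paver speed and material supply must be kept consistent with each other.
```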
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Great disdyakis dodecahedron** Great disdyakis dodecahedron: In geometry, the great disdyakis dodecahedron is a nonconvex isohedral polyhedron. It is the dual of the uniform great truncated cuboctahedron. It has 48 triangular faces. Proportions: The triangles have one angle of arccos((6 + √2)/8) ≈ 22.06219115754°, one of arccos(−(2 + √2)/12) ≈ 106.53002715022°, and one of arccos((6√2 − 1)/12) ≈ 51.40778169224°. The dihedral angle equals arccos((12√2 − 71)/97) ≈ 123.84889157944°. Part of each triangle lies within the solid, hence is invisible in solid models. Related polyhedra: The great disdyakis dodecahedron is topologically identical to the convex Catalan solid, the disdyakis dodecahedron, which is dual to the truncated cuboctahedron.
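The closed forms above can be checked numerically; the short Python sketch below recomputes the three face angles (which must sum to 180°) and the dihedral angle:

```python
import math

r2 = math.sqrt(2)
angles = [
    math.degrees(math.acos((6 + r2) / 8)),       # ~22.06219115754
    math.degrees(math.acos(-(2 + r2) / 12)),     # ~106.53002715022
    math.degrees(math.acos((6 * r2 - 1) / 12)),  # ~51.40778169224
]
dihedral = math.degrees(math.acos((12 * r2 - 71) / 97))  # ~123.84889157944

print("face angles:", [round(a, 11) for a in angles])
print("sum of face angles:", round(sum(angles), 9))  # 180.0
print("dihedral angle:", round(dihedral, 11))
```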
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Valnemulin** Valnemulin: Valnemulin (trade name Econor or Biotilina) is a pleuromutilin antibiotic used to treat swine dysentery, ileitis, colitis and pneumonia. It is also used for the prevention of intestinal infections of swine. Valnemulin has been observed to induce a rapid reduction of clinical symptoms of Mycoplasma bovis infection, and eliminate M. bovis from the lungs of calves.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MicroRNA 548f-4** MicroRNA 548f-4: MicroRNA 548f-4 is a microRNA, a short non-coding RNA, that in humans is encoded by the MIR548F4 gene. Function: microRNAs (miRNAs) are short (20–24 nt) non-coding RNAs that are involved in post-transcriptional regulation of gene expression in multicellular organisms by affecting both the stability and translation of mRNAs. miRNAs are transcribed by RNA polymerase II as part of capped and polyadenylated primary transcripts (pri-miRNAs) that can be either protein-coding or non-coding. The primary transcript is cleaved by the Drosha ribonuclease III enzyme to produce an approximately 70-nt stem-loop precursor miRNA (pre-miRNA), which is further cleaved by the cytoplasmic Dicer ribonuclease to generate the mature miRNA and antisense miRNA star (miRNA*) products. The mature miRNA is incorporated into a RNA-induced silencing complex (RISC), which recognizes target mRNAs through imperfect base pairing with the miRNA and most commonly results in translational inhibition or destabilization of the target mRNA. The RefSeq represents the predicted microRNA stem-loop. [provided by RefSeq, Sep 2009].
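Target recognition as described above is dominated by the miRNA "seed" region, conventionally nucleotides 2–8 from the 5' end; a common first-pass screen simply searches a 3' UTR for the reverse complement of the seed. A minimal sketch (the sequences below are invented for illustration and are not the annotated hsa-mir-548f-4 sequence):

```python
def seed_match_sites(mirna: str, utr: str) -> list:
    """Return 0-based positions in `utr` (written 5'->3') that match the reverse
    complement of the miRNA seed (nucleotides 2-8, i.e. indices 1..7)."""
    complement = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]
    site = "".join(complement[nt] for nt in reversed(seed))  # what a target site must contain
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mirna = "UAAAACUGCAGUUACUUUUGC"     # illustrative 21-nt miRNA (hypothetical)
utr = "GGGCAGUUUUAACCAGUGCAGUUUUA"  # illustrative 3' UTR fragment (hypothetical)
print(seed_match_sites(mirna, utr))  # [3, 18]
```

Real target-prediction tools additionally weigh match type, flanking context and conservation; perfect seed complementarity alone is neither necessary nor sufficient for repression.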
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hidden states of matter** Hidden states of matter: A hidden state of matter is a state of matter which cannot be reached under ergodic conditions, and is therefore distinct from known thermodynamic phases of the material. Examples exist in condensed matter systems, and are typically reached by the non-ergodic conditions created through laser photoexcitation. Short-lived hidden states of matter have also been reported in crystals using lasers. Recently a persistent hidden state was discovered in a crystal of tantalum(IV) sulfide (TaS2), where the state is stable at low temperatures. A hidden state of matter is not to be confused with hidden order, which exists in equilibrium but is not immediately apparent or easily observed. Hidden states of matter: When ultrashort laser pulses impinge on solid-state matter, the system may be knocked out of equilibrium so that the individual subsystems are out of equilibrium not only with each other but also internally. Under such conditions, new states of matter may be created which are not otherwise reachable under equilibrium, ergodic system evolution. Such states are usually unstable and decay very rapidly, typically in nanoseconds or less. The difficulty is in distinguishing a genuine hidden state from one which is simply out of thermal equilibrium. Probably the first instance of a photoinduced state is described for the organic molecular compound TTF-CA, which turns from neutral to ionic species as a result of excitation by laser pulses. However, a similar transformation is also possible by the application of pressure, so strictly speaking the photoinduced transition is not to a hidden state under the definition given in the introductory paragraph. Further examples are given in the literature. Hidden states of matter: Photoexcitation has been shown to produce persistent states in vanadates and manganite materials, leading to filamentary paths of a modified charge ordered phase which is sustained by a passing current. Transient superconductivity was also reported in cuprates. A photoexcited transition to an H state: A hypothetical schematic for the transition to an H state by photoexcitation runs as follows. An absorbed photon excites an electron from the ground state G to an excited state E. State E rapidly relaxes via Franck-Condon relaxation to an intermediate locally reordered state I. Through interactions with others of its kind, this state collectively orders to form a macroscopically ordered metastable state H, further lowering its energy as a result. The new state has a broken symmetry with respect to the G or E state, and may also involve further relaxation compared to the I state. The barrier EB prevents state H from reverting to the ground state G. If the barrier is sufficiently large compared to thermal energy kBT, where kB is the Boltzmann constant, the H state can be stable indefinitely.
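The stability condition in the last sentence can be put on a rough quantitative footing with a standard Arrhenius estimate, in which the lifetime of the metastable H state grows exponentially with the ratio of the barrier to the thermal energy. A minimal sketch (the barrier height and attempt time are hypothetical placeholders, not measured values for any specific material):

```python
import math

K_B_EV_PER_K = 8.617333262e-5  # Boltzmann constant in eV/K

def escape_time_s(barrier_ev: float, temp_k: float,
                  attempt_time_s: float = 1e-12) -> float:
    """Arrhenius lifetime of a metastable state behind a barrier E_B:
    tau ~ tau_0 * exp(E_B / (k_B * T)), with tau_0 an assumed attempt time."""
    return attempt_time_s * math.exp(barrier_ev / (K_B_EV_PER_K * temp_k))

# Hypothetical 0.3 eV barrier: effectively permanent at cryogenic temperatures,
# gone in a fraction of a microsecond at room temperature.
for temp in (70.0, 150.0, 300.0):
    print(f"T = {temp:5.1f} K -> tau ~ {escape_time_s(0.3, temp):.1e} s")
```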
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Forward premium anomaly** Forward premium anomaly: The forward premium anomaly in currency markets (also referred to as the forward premium puzzle or the Fama puzzle) refers to the well-documented empirical finding that the domestic currency appreciates when domestic nominal interest rates exceed foreign interest rates. This is perceived as puzzling in the context of the hypothesis that the expected future change in the exchange rate between two countries is equal to the interest-rate differential between these two countries; this hypothesis suggests that if all currencies are equally risky, investors would demand higher interest rates on currencies expected to fall in value. See Forward exchange rate § Unbiasedness hypothesis. Thus, appreciation of the domestic currency when domestic interest rates are greater than foreign interest rates is called an anomaly.
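In the notation standard in this literature (not taken from the text above), with s_t the log spot exchange rate, f_t the log one-period forward rate, and i_t, i*_t the domestic and foreign interest rates, the hypothesis and the test are usually written as follows:

```latex
% Uncovered interest parity (UIP): expected depreciation equals the interest differential
E_t[s_{t+1} - s_t] = i_t - i_t^{*}

% Fama regression used to test UIP (with f_t - s_t = i_t - i_t^{*} under covered interest parity)
s_{t+1} - s_t = \alpha + \beta\,(f_t - s_t) + \varepsilon_{t+1}
```

UIP implies α = 0 and β = 1; empirical estimates of β are typically well below one and often negative, which is the regression form of the anomaly: the high-interest-rate currency tends to appreciate rather than depreciate.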
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dihydrodeoxycorticosterone** Dihydrodeoxycorticosterone: 5α-Dihydrodeoxycorticosterone (abbreviated as DHDOC), also known as 21-hydroxy-5α-pregnan-20-one, is an endogenous progestogen and neurosteroid. It is synthesized from the adrenal hormone deoxycorticosterone (DOC) by the enzyme 5α-reductase type I. DHDOC is an agonist of the progesterone receptor, as well as a positive allosteric modulator of the GABAA receptor, and is known to have anticonvulsant effects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metatarsophalangeal joints** Metatarsophalangeal joints: The metatarsophalangeal joints (MTP joints), also informally known as toe knuckles, are the joints between the metatarsal bones of the foot and the proximal bones (proximal phalanges) of the toes. They are condyloid joints, meaning that an elliptical or rounded surface (of the metatarsal bones) comes close to a shallow cavity (of the proximal phalanges). The ligaments are the plantar and two collateral. Movements: The movements permitted in the metatarsophalangeal joints are flexion, extension, abduction, adduction and circumduction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kraft process** Kraft process: The kraft process (also known as kraft pulping or sulfate process) is a process for conversion of wood into wood pulp, which consists of almost pure cellulose fibres, the main component of paper. The kraft process involves treatment of wood chips with a hot mixture of water, sodium hydroxide (NaOH), and sodium sulfide (Na2S), known as white liquor, that breaks the bonds that link lignin, hemicellulose, and cellulose. The technology entails several steps, both mechanical and chemical. It is the dominant method for producing paper. The process has been controversial because kraft plants can release odorous products and, in some situations, produce substantial liquid wastes. The process name is derived from the German word Kraft, meaning "strength" in this context, due to the strength of the kraft paper produced using this process. History: A precursor of the kraft process was used during the Napoleonic Wars in England. The kraft process was invented by Carl F. Dahl in 1879 in Danzig, Prussia, Germany. U.S. Patent 296,935 was issued in 1884, and a pulp mill using this technology began in Sweden in 1890. The invention of the recovery boiler by G. H. Tomlinson in the early 1930s was a milestone in the advancement of the kraft process. It enabled the recovery and reuse of the inorganic pulping chemicals such that a kraft mill is a nearly closed-cycle process with respect to inorganic chemicals, apart from those used in the bleaching process. For this reason, in the 1940s, the kraft process superseded the sulfite process as the dominant method for producing wood pulp. The process: Impregnation Common wood chips used in pulp production are 12–25 millimetres (0.47–0.98 in) long and 2–10 millimetres (0.079–0.394 in) thick. The chips normally first enter a presteaming stage, where they are wetted and preheated with steam. Cavities inside fresh wood chips are partly filled with liquid and partly with air. The steam treatment causes the air to expand and about 25% of the air to be expelled from the chips. The next step is to saturate the chips with black and white liquor. Air remaining in chips at the beginning of liquor impregnation is trapped within the chips. The impregnation can be done before or after the chips enter the digester and is normally done below 100 °C (212 °F). The cooking liquors consist of a mixture of white liquor, water in chips, condensed steam and weak black liquor. In the impregnation, cooking liquor penetrates into the capillary structure of the chips and low-temperature chemical reactions with the wood begin. A good impregnation is important to get a homogeneous cook and low rejects. About 40–60% of all alkali consumption, in the continuous process, occurs in the impregnation zone. The process: Cooking The wood chips are then cooked in pressurized digesters. Some digesters operate in a batch manner and some in a continuous process. Digesters producing 1,000 tonnes or more of pulp per day are common, with the largest producing more than 3,500 tonnes per day. Typically, delignification requires around two hours at 170 to 176 °C (338 to 349 °F). Under digesting conditions, lignin and hemicellulose degrade to give fragments that are soluble in the strongly basic liquid. The solid pulp (about 50% by weight of the dry wood chips) is collected and washed. At this point the pulp is known as brown stock because of its color.
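In practice, the time–temperature tradeoff in cooking is commonly summarized by the H-factor, which integrates a relative delignification rate (normalized to about 1 at 100 °C) over the cook, so that a hotter, shorter cook and a cooler, longer one can be compared on a single scale. A minimal sketch using the widely quoted Vroom rate expression exp(43.2 − 16115/T); the schedule values are illustrative, not from the text:

```python
import math

def relative_rate(temp_c: float) -> float:
    """Vroom's relative delignification rate, ~1 at 100 degC (T in kelvin)."""
    return math.exp(43.2 - 16115.0 / (temp_c + 273.15))

def h_factor(schedule) -> float:
    """Sum rate * time over piecewise-constant (hours, degC) segments."""
    return sum(hours * relative_rate(temp) for hours, temp in schedule)

# Illustrative cook: a heat-up ramp approximated at an average 140 degC,
# then two hours at cooking temperature, as in the text above.
cook = [(1.5, 140.0), (2.0, 170.0)]
print(f"H-factor ~ {h_factor(cook):.0f}")  # on the order of 2000 for a kraft cook
```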
The combined liquids, known as black liquor (because of its color), contain lignin fragments, carbohydrates from the breakdown of hemicellulose, sodium carbonate, sodium sulfate and other inorganic salts. The process: One of the main chemical reactions that underpin the kraft process is the scission of ether bonds by the nucleophilic sulfide (S2−) or bisulfide (HS−) ions. The process: Recovery process The excess black liquor contains about 15% solids and is concentrated in a multiple-effect evaporator. After the first step the black liquor has about 20–30% solids. At this concentration the rosin soap rises to the surface and is skimmed off. The collected soap is further processed to tall oil. Removal of the soap improves the evaporation operation of the later effects. The process: The weak black liquor is further evaporated to 65% or even 80% solids ("heavy black liquor") and burned in the recovery boiler to recover the inorganic chemicals for reuse in the pulping process. A higher solids content in the concentrated black liquor increases the energy and chemical efficiency of the recovery cycle, but also gives higher viscosity and precipitation of solids (plugging and fouling of equipment). During combustion, sodium sulfate is reduced to sodium sulfide by the organic carbon in the mixture: 1. Na2SO4 + 2 C → Na2S + 2 CO2 This reaction is similar to thermochemical sulfate reduction in geochemistry. The process: The molten salts ("smelt") from the recovery boiler are dissolved in a process water known as "weak wash". This process water, also known as "weak white liquor", is composed of all liquors used to wash lime mud and green liquor precipitates. The resulting solution of sodium carbonate and sodium sulfide is known as "green liquor". The green liquor's eponymous green colour arises from the presence of colloidal iron sulfide. This liquid is then mixed with calcium oxide, which becomes calcium hydroxide in solution, to regenerate the white liquor used in the pulping process through an equilibrium reaction (the Na2S from the green liquor is carried along but does not participate in the reaction): 2. Na2CO3 + Ca(OH)2 ←→ 2 NaOH + CaCO3 Calcium carbonate precipitates from the white liquor and is recovered and heated in a lime kiln where it is converted to calcium oxide (lime). The process: 3. CaCO3 → CaO + CO2 Calcium oxide (lime) is reacted with water to regenerate the calcium hydroxide used in Reaction 2: 4. CaO + H2O → Ca(OH)2 The combination of reactions 1 through 4 forms a closed cycle with respect to sodium, sulfur and calcium, and is the main concept of the so-called recausticizing process, where sodium carbonate is reacted to regenerate sodium hydroxide. The process: The recovery boiler also generates high-pressure steam which is fed to turbogenerators, reducing the steam pressure for the mill use and generating electricity. A modern kraft pulp mill is more than self-sufficient in its electrical generation and normally will provide a net flow of energy which can be used by an associated paper mill or sold to neighboring industries or communities through the local electrical grid. Additionally, bark and wood residues are often burned in a separate power boiler to generate steam. The process: Although recovery boilers using G.H. Tomlinson's invention have been in general use since the early 1930s, attempts have been made to find a more efficient process for the recovery of cooking chemicals.
Among these attempts, Weyerhaeuser has operated a first-generation Chemrec black liquor entrained-flow gasifier successfully at its New Bern plant in North Carolina, while a second-generation plant is run at pilot scale at Smurfit Kappa's plant in Piteå, Sweden.

The process: Blowing
The finished cooked wood chips are blown to a collection tank called a blow tank that operates at atmospheric pressure. This releases a large amount of steam and volatiles. The volatiles are condensed and collected; in the case of northern softwoods this consists mainly of raw turpentine.

Screening
Screening of the pulp after pulping is a process whereby the pulp is separated from large shives, knots, dirt and other debris. The accepted material (the accept) is the pulp; the material separated from the pulp is called reject. The screening section consists of different types of sieves (screens) and centrifugal cleaning. The sieves are normally set up in a multistage cascade operation, because considerable amounts of good fibres can go to the reject stream when trying to achieve maximum purity in the accept flow. The fibers containing shives and knots are separated from the rest of the reject and either reprocessed in a refiner or sent back to the digester. The content of knots is typically 0.5–3.0% of the digester output, while the shives content is about 0.1–1.0%.

The process: Washing
The brownstock from the blowing goes to the washing stages, where the used cooking liquors are separated from the cellulose fibers. Normally a pulp mill has 3–5 washing stages in series; washing stages are also placed after oxygen delignification and between the bleaching stages. Pulp washers use countercurrent flow between the stages, such that the pulp moves in the opposite direction to the flow of washing waters. Several processes are involved: thickening/dilution, displacement and diffusion. The dilution factor is the measure of the amount of water used in washing compared with the theoretical amount required to displace the liquor from the thickened pulp. A lower dilution factor reduces energy consumption, while a higher dilution factor normally gives cleaner pulp. Thorough washing of the pulp reduces the chemical oxygen demand (COD).

The process: Several types of washing equipment are in use: pressure diffusers, atmospheric diffusers, vacuum drum washers, drum displacers and wash presses.

Bleaching
In a modern mill, brownstock (cellulose fibers containing approximately 5% residual lignin) produced by the pulping is first washed to remove some of the dissolved organic material and then further delignified by a variety of bleaching stages. In the case of a plant designed to produce pulp for brown sack paper or linerboard for boxes and packaging, the pulp does not always need to be bleached to a high brightness. Bleaching decreases the mass of pulp produced by about 5%, decreases the strength of the fibers and adds to the cost of manufacture.

The process: Process chemicals
Process chemicals are added to improve the production process:
Impregnation aids: surfactants may be used to improve impregnation of the wood chips with the cooking liquors.
Anthraquinone is used as a digester additive. It works as a redox catalyst by oxidizing cellulose and reducing lignin; this protects the cellulose from degradation and makes the lignin more water-soluble.
An emulsion breaker can be added in the soap separation to speed up and improve the separation of soap from the used cooking liquors by flocculation.
Defoamers remove foam and speed up the production process.
Drainage of the washing equipment is improved, which gives cleaner pulp.
Dispersing agents, detackifiers and complexing agents keep the system cleaner and reduce the need for maintenance stops.
Fixation agents fix finely dispersed potential deposits to the fibers and thereby transport them out of the process.

Comparison with other pulping processes: Pulp produced by the kraft process is stronger than that made by other pulping processes; maintaining a high effective sulfur ratio (sulfidity) is an important determiner of the strength of the paper. Acidic sulfite processes degrade cellulose more than the kraft process, which leads to weaker fibers. Kraft pulping removes most of the lignin originally present in the wood, whereas mechanical pulping processes leave most of the lignin in the fibers. The hydrophobic nature of lignin interferes with the formation of the hydrogen bonds between cellulose (and hemicellulose) in the fibers needed for the strength of paper (strength here refers to tensile strength and resistance to tearing).

Comparison with other pulping processes: Kraft pulp is darker than other wood pulps, but it can be bleached to make very white pulp. Fully bleached kraft pulp is used to make high-quality paper where strength, whiteness, and resistance to yellowing are important. The kraft process can use a wider range of fiber sources than most other pulping processes. All types of wood, including very resinous types like southern pine, and non-wood species like bamboo and kenaf can be used in the kraft process.

Byproducts and emissions: The main byproducts of kraft pulping are crude sulfate turpentine and tall oil soap. The availability of these is strongly dependent on wood species, growth conditions, storage time of logs and chips, and the mill's process. Pines are the most extractive-rich woods. The raw turpentine is volatile and is distilled off the digester, while the raw soap is separated from the spent black liquor by decantation of the soap layer formed on top of the liquor storage tanks. From pines the average yield of turpentine is 5–10 kg/t pulp, and that of crude tall oil is 30–50 kg/t pulp. Various byproducts containing hydrogen sulfide, methyl mercaptan, dimethyl sulfide, dimethyl disulfide, and other volatile sulfur compounds are the cause of the malodorous air emissions characteristic of pulp mills utilizing the kraft process. The sulfur dioxide emissions of kraft-pulp mills are much lower than those from sulfite mills. In the ambient air outside a typical modern kraft-pulp mill, the sulfur-dioxide odour is perceivable only during disturbance situations, for example when the mill is shut down for a maintenance break, or when an extended power outage occurs. Control of odours is achieved through the collection and burning of these odorous gases in the recovery boiler alongside the black liquor. In modern mills, where well-dried solids are burned in the recovery boiler, hardly any sulfur dioxide leaves the boiler. At high boiler temperatures, the sodium released from the black liquor droplets reacts with sulfur dioxide, effectively scavenging it by forming odourless sodium sulfate crystals.

Byproducts and emissions: Pulp mills are almost always located near large bodies of water due to their substantial demand for water. Delignification of chemical pulps releases considerable amounts of organic material into the environment, particularly into rivers or lakes.
The wastewater effluent can also be a major source of pollution, containing lignins from the trees, high biological oxygen demand (BOD) and dissolved organic carbon (DOC), along with alcohols, chlorates, heavy metals, and chelating agents. The process effluents can be treated in a biological effluent treatment plant, which can substantially reduce their toxicity.
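To make the dilution factor described in the washing section above more concrete, here is a minimal Python sketch. It assumes a common industry definition (net wash water added per tonne of oven-dry pulp), which the article itself only states qualitatively; all the numbers are invented:

```python
def dilution_factor(wash_water_t, liquor_with_pulp_t, pulp_t):
    # Assumed common definition: net wash water added per tonne of
    # oven-dry pulp. DF = 0 means the wash water exactly replaces the
    # liquor leaving with the pulp; a higher DF gives cleaner pulp but
    # more liquor to evaporate downstream.
    return (wash_water_t - liquor_with_pulp_t) / pulp_t

# Invented example: 1 t of pulp leaves the washer carrying 7 t of liquor
# (roughly 12.5% consistency) and is washed with 9.5 t of water.
print(f"DF = {dilution_factor(9.5, 7.0, 1.0):.1f} t water / t pulp")
```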
**Enhanced Variable Rate Codec**

Enhanced Variable Rate Codec: Enhanced Variable Rate CODEC (EVRC) is a speech codec used in CDMA networks. It was developed in 1995 to replace the QCELP vocoder, which used more bandwidth on the carrier's network; EVRC's primary goal was thus to offer mobile carriers more capacity on their networks without increasing the amount of bandwidth or wireless spectrum needed. EVRC uses RCELP technology.

Enhanced Variable Rate Codec: EVRC compresses each 20 milliseconds of 8000 Hz, 16-bit sampled speech input into output frames of one of three different sizes: full rate – 171 bits (8.55 kbit/s), half rate – 80 bits (4.0 kbit/s), or eighth rate – 16 bits (0.8 kbit/s). A quarter rate was not included in the original EVRC specification and eventually became part of EVRC-B. EVRC was replaced by SMV, which has in turn been replaced by the newer CDMA2000 4GV codecs. 4GV is the next-generation 3GPP2 standards-based EVRC-B codec, designed to allow service providers to dynamically prioritize voice capacity on their networks as required.

Enhanced Variable Rate Codec: EVRC can also be used in the 3GPP2 container file format, 3G2.
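The quoted bitrates follow directly from the frame sizes above, since each frame covers 20 ms. A minimal Python sketch of that arithmetic, including the compression ratio against the raw 8000 Hz, 16-bit input (the raw-PCM comparison is an added illustration, not a figure from the article):

```python
FRAME_MS = 20                         # each EVRC frame covers 20 ms of speech
SAMPLE_RATE_HZ, SAMPLE_BITS = 8000, 16

frame_bits = {"full": 171, "half": 80, "eighth": 16}

# 20 ms of 8 kHz 16-bit speech = 160 samples = 2560 bits (128 kbit/s raw)
raw_bits = SAMPLE_RATE_HZ * FRAME_MS // 1000 * SAMPLE_BITS

for name, bits in frame_bits.items():
    kbps = bits / FRAME_MS            # bits per millisecond == kbit/s
    print(f"{name:>6} rate: {bits:3d} bits/frame = {kbps:.2f} kbit/s, "
          f"{raw_bits / bits:.0f}x smaller than raw PCM")
```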
**Internet industry jargon**

Internet industry jargon: Internet industry jargon is a distinctive way of speaking used by people working in the internet industry. It reflects how those people talk and communicate with each other in their work setting and can vary across language cultures in different countries. The jargon consists of familiar words found in daily life, but combined and used in the internet industry to create new concepts that describe and express specific ideas. This jargon is used intensively in speech, and it is often hard for people outside the industry to understand what is being said even though every individual word seems familiar.

Creation and development: In the era of technology, the internet industry develops at a very fast pace. As the industry's dominance has expanded, so has its vocabulary. The startup ecosystem is rife with buzzwords, and the language of the industry has also become trendy in the corporate world. Internet industry jargon represents the mindset of people in this industry. It is used to express ideas more specifically and to mark group identification. Sometimes people in this industry use esoteric jargon to make their ideas appear professional and high-end.

Examples and definitions: Internet industry jargon carries the language habits and cultural background from which it develops. The following list covers some examples of internet industry jargon, their definitions, and examples of usage in English-speaking countries and China. This list is not exhaustive and is subject to change with the renewal of the social environment and usage.
**Baer ring**

Baer ring: In abstract algebra and functional analysis, Baer rings, Baer *-rings, Rickart rings, Rickart *-rings, and AW*-algebras are various attempts to give an algebraic analogue of von Neumann algebras, using axioms about annihilators of various sets. Any von Neumann algebra is a Baer *-ring, and much of the theory of projections in von Neumann algebras can be extended to all Baer *-rings. For example, Baer *-rings can be divided into types I, II, and III in the same way as von Neumann algebras. In the literature, left Rickart rings have also been termed left PP-rings. ("Principal implies projective": see definitions below.)

Definitions: An idempotent element of a ring is an element e which has the property that e² = e. The left annihilator of a set X ⊆ R is {r ∈ R ∣ rX = {0}}. A (left) Rickart ring is a ring satisfying any of the following conditions:
the left annihilator of any single element of R is generated (as a left ideal) by an idempotent element;
(for unital rings) the left annihilator of any element is a direct summand of R;
all principal left ideals (ideals of the form Rx) are projective R-modules.
A Baer ring has the following definitions:
the left annihilator of any subset of R is generated (as a left ideal) by an idempotent element;

Definitions: (for unital rings) the left annihilator of any subset of R is a direct summand of R.
For unital rings, replacing all occurrences of 'left' with 'right' yields an equivalent definition; that is to say, the definition is left-right symmetric. In operator theory, the definitions are strengthened slightly by requiring the ring R to have an involution ∗: R → R. Since this makes R isomorphic to its opposite ring R^op, the definition of Rickart *-ring is left-right symmetric.

Definitions: A projection in a *-ring is an idempotent p that is self-adjoint (p* = p). A Rickart *-ring is a *-ring such that the left annihilator of any element is generated (as a left ideal) by a projection. A Baer *-ring is a *-ring such that the left annihilator of any subset is generated (as a left ideal) by a projection. An AW*-algebra, introduced by Kaplansky (1951), is a C*-algebra that is also a Baer *-ring.

Examples: Since the principal left ideals of a left hereditary ring or left semihereditary ring are projective, it is clear that both types are left Rickart rings. This includes von Neumann regular rings, which are left and right semihereditary. If a von Neumann regular ring R is also right or left self-injective, then R is Baer. Any semisimple ring is Baer, since all left and right ideals are summands in R, including the annihilators. Any domain is Baer, since all annihilators are {0} except for the annihilator of 0, which is R, and both {0} and R are summands of R. The ring of bounded linear operators on a Hilbert space is a Baer ring and is also a Baer *-ring with the involution * given by the adjoint. von Neumann algebras are examples of all the different sorts of ring above.

Properties: The projections in a Rickart *-ring form a lattice, which is complete if the ring is a Baer *-ring.
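Since Z/6Z is a product of the fields Z/2Z and Z/3Z, it is semisimple and hence, by the examples above, a Baer ring. The following brute-force Python sketch (a toy verification, not part of the article) checks the defining condition directly: the annihilator of every subset equals the ideal generated by some idempotent.

```python
from itertools import combinations

n = 6  # the ring Z/6Z; commutative, so left and right annihilators agree
R = range(n)

def annihilator(X):
    # {r in R : rX = {0}}
    return {r for r in R if all((r * x) % n == 0 for x in X)}

def ideal(e):
    # the (left) ideal generated by e in Z/nZ
    return {(r * e) % n for r in R}

idempotents = [e for e in R if (e * e) % n == e]  # here: [0, 1, 3, 4]

# Baer condition: the annihilator of every subset is generated by an idempotent.
for size in range(n + 1):
    for X in combinations(R, size):
        ann = annihilator(X)
        assert any(ideal(e) == ann for e in idempotents), (X, ann)

print("Z/6Z satisfies the Baer condition; idempotents:", idempotents)
```

For instance, the annihilator of 2 is {0, 3}, generated by the idempotent 3, while the annihilator of 3 is {0, 2, 4}, generated by the idempotent 4.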
**Interactive planning**

Interactive planning: Interactive planning is a concept developed by Russell L. Ackoff, an American theorist, early proponent of the field of operations research, and recognized pioneer in systems thinking. Interactive planning forwards the idea that in order to arrive at a desirable future, one has to create a desirable present and create ways and means to resemble it. One of its unique features is that development should be ideal-oriented. Interactive planning is unlike other types of planning, such as reactive planning, inactive planning, and preactive planning.

Interactive planning: This is because interactive planning is focused on systems thinking and is "based on the belief that an organization's future depends at least as much on what it does between now and then, as on what is done to it." The organization will then create its future by continuously closing the gap between its current state and its desirable current state. The overall result of a case-based approach conducted by Haftor suggests that IP is a powerful methodology for guiding organizational development. Interactive planning (IP) is a procedure that prescribes how to develop and manage social systems, e.g. organizations, whether business or any other kind. Ackoff (1981) expresses the intention of IP in the following terms: "The objective of interactive planning is an effective pursuit of an idealized state. The state is formulated as a design of that system with which the current system's stakeholders would replace it if they were free to do so. Such a system should be technologically feasible and operationally viable, and it should provide the system with an ability to learn and adapt quickly and effectively." Interactive planning promotes democratic control by allowing and facilitating the active participation of various stakeholders in the conceptualization and formulation of programs, projects, strategies and techniques. This empowering shift enables the stakeholders to become committed, engaged and grounded decision-makers. Interactive planning, therefore, according to Zeynep Ocak, "expands participants' conception of what is possible and reveals that the biggest obstructions to achieving the future most desired are often self-imposed constraints." Interactive planning also promotes ownership and hence enables the active engagement of stakeholders. It helps map the organization's current standing vis-à-vis its desired future state. As such, interactive planning enables the organization and its members to be reflexive and self-critical in its process of unfolding and becoming. This "interactive and interpretative process" is the essence of "collaborative planning". This method makes the plan itself an indispensable resource of the organization because of its groundedness and correspondence with the organization's building blocks, namely its policies, human capital, technologies and financial resources, among others. As a living document, it serves as a built-in mechanism to forge dialogue and discussion among the internal and external stakeholders of the organization. Interactive planning seeks to "facilitate exchange of knowledge between stakeholders, consensus building among them, and group-learning processes." This collaborative approach to planning apprehends problems as interrelated realities; they are not viewed as mutually exclusive.
Considering the strong Systems Thinking influence in interactive planning, problems are viewed in their totality and in the context of their specific details in relation to the social environment where they are situated. Interactive planning has three unique characteristics. First, interactive planning works backwards from where an organization wants to be now to where it is now.

Interactive planning: Second, interactive planning is continuous; it does not start and stop. Third, interactive planning lets the organization's stakeholders be involved in the planning process. Interactive planning has six phases, divided into two parts: Idealization and Realization.

Idealisation: Formulating the mess
This is the process of understanding the organization's current state, capabilities, and the changes necessary for improvement. To adequately begin formulating the mess, great effort must be taken to understand and learn about the current state of an organization and its environment. This can be called the "state of the organization" or situational analysis. Formulating the mess by understanding the state of the organization and by creating a reference projection will enable the organization to "control or influence" its future. This essentially involves appreciating the situation in depth. This phase is about comprehensively understanding the organization, from the factors that affect its functioning and impinge upon its effectiveness and efficiency to those that influence its future direction. According to Ackoff, in the past, situation analysis consisted of identifying the individual problems confronting an organization, classifying these problems as threats and opportunities, prioritizing them, and then focusing the planning in a way that reflected the priorities set on these elements. By the 1970s, planners began to realize that reality does not consist of sets of independent problems, but of a system of interacting problems. Therefore, by dealing with any one problem separately, we ignore the fact that the solution to that problem will interact with other problems. So it became necessary to find a way of formulating reality as a system rather than as a set of independent problems. We call such a system a "mess." One of the characteristics of interactive planning is a methodology for formulating messes and treating them through design. There are four elements in formulating the "mess" during the interactive planning process: systems analysis, obstruction analysis, reference projection, and reference scenario. The State of the Organization (i.e. systems analysis) provides a detailed description of what the organization currently does. It can be illustrated by flow charts; identification of rules and customs practiced by the organization; disclosure of internal and external conflicts which affect organization performance; and identification of trends that could affect organization performance if the organization continued on the same behavioral pattern, failing to adapt to the constant changes in the environment.

Idealisation: The progress of an organization may be hindered or disrupted by discrepancies as well as by conflicts, which are the theme of obstruction analysis. These may be conflicts between individual members within the organization, over the availability of resources, or other existing conflicts.
By beginning to analyze the current operation of the system (the state of the organization), elements that obstruct progress become clearly visible when creating a reference projection. A reference projection is a projection of the future of the organization based on two false assumptions: (1) there will be no change in the organization's behavior; and (2) the relevant future predicted by the organization is complete and correct. The reference projection should be formed from the critical success factors of the organization whose continuation would lead to its destruction in its projected future. This method of analysis will show "how and why the organization will destroy itself." Critical success factors could include an organization's expenses, revenues, return on investment, and safety records. In all instances, the reference projection should sufficiently illustrate the future of the organization if it did not change its behavior (a toy numerical sketch of such a projection appears after the Idealized Design section below). In this sense the reference projection provides a means to identify potentially limiting business strategies and suggest ways to avoid serious future failures. The implications arising from systems analysis, obstruction analysis, and reference projection combined are brought together in the reference scenario, enabling organizational members to comprehend the ramifications projected for the organization and to collaborate on building the necessary changes.

Idealisation: Ends planning
This is the process of defining what the organization desires to be at the present state, and identifying gaps between the desired present and the current reference projections of the organization. This process is achieved by making explicit exactly what is wanted by an organization. In an organization, three specific types of ends are ideals, objectives, and goals. This phase works towards identifying the end state that needs to be pursued.

Idealisation: Ideals, like certain limits in mathematical formulas, can be approached endlessly but can never be reached; the gap between the current state of the organization and an ideal can therefore be narrowed but never fully closed. Objectives are a type of ends that are only attainable in the long run.

Idealisation: Goals are a type of ends that are attainable in the short run. There are three components to achieve ends planning: idealized design (an ideal design by organizational designers which is viable, feasible, malleable, adaptable, controllable and accessible for improvement), design of management systems, and organizational design. The identification of participants in ends planning necessitates that it is composed of individuals who are immersed in the project or program, "diverse in thought and in gender", "capable of thinking outside the box" and appreciative of the role of research and development (R&D) in organizational development.

Implementation and Control of Interactive Planning Methodology: From Ocak (2015), the interactive planning methodology includes the identification of five key factors that can significantly influence the success of implementation of the new system redesign. This phase identifies the accountability for the what, when, where and how that will make the new system redesign a reality. The Human Factor is one of the important aspects for ensuring success in the process of interactive planning; it recognizes the people involved in the realization of the idealized system redesign. The Organizational Factor acknowledges that making a change in an organization requires the support of management at all levels.
This means that throughout the interactive planning process, management should be involved and engaged. The Work Factor suggests that any change in the way people work requires constant interaction with people to keep the whole organization on board. This will not only ensure the success of the system redesign, but will also give opportunities for enhancements that reinforce the plan. The Technology Factor is an important tool to utilize for communicating and aligning the initiatives of the plan; online platforms trigger collaboration and provide an organized way of disseminating information.

Implementation and Control of Interactive Planning Methodology: The Commitment Factor differentiates the people in the organization who are committed to the cause from those who are merely compliant. This factor suggests that, in principle, all of the people involved in the interactive planning process can influence the way the plan will be implemented. According to Ocak (2015), this last phase in the interactive planning methodology is characterized by design controls as the plan is being implemented and activated. Crucial tasks should be time-bound and assigned to a specific working group. These tasks can be specified in terms of the action item to be done and the method of doing it. A regular update of these tasks is necessary to ensure movement and to properly address hurdles as they come up. All this ensures the efficient activation of the interactive planning process.

Evaluation of Interactive Planning Methodology: Eriksson (2007), in Giannaris (2011), evaluated IP's empirical usefulness in the development of a medical department at a pharmaceutical company. This time as well, he devised fifteen steps "in terms of [...] Postulates of Interactive Planning, [which] were used as a guide for the actual use of [Interactive Planning] [..] and also [served] as criteria for its evaluation" (p. 4).

Realization: Means planning
This is the process of determining what needs to be done to close the gaps between the desired present and the current reference projections of the organization. The main objective of means planning is to determine how the gaps are to be closed or reduced, which provides the instructions needed to close or reduce those gaps. Means come in different forms, depending on the complexity of the gaps. Types of means include acts, courses of action or procedures, practices, processes, projects, programs, and policies. If ends planning is about the "what", means planning is about the "how". This phase seeks to open a deliberation on the enablers that are required in order to realize the objectives and on the approach that needs to be adopted in order to successfully deliver on the program. The reference scenario is correlated with the idealized design in order to establish the process of filling the gaps (by resolving, solving or absolving). Predicaments are always in existence, and providing a control mechanism as well as a monitoring system can support the process. Establishing a diverse way of addressing the gaps presented by the reference scenario and the idealized design entails looking at the way obstruction variables interact and the outcomes produced.

Realization: Resource planning
This is the process of identifying what resources are needed, when they will be needed, and what to do in case of shortages or excesses.
Resource planning relies heavily on means planning, because means planning provides the basis for how many resources will be needed. Typically, five types of resources are focused on in the resource planning stage of interactive planning: money, capital goods, people, consumables, and data. For each resource type, the questions to be addressed are: How much will be required, where, and when? How much will be available at the required time and place? How should each shortage or excess be treated? This phase concerns anticipating and forecasting the resources, including approaches, tools, information and knowledge, that enable implementation of plans on the ground.

Realization: A subsequent step that follows means planning is assessing the consequences for resources (inputs, finances, facilities, staff, services, equipment, etc.).

Realization: Design of implementation
This is the process of determining "who, what, when, where and how" the plan will be put into action. Implementation is achieved by creating specific instructions based on the means selected during the means planning process. It consists of implementation decisions and expectations, which should always be monitored and controlled. In addition, the individuals who make the decisions should be available to those responsible for carrying out the decisions.

Realization: Design of controls
This is the process of deciding how to monitor the implementation of planned decisions and how to evaluate the plans (whether they are effective or not) once implemented. In the control process, procedures are created to identify expectations, monitor decisions, diagnose problems, prescribe corrective action, and provide feedback to facilitate organizational learning and adaptation. Interactive planning can be used to design and implement many different areas of management systems. For example, interactive planning can be used to assess whether or not an organization's occupational safety and health goals meet its present and future needs and are seen as a vital part of a corporation's ongoing success. In addition, interactive planning establishes a model for evaluating, comprehending, and initiating change management within a corporation's safety and health program. This model enables environmental health and safety professionals to create a process safety management framework to perform a gap analysis of current work practices compared with current company means, in order to redirect resources. There are circumstances in which corporate safety management needs to realign the prism to provide clear, concise direction to employees and/or responsible senior executives.

Realization: Comparisons with value-stream mapping (VSM) and the organization performance model (OPM)
Interactive planning is similar to the value-stream mapping (VSM) process in the sense that both map out the current state and lay out the path towards a future or an ideal state. However, the two differ in that VSM focuses more on the material and information flows in a value stream or supply chain, while interactive planning focuses more on organizational dynamics.
Moreover, VSM's primary intent is to expose losses and wastes, and although it also involves basic action-planning, it does not include a deliberate design of controls as interactive planning does. Interactive planning is also similar to the organization performance model (OPM) in the sense that both are concerned with organizational design, and both involve an assessment phase and a redesign phase (termed the idealization and realization parts in the case of interactive planning). However, OPM focuses more on the assessment and redesign of the cultural elements of an organization, while interactive planning covers the broader organizational system.

Idealized Design: Idealized design is the conceptual process in interactive planning. The concept rests on a presumed failure of an organization: the environment and resources that supported the life of the previous organization still exist, and the decision is made to design a new system that will replace the old order "right now", subject to two constraints (technological feasibility and operational viability) and one requirement (the ability to learn and adapt rapidly and effectively). Organizational planning, in the process of replacing a previously failed organization, uses the interactive planning concept in designing a new system for the new organization. Russell L. Ackoff coined this concept during a 1951 conference at Bell Laboratories in New Jersey. In the meeting, the Vice President of Bell Labs made a hypothetical statement: "Gentlemen, the telephone system of the United States was destroyed last night." An issue was raised about the company's research and development system, with the focus of the change on designing an improved whole system, which would then translate into redesigning and developing parts that fit the whole. Framing the new design as starting "now" rather than at any later time eliminates a potential source of error and focuses the organization's direction on realizing the new system in the present rather than at a later time. Also, the concept is better realized when there are virtually no constraints on the new design process; the modification towards the new system then becomes more feasible and adaptable to changing internal and external conditions over time.

Idealized Design: Technological Feasibility Constraints
Only the technology and knowledge available at the time of the design are practical means for the design of a system to be considered. Imaginative techniques, like mental telepathy, or concepts which appear to be "science fiction" at the time of the design, should not be included in the planning activity.

Operational Viability Constraints
The design process should adhere to its immediate environment during the planning activity. This includes current laws, regulations, and approval standards, where appropriate, because the new design is intended to be operationally viable in the existing application and use of an organization.

Learning and Adaptation
An organization is a learning organization that is capable of adapting to internal and external changes. It has the capability of redesigning itself, allowing its internal and external stakeholders to enhance its performance and proactively anticipate change.
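The reference projection described under "Formulating the mess" can be illustrated with a toy extrapolation. The Python sketch below is purely hypothetical (all figures are invented): it projects two critical success factors, revenue and expenses, under the reference projection's false assumption of unchanged behavior, showing "how and why the organization will destroy itself":

```python
# Invented figures: revenue grows 2% per year, expenses grow 6% per year.
# Under the reference projection's assumption of no behavioral change,
# find the year in which expenses overtake revenue.
revenue, expenses = 100.0, 80.0   # arbitrary starting units
year = 0
while expenses <= revenue and year < 100:
    revenue *= 1.02
    expenses *= 1.06
    year += 1
print(f"Reference projection: expenses overtake revenue in year {year}")
```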
Criticism of Interactive Planning: As with any concept, interactive planning has its share of criticisms, most notably from a Marxist-oriented sociological point of view. According to Jackson (2000), as cited in Haftor (2011), there are two key issues raised against IP: (1) it does not challenge the power structures present in a managerial structure; and (2) in the execution of the IP process, not all stakeholders have a voice.

Criticism of Interactive Planning: In defense of IP, Ackoff: (1) shared his experience that he had yet to encounter a situation where power conflicts could not be addressed; and (2) while he recognized that not all stakeholders are actively involved in the process, he proposed that those who do not have full participation may "initially be affiliated to the projects as consultants, and thereafter, as work progresses, be recognized as full members of the project and its decision-making." Interactive communication in interactive planning can at times lead to a subjective formation of the structure for policy making or for the redevelopment of an organizational system. Such a subjective process for drawing up a new format for key organizational decisions may lead to an insubstantial structure and affect the quality of the decisions being made. When management personalities are mixed with ordinary employees, intimidation may occur, and the voices of the employees or workers who are members of the committee may not carry much weight, thus weakening the goal of achieving concrete solutions to present problems in the organization or company (de Jong and Geerlings, 2003, p. 5).

Criticism of Interactive Planning: Policy making, or the redevelopment of an existing policy or system in the organization or company, must be based on research by experts on the specified topics or departments. Interactive planning that mixes subject-matter experts with employees from different departments may result in popular decisions rather than thoroughly processed decisions backed by data, research, and recommendations from experts (de Jong and Geerlings, 2003, p. 8).

Criticism of Interactive Planning: Interactive planning may not be suitable for decision making that involves technical subjects. If interactive planning is implemented in matters involving public policy, research and technical recommendations from experts must be gathered first, and the leaders may then convene over the recommendations stated in the research so as to arrive at a sounder policy (de Jong and Geerlings, 2003, p. 9).

Criticism of Interactive Planning: Also, in the interactive planning process, technical experts may reduce their efforts to address issues when they are mixed with other employees; the dialogue may drift toward explaining issues rather than focusing on solving complex problems and providing in-depth research on the topic. Time is then not maximized to seek higher-quality solutions, but is instead spent harmonizing the understanding of concepts and arriving at a more popular consensus (de Jong and Geerlings, 2003, p. 9).

Criticism of Interactive Planning: The interactive planning process or steps can be revised by segregating the participants into their respective fields of expertise, having them research a given topic, and having their recommendations presented to the larger group. Doing this gives the larger group more in-depth knowledge about specific areas of concern and a more substantial basis for making decisions on policies (de Jong and Geerlings, 2003, p. 9).
Criticism of Interactive Planning: Also, the quality of the content presented during interactive planning can become secondary to the degree of communication skill shown during the discussion. The ability to persuade members of the committee can influence the decision made by the rest of the committee members. To balance this, presenters of recommendations must show tangible data that can back up their statements and arguments so as to produce a more informed body of decision makers. On the other hand, a lack of communication skills may leave substantial data and recommendations unheard. Hence, both quality research and good communication skills are important in the interactive planning process (de Jong and Geerlings, 2003, pp. 9-10).

Criticism of Interactive Planning: However, it is the duty of the presenters to consider their audience and their level of understanding of the technical subject, and to use terms that can be generally understood by the members of the committee. The communication aspect must complement the quality of the information or knowledge presented, without letting persuasion overrule the truthfulness and accuracy of the information (de Jong and Geerlings, 2003, p. 10).

Criticism of Interactive Planning: One caution in using the interactive planning approach for redeveloping an organization's or company's system, or for policy making, is the probability of biased comments from the same employees who took part in the previous creation of policy in the organization or company. One way to remedy this is to include new members of the organization and have them prepare well for the convention so as to gain confidence in presenting their recommendations (Haftor, 2011).

Criticism of Interactive Planning: Another criticism of interactive planning is its tendency to duplicate old policies, so that it may not offer a genuinely breakthrough developmental plan or public policy. Also, the selection of the participants in the convention determines the quality of information to be delivered and the amount of contribution to be made to the interactive planning project (Haftor, 2011).

Criticism of Interactive Planning: An important criticism of interactive planning is also the uneven commitment level of the team members. The degree of involvement of top-level management is likewise a factor in whether the decisions made by the team can be fully implemented. To remedy this, the facilitator of the interactive planning project must secure the full commitment of the members as well as of the top-level managers (Leemann, 2002).

Basic Groups of Participants in Interactive Planning: Planning is an integral process, with sub-processes in time and space, that deals with the preparation, formulation and fruition of decisions about the future. Dror, on the other hand, defined planning as a process, whether in formal or legal matters, wherein decisions have to be approved and implemented by some other body, or a decision maker: a set of plans and consequences that needs to be laid down for a decision maker to make an appropriate and reasonable selection. Milovanovic believed that there is an entire chain of a defined process of decision making, and that in interactive planning there are various participants with various intents. Laurini suggested that there are three essential links between the participants in interactive planning: politicians, planners and citizens.
Healey delineated the participant groups who are significant in the conversation about problems in the community: technical experts, media, neighbours, activists, interested citizens, business and industrial teams, elected officials, and administrators at the local, regional and national levels. However, interested parties are not always participant parties. According to Healey, participants can be divided into three main clusters: the income-based sector, the administration, and non-governmental organizations as a form of organized citizens. For them, the classifications of the participants are anchored in their roles in decision-making: the proposers, the decision makers, and the public as the third and last interested party.

Basic Groups of Participants in Interactive Planning: Milovanovic stressed that in interactive planning, awareness is the paramount motive for in-depth participation from the different groups. The quality of a plan is highly dependent on the quality of life and the quality of space attached to the plan. Citizens as participants in the plan are strongly concerned only with those outcomes that will affect them personally. Differences in capabilities, resources, and qualifications also influence the quality of the decisions. Because of these differences, Milovanovic suggested creating a division of the participants to ensure the participation of the public and other interest groups.

Incorporating Diversity and Inclusion in Interactive Planning: Diversity refers to the traits and characteristics that make people unique, while inclusion refers to the behaviors and social norms that ensure people feel welcome. Diversity can range from differences in races, ethnicities, genders, ages, religions, disabilities, and sexual orientations. It also encompasses differences in education, personalities, skill sets, experiences, and knowledge bases. The word interactive, from a people perspective, is defined as two people influencing each other. In planning, interaction is more fruitful when different perspectives are brought to the table. Varying perspectives arise from individual personalities with unique life experiences. Diversity is expressed in welcoming participants from different walks of life. Inclusion is realized and fulfilled when the perspectives opined by the participants are taken into consideration.

Incorporating Diversity and Inclusion in Interactive Planning: Although there is beauty in uniformity, there is more in diversity, and more so in inclusion.
**Opticin**

Opticin: Opticin is a protein that in humans is encoded by the OPTC gene. Opticin belongs to class III of the small leucine-rich repeat protein (SLRP) family. Members of this family are typically associated with the extracellular matrix. Opticin is present in significant quantities in the vitreous of the eye and also localizes to the cornea, iris, ciliary body, optic nerve, choroid, retina, and fetal liver. Opticin may noncovalently bind collagen fibrils and regulate fibril morphology, spacing, and organization. The opticin gene maps to a region of chromosome 1 that is associated with the inherited eye diseases age-related macular degeneration (AMD) and posterior column ataxia with retinitis pigmentosa (AXPC1).
**Apple Inc. v. Samsung Electronics Co.**

Apple Inc. v. Samsung Electronics Co.: Apple Inc. v. Samsung Electronics Co., Ltd. was the first of a series of ongoing lawsuits between Apple Inc. and Samsung Electronics regarding the design of smartphones and tablet computers; between them, the companies made more than half of the smartphones sold worldwide as of July 2012. In the spring of 2011, Apple began litigating against Samsung in patent infringement suits, while Apple and Motorola Mobility were already engaged in a patent war on several fronts. Apple's multinational litigation over technology patents became known as part of the mobile device "smartphone patent wars": extensive litigation in fierce competition in the global market for consumer mobile communications. By August 2011, Apple and Samsung were litigating 19 ongoing cases in nine countries; by October, the legal disputes had expanded to ten countries. By July 2012, the two companies were still embroiled in more than 50 lawsuits around the globe, with billions of dollars in damages claimed between them. While Apple won a ruling in its favor in the U.S., Samsung won rulings in South Korea, Japan, and the UK. On June 4, 2013, Samsung won a limited ban from the U.S. International Trade Commission on sales of certain Apple products after the commission found Apple had violated a Samsung patent, but this was vetoed by U.S. Trade Representative Michael Froman. On December 6, 2016, the United States Supreme Court decided 8–0 to reverse the first-trial decision that had awarded nearly $400 million to Apple, returning the case to the Federal Circuit to define the appropriate legal standard for the "article of manufacture", which need not be the smartphone itself but could be just the case and screen to which the design patents relate.

Origin: On January 4, 2007, four days before the iPhone was introduced to the world, Apple filed a suite of four design patent applications covering the basic shape of the iPhone. These were followed up in June of that year with a massive filing of a color design patent covering 193 screenshots of various iPhone graphical user interfaces. It is from these filings, along with Apple's utility patents, registered trademarks and trade dress rights, that Apple selected the particular intellectual property to enforce against Samsung. Apple sued its component supplier Samsung, alleging in a 38-page federal complaint on April 15, 2011, in the United States District Court for the Northern District of California, that several of Samsung's Android phones and tablets, including the Nexus S, Epic 4G, Galaxy S 4G, and the Samsung Galaxy Tab, infringed on Apple's intellectual property: its patents, trademarks, user interface and style. Apple's complaint included specific federal claims for patent infringement, false designation of origin, unfair competition, and trademark infringement, as well as state-level claims for unfair competition, common law trademark infringement, and unjust enrichment. Apple's evidence submitted to the court included side-by-side image comparisons of the iPhone 3GS and the i9000 Galaxy S to illustrate the alleged similarities in packaging and icons for apps.
However, the images were later found to have been tampered with to make the dimensions and features of the two different products seem more similar, and counsel for Samsung accused Apple of submitting misleading evidence to the court. Samsung counter-sued Apple on April 22, 2011, filing federal complaints in courts in Seoul, South Korea; Tokyo, Japan; and Mannheim, Germany, alleging Apple infringed Samsung's patents for mobile-communications technologies. By summer, Samsung had also filed suits against Apple in the British High Court of Justice, in the United States District Court for the District of Delaware, and with the United States International Trade Commission (ITC) in Washington, D.C., all in June 2011.

South Korean courts: In Seoul, Samsung filed its lawsuit in April 2011 in the Central District Court, citing five patent infringements.

South Korean courts: In late August 2012, a three-judge panel in Seoul Central District Court delivered a split decision, ruling that Apple had infringed upon two Samsung technology patents, while Samsung violated one of Apple's patents. The court awarded small damages to both companies and ordered a temporary sales halt of the infringing products in South Korea; however, none of the banned products were the latest models of either Samsung or Apple. The court ruled that Samsung violated one of Apple's utility patents, over the so-called "bounce-back" effect in iOS, and that Apple was in violation of two of Samsung's wireless patents. Apple's claims that Samsung copied the designs of the iPhone and iPad were deemed invalid. The court also ruled that there was "no possibility" that consumers would confuse the smartphones of the two brands, and that Samsung's smartphone icons did not infringe upon Apple's patents.

Japanese courts: Samsung's complaint in Japan's Tokyo District Court cited two infringements. Apple has filed other patent suits in Japan against Samsung, most notably one over the "bounce-back" feature. Samsung has also sued Apple, claiming the iPhone and iPad infringe on Samsung patents.

Japanese courts: On August 31, 2012, the Tokyo District Court ruled that Samsung's Galaxy smartphones and tablets did not violate an Apple patent on technology that synchronizes music and videos between devices and servers. The three-judge panel in Japan also awarded legal costs to be reimbursed to Samsung. Presiding Judge Tamotsu Shoji said: "The defendant's products do not seem like they used the same technology as the plaintiff's products so we turn down the complaints made by [Apple]."

German courts: In August 2011, the Landgericht court in Düsseldorf, Germany, granted Apple's request for an EU-wide preliminary injunction barring Samsung from selling its Galaxy Tab 10.1 device, on the grounds that Samsung's product infringed on two of Apple's interface patents. After Samsung's allegations of evidence tampering were heard, the court rescinded the EU-wide injunction and granted Apple a lesser injunction that applied only to the German market. Samsung also pulled the Galaxy Tab 7.7 from Berlin's IFA electronics fair due to the ruling preventing marketing of the device, before the court was set to make its ruling in September 2011. According to an estimate by Strategy Analytics, the impact on Samsung in Germany could have cost up to half a million unit sales.
In the same period, and as part of a related legal strategy, Apple filed contemporaneous suits in the same German court against Motorola with regard to the Xoom and against the German consumer electronics reseller JAY-tech, both for design infringement claims seeking preliminary injunctions. On September 9, 2011, the German court ruled in favor of Apple, with a sales ban on the Galaxy Tab 10.1. The court found that Samsung had infringed Apple's patents. Presiding judge Johanna Brueckner-Hofmann said there was a "clear impression of similarity". Samsung appealed the decision. In March 2012, the Mannheim state court judges dismissed both the Apple and Samsung cases involving ownership of the "slide-to-unlock" feature used on their respective smartphones. The New York Times reported that the German courts were at the center of patent fights among technology company rivals. In July 2012, the Munich Higher Regional Court (Oberlandesgericht München) affirmed the lower Regional Court's denial of Apple's motion for a preliminary injunction on Apple's allegation that Samsung infringed Apple's "overscroll bounce" patent; the appellate court's appealable ruling affirmed the lower court's February decision doubting the validity of Apple's patent. On September 21, the Mannheim Regional Court ruled in favour of Samsung, finding that it did not violate Apple's patented features with regard to touch-screen technology.

French and Italian courts: Shortly after the release of the iPhone 4S, Samsung filed motions for injunctions in courts in Paris and Milan to block further Apple iPhone sales in France and Italy, claiming the iPhone infringed on two separate patents of the Wideband Code Division Multiple Access standard. Samsung reportedly singled out the French and Italian markets as key electronic communications markets in Europe, and by filing suit in a different court, avoided going back to the German court where it had lost a round earlier in its battle with Apple.

Dutch courts: Apple initially sued Samsung on grounds of patent infringement, specifically European patents 2.059.868, 2.098.948, and 1.964.022. On 24 October 2011, a court in The Hague ruled that only a photo gallery app in Android 2.3 infringed a patent (EP 2.059.868), resulting in an import ban on three Samsung telephones (the Galaxy S, Galaxy S II, and Ace) running the infringing software. Phones running more recent versions of Android remained unaffected, which made the import and sale of the banned phone models with updated software still legal. This ruling was widely interpreted as favourable for Samsung, and an appeal by Apple may still be forthcoming. On September 26, 2011, Samsung counter-sued and asked the court for an injunction on sales of Apple's iPads and iPhones, on the grounds that Apple did not have the licenses to use 3G mobile technology. On October 14, the court ruled, denying the sales ban and stating that because 3G was an industry standard, Samsung's licensing offer had to meet FRAND (fair, reasonable and nondiscriminatory) terms.
The court found that Samsung's fee was unreasonable, but noted that, if the companies could not agree on a fair and reasonable licensing fee, Samsung could open a new case against Apple. In late October 2011, the civil court in The Hague ruled for Apple, rejecting Samsung's infringement arguments and denying Samsung's motion made there; Samsung appealed the decision, and in January 2012 the Dutch appeals court overruled the civil court decision, rejecting Apple's claim that Samsung's Galaxy Tab 10.1 infringed its design rights.

Australian courts: Also in 2011, an Australian federal court granted Apple's request for an injunction against Samsung's Galaxy Tab 10.1. Samsung agreed to an expedited appeal of the Australian decision in the hope that if it won its appeal before Christmas, it might salvage holiday sales that it would otherwise lose. Ultimately, the injunction Apple sought to block the Tab 10.1 was denied by the High Court of Australia. In July 2012, an Australian judge started hearing the companies' evidence in a trial anticipated to take three months.

British courts: Samsung applied to the High Court of Justice, Chancery Division, in Samsung Electronics (UK) Limited & Anr v. Apple Inc., for a declaration that its Galaxy tablets were not too similar to Apple's products. Apple counterclaimed, but Samsung prevailed after a British judge ruled Samsung's Galaxy tablets were not similar enough to be confused with Apple's iPad. In July 2012, Birss J denied Samsung's motion for an injunction blocking Apple from publicly stating that the Galaxy infringed Apple's design rights, but ordered Apple to publish a disclaimer on Apple's own website and in the media stating that Samsung did not copy the iPad. The judge stayed the publishing order, however, until Apple's appeal was heard in October 2012. When the case reached the court of appeal, the previous ruling was upheld, meaning that Apple was required to publish a disclaimer on its own website and in the media stating that Samsung did not copy the iPad.

U.S. courts: First U.S. trial
In two separate lawsuits, Apple accused Samsung of infringing on three utility patents (United States Patent Nos. 7,469,381, 7,844,915, and 7,864,163) and four design patents (United States Patent Nos. D504,889, D593,087, D618,677, and D604,305). Samsung accused Apple of infringing on United States Patent Nos. 7,675,941, 7,447,516, 7,698,711, 7,577,460, and 7,456,893. One 2005 design patent at the heart of the dispute is Design Patent D504,889, which consists of a one-sentence claim about the ornamental design of an electronic device, accompanied by nine figures depicting a thin rectangular cuboid with rounded corners. A U.S. jury trial was scheduled for July 30, 2012, and calendared by the court through September 7, 2012. Both Phil Schiller and Scott Forstall testified in the Apple v. Samsung trial.

U.S. courts: First trial verdict
On August 24, 2012, the jury returned a verdict largely favorable to Apple. It found that Samsung had willfully infringed on Apple's design and utility patents and had also diluted Apple's trade dresses related to the iPhone. The jury awarded Apple $1.049 billion in damages and Samsung zero damages in its counter-suit. The jury found Samsung infringed Apple's patents on the iPhone's "Bounce-Back Effect" (US Patent No. 7,469,381), "On-screen Navigation" (US Patent No. 7,844,915), and "Tap To Zoom" (US Patent No.
7,864,163), as well as design patents covering iPhone features such as the "home button, rounded corners and tapered edges" (US Patent No. D593,087) and "On-Screen Icons" (US Patent No. D604,305). Design Patent D504,889 (describing the ornamental design of the iPad) was one of the few patents the jury concluded Samsung had not infringed. This amount was functionally reduced by the bond posted by Apple for the injunction granted during the trial (see below). U.S. courts: On October 23, 2012, the U.S. Patent and Trademark Office tentatively invalidated Apple's bounce-back patent (US Patent No. '381), possibly affecting the ruling in the Apple v. Samsung trial. Apple's attorneys filed a request to stop all sales of the Samsung products cited in violation of the US patents, a motion denied by Judge Lucy H. Koh on December 17, 2012; Judge Koh also decided that the jury had miscalculated US$400 million of its initial damages assessment and ordered a retrial. U.S. courts: Injunction of U.S. sales during first trial The injunction Apple sought in the U.S. to block Samsung smartphones such as the Infuse 4G and the Droid Charge was denied. Judge Koh ruled that Apple's claims of irreparable harm had little merit because, although Apple established a likelihood of success at trial on the merits of its claim that Samsung infringed one of its tablet patents, Apple had not shown that it could overcome Samsung's challenges to the patent's validity. Apple appealed Judge Koh's ruling, and on May 14, 2012, the appeals court reversed and ordered Judge Koh to issue the injunction. The preliminary injunction was granted in June 2012, preventing Samsung from making, using, offering to sell, selling, or importing into the U.S. the Galaxy Nexus and any other of its technology making use of the disputed patent. Simultaneously, Apple was ordered to post a US$95.6 million bond in the event that Samsung prevailed at trial. Following the trial, in which the Nexus was found not to infringe Apple's patents, Samsung filed an appeal to remove the preliminary injunction. On October 11, 2012, the appeals court agreed and vacated the injunction. A new hearing was held in March 2014, in which Apple sought to prevent Samsung from selling some of its current devices in the U.S. At the hearing, Judge Koh ruled against a permanent injunction. U.S. courts: First trial appeal In a post-verdict video interview, jury foreman Hogan said at the three-minute mark: "the software on the Apple side could not be placed into the processor on the prior art and vice versa, and that means they are not interchangeable," and at the 2:42-2:45 mark: "each patent had a different legal premise." Groklaw reported that this interview indicated the jury may have awarded inconsistent damages and ignored the instructions given to it. In an article on Gigaom, Jeff John Roberts contended that the case suggests that juries should not be allowed to rule on patent cases at all. Scott McKeown, however, suggested that Hogan's comment may have been poorly phrased. Some have claimed that there are a few oddities with the Samsung U.S. patent discussed by Hogan during the interview, specifically that the '460 patent has only one claim. Most US patents have between 10 and 20 separate claims, most of which are dependent claims. This patent was filed as a division of an earlier application, possibly in anticipation of litigation, which may explain the reduced number of claims.
The specifics of this patent have not been discussed in the Groklaw review or the McKeown review because most believe that the foreman misspoke when he mentioned the number of the patent in question; a more detailed interview with the BBC made it clear that the patent(s) relevant to the prior art controversy were owned by Apple, not Samsung, meaning that his mention of the "'460 patent" was a mistake. U.S. courts: On Friday, September 21, 2012, Samsung requested a new trial from the judge in San Jose, arguing that the verdict was not supported by evidence or testimony, that the limits the judge imposed on testimony time and the number of witnesses prevented Samsung from receiving a fair trial, and that the jury verdict was unreasonable. Apple filed papers on September 21 and 22, 2012 seeking a further amount of interest and damages totaling $707 million. A hearing was scheduled in U.S. District Court for December 6, 2012, to discuss these and other issues. On October 2, 2012, Samsung appealed the decision to the United States Court of Appeals for the Federal Circuit, requesting that Apple's victory be thrown out and claiming that the foreman of the jury had failed to disclose, despite having been asked during jury selection whether he had been involved in lawsuits, that he had been sued by Seagate Technology Inc., his former employer, which has a strategic relationship with Samsung. Samsung also claimed that the foreman had not revealed a past personal bankruptcy. The foreman responded that he had been asked during jury selection whether he had been involved in any lawsuits during the past 10 years, and that the events claimed by Samsung occurred before that time frame, although this account is not consistent with the actual question he was asked by the judge. Apple similarly appealed the decision vacating the injunction on Samsung's sales. U.S. courts: Leading up to a December 4, 2014 hearing at the United States Court of Appeals for the Federal Circuit, Samsung had noted that the USPTO had released preliminary and/or final findings of invalidity against some of the patents relevant to the first case, namely the so-called pinch-to-zoom patent 7,844,915. Samsung argued for, at the very least, a recalculation of the damages it owed in the case. On May 18, 2015, the Federal Circuit affirmed parts of the jury verdict, but vacated the jury's damages awards against the Samsung products that were found liable for trade dress dilution. U.S. courts: First trial controversy The ruling in the landmark patent case raised controversies over its impact on consumers and the smartphone industry. The jury's decision was described by Wired as 'Apple-friendly' and as a possible reason for increased costs (because of licensing fees to Apple) that subsequently affected Android smartphone users. A question was also raised about the validity of lay juries in the U.S. patent system, with the qualifications of the jury members deemed inadequate for a complex patent case; however, it was later revealed that the jury foreman Velvin Hogan was an electrical engineer and a patent holder himself. Hogan's post-verdict interviews with numerous media outlets raised a great deal of controversy over his role as the jury foreman. He told Bloomberg TV that his experience with patents had helped to guide the jurors' decisions in the trial. Juror Manuel Ilagan said in an interview with CNET a day after the verdict that "Hogan was jury foreman. He had experience.
He owned patents himself … so he took us through his experience. After that it was easier." The jury instructions, however, stated that jurors were to make decisions based solely on the law as instructed and "not based on your understanding of the law based on your own cases," so Hogan's account generated controversy. Hogan also told the Reuters news agency that the jury wanted to make sure the message it sent was not just a "slap on the wrist", and wanted the award to be sufficiently high to be painful, but not unreasonable. This remark is inconsistent with the jury instructions, which state that "the damages award should put the patent holder in approximately the financial position it would have been in had the infringement not occurred" and that "it is meant to compensate the patent holder and not to punish an infringer." Samsung appealed the decision, claiming jury misconduct; a new trial could be granted if the appeals court found that juror misconduct had occurred. Other questions were raised about the jury's quick decision. The jury was given more than 700 questions, including highly technical matters, to reach the verdict, and awarded Apple more than US$1 billion in damages after less than three days of deliberations. Critics claimed that the nine jurors did not have sufficient time to read the jury instructions. A juror stated in an interview with CNET that the jury decided after the first day of deliberations that Samsung was in the wrong. U.S. courts: First Retrial of damages amount from first U.S. trial In a damages-only retrial court session on November 13, 2013, ordered in relation to the first U.S. trial by Judge Koh in December 2012, Samsung Electronics acknowledged in a San Jose courtroom that Apple's hometown jury had found Samsung copied some elements of Apple's design. Samsung's attorney clarified the purpose of the damages-only retrial, stating, "This is a case not where we're disputing that the 13 phones contain some elements of Apple's property," but the company disputed the US$379.8 million that Apple claimed it was owed for Samsung's infringement; Samsung presented a figure of US$52 million. U.S. courts: On November 21, 2013, the jury awarded a new figure of US$290 million. The following devices were the concern of the retrial: Captivate, Continuum, Droid Charge, Epic 4G, Exhibit 4G, Galaxy Prevail, Galaxy Tab, Gem, Indulge, Infuse 4G, Nexus S 4G, Replenish, and Transform. U.S. courts: Supreme Court decision of First Trial On December 6, 2016, the United States Supreme Court decided 8–0 to reverse the decision from the first trial that awarded nearly $400 million to Apple, and returned the case to the Federal Circuit to define the appropriate legal standard for the "article of manufacture", since that article is not necessarily the smartphone itself but could be just the case and screen to which the design patents relate. U.S. courts: Second Retrial of damages amount from first U.S. trial On Sunday, October 22, 2017, district court judge Lucy Koh ordered a second retrial of damages based upon the limitations imposed by the above decision of the United States Supreme Court. The parties were ordered to propose a schedule for a new trial by Wednesday, October 25. The jury trial for damages concluded on May 24, 2018, awarding Apple $539 million, which included $399 million for Samsung products sold that infringed on the patents. U.S. courts: Second U.S. trial Apple filed a new U.S.
lawsuit in February 2012, asserting Samsung's violation of five Apple patents across Samsung's product lines for its Admire, Galaxy Nexus, Galaxy Note, Galaxy Note II, Galaxy S II, Galaxy S II Epic 4G Touch, Galaxy S II Skyrocket, Galaxy S III, Galaxy Tab II 10.1, and Stratosphere. Samsung responded with a counterclaim asserting that Apple had infringed two of its patents, covering nine phones and tablets, across its iPhone 4, iPhone 4S, iPhone 5, iPad 2, iPad 3, iPad 4, iPad mini, iPod touch (5th generation), iPod touch (4th generation), and MacBook Pro lines. Samsung stood to gain US$6 million if the jury ruled in its favor, while Apple was seeking US$2 billion in damages and could proceed with similar lawsuits against other Android handset makers, as the relevant patent issues extended beyond Samsung's software technology. The second trial was scheduled for March 2014 and jury selection occurred on March 31, 2014. Judge Koh referred to the new lawsuit as "one action in a worldwide constellation of litigation between the two companies." The trial began in early April, and the decision was delivered on May 2, 2014: Samsung was instructed to pay US$119.6 million to Apple for smartphone patent violations, a compensatory amount that was termed a "big loss" by The Guardian's technology team, which described the victory as "pyrrhic." The jury found that Samsung had infringed upon two Apple patents, and Brian Love, assistant professor at the Santa Clara University law school, explained: "This amount is less than 10% of the amount Apple requested, and probably doesn't surpass by too much the amount Apple spent litigating this case." Apple's official response was a reaffirmation that "Samsung willfully stole" from the Cupertino, US-based corporation; however, Apple's lawyers claimed that a technical mistake had been made by the jury, and Koh ordered the jurors to return on May 5, 2014, to resolve an issue potentially worth several hundred thousand dollars. The jury also found Apple liable for infringing one of Samsung's patents, and the South Korean corporation, which had initially sought US$6 million in damages, was awarded US$158,400. In the wake of the verdict, Judge Koh was to decide whether a sales ban on Samsung products would be implemented, an outcome deemed highly unlikely by legal experts, such as Rutgers Law School's Michael Carrier, after the verdict announcement. Samsung appealed the jury verdict to a three-judge panel of the United States Court of Appeals for the Federal Circuit in 2015, and won in February 2016, with the panel nullifying the jury verdict. The panel unanimously argued that one patent cited by Apple was not infringed by Samsung, while two others, related to autocorrect and "slide to unlock" features, were invalid based on existing prior art. Apple requested an en banc hearing from the full Federal Circuit, which ruled in Apple's favor in October 2016 by an 8-3 decision, restoring the $120 million award.
While the original three judges maintained their opinion from the previous hearing, the remaining judges argued that the three-member panel had dismissed the body of evidence from the jury trial supporting the conclusions that Apple's patents were valid and that Samsung had infringed them. Samsung appealed to the Supreme Court, but the Court announced in November 2017 that it would not hear the appeal, leaving the Federal Circuit's ruling in Apple's favor in place. As of mid-2018, the trials over the patent dispute had been resolved, with Apple awarded $539 million.
**Ankle replacement** Ankle replacement: Ankle replacement, or ankle arthroplasty, is a surgical procedure to replace the damaged articular surfaces of the human ankle joint with prosthetic components. This procedure is becoming the treatment of choice for patients requiring arthroplasty, replacing the conventional use of arthrodesis, i.e. fusion of the bones. The restoration of range of motion is the key feature in favor of ankle replacement with respect to arthrodesis. However, clinical evidence of the superiority of the former has only been demonstrated for particular isolated implant designs. History: Since the early 1970s, the disadvantages of ankle arthrodesis and the excellent results attained by arthroplasty at other human joints have encouraged numerous prosthesis designs for the ankle as well. In the following decade, the disappointing results of long-term follow-up clinical studies of the pioneering designs left ankle arthrodesis as the surgical treatment of choice for these patients. More modern designs have produced better results, contributing to a renewed interest in total ankle arthroplasty over the past decade. Nearly all of the pioneering designs featured two components; these designs have been categorized as incongruent or congruent, according to the shape of the two articular surfaces. After the early unsatisfactory results of the two-component designs, most of the more recent designs feature three components, with a polyethylene meniscal bearing interposed between the two metal bone-anchored components. This meniscal bearing should allow full congruence at the articular surfaces in all joint positions in order to minimize wear and deformation of the components. Poor understanding of the functions of the structures guiding ankle motion in the natural joint (ligaments and articular surfaces), and poor restoration of these functions in the replaced joint, may be responsible for the complications and revisions. Prosthetic design: The main objectives of the prosthetic design for ankle joint replacements are: to replicate original joint function, by restoring appropriate kinematics at the replaced joint; to permit good fixation of the components, which would involve an appropriate load transfer to the bone and minimum risk of loosening; to guarantee longevity of the implant, which is mainly related to wear resistance; and to attain feasibility of implantation given the small dimensions of the joint. As with other joint replacements, the traditional dilemma between mobility and congruency must be addressed. Unconstrained or semiconstrained designs allow the necessary mobility but require incongruent contact, thereby giving rise to large contact stresses and potentially high wear rates. Conversely, congruent designs produce large contact areas with low contact stresses but transmit undesirable constraint forces that can overload the fixation system at the bone-component interface.
Indications: The indications for the operation in general are as follows: patients with primary or posttraumatic osteoarthritis and relatively low functional demand; patients with severe ankle rheumatoid arthritis but not severe osteoporosis of the ankle; and patients suitable for arthrodesis but rejecting it. The general contraindications are: varus or valgus deformity greater than 15 degrees, severe bony erosion, or severe talus subluxation; substantial osteoporosis or osteonecrosis, particularly affecting the talus; previous or current infections of the foot; vascular disease or severe neurologic disorders; and previous arthrodesis of the ipsilateral hip or knee or severe deformities of these joints. Other potential contraindications, such as capsuloligamentous instability and hindfoot or forefoot deformities affecting correct posture, are not considered relevant if resolved before or during this surgery. Outcome: The outcome of an ankle replacement includes factors like ankle function, pain, revision and implant survival. Outcome studies on modern designs show a five-year implant survival rate between 67% and 94%, and ten-year survival rates around 75%. Mobile-bearing designs have enabled implant survival rates to continue to improve, reaching as high as 95% at five years and 90% at ten years. Ankle replacements have a 30-day readmission rate of 2.2%, which is similar to that of knee replacement but lower than that of total hip replacement. 6.6% of patients undergoing primary total ankle replacement (TAR) require a reoperation within 12 months of the index procedure. Early revision rates are significantly higher in low-volume centres. Clinical ankle scores, such as the American Orthopaedic Foot and Ankle Society (AOFAS) score or the Manchester-Oxford Foot & Ankle Questionnaire, are outcome rating systems for ankle replacements. Further outcome instruments include radiographic assessment of component stability and migration, and the assessment of functionality in daily life using gait analysis or videofluoroscopy; the latter is a tool for three-dimensional measurement of the position and orientation of implanted prosthetic components at the replaced joint. Research comparing the effects of ankle replacement against ankle fusion (the TARVA study) is ongoing in the United Kingdom, as a randomised controlled trial to compare the clinical and cost-effectiveness of these treatments. The TARVA protocol has been published in the British Medical Journal.
**Division lattice** Division lattice: The division lattice is an infinite complete bounded distributive lattice whose elements are the natural numbers ordered by divisibility. Its least element is 1, which divides all natural numbers, while its greatest element is 0, which is divisible by all natural numbers. The meet operation is greatest common divisor while the join operation is least common multiple. The prime numbers are precisely the atoms of the division lattice, namely those natural numbers divisible only by themselves and 1. Division lattice: For any square-free number n, its divisors form a Boolean algebra that is a sublattice of the division lattice. The elements of this sublattice are representable as the subsets of the set of prime factors of n. The converse also holds, namely that every sublattice of the division lattice that forms a Boolean algebra is isomorphic to the lattice of divisors of a square-free number.
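The lattice operations are easy to experiment with directly. The following is a minimal Python sketch (an illustration, not from the source article) that uses gcd as the meet and lcm as the join and spot-checks the distributive law on small numbers:

```python
# Meet and join in the division lattice: gcd and lcm.
from math import gcd

def lcm(a: int, b: int) -> int:
    # 0 is the top element: it is divisible by everything, so join with 0 is 0
    return 0 if a == 0 or b == 0 else a * b // gcd(a, b)

meet = gcd  # 1 is the bottom element: gcd(n, 1) == 1

# Spot-check distributivity: a meet (b join c) == (a meet b) join (a meet c)
for a in range(1, 25):
    for b in range(1, 25):
        for c in range(1, 25):
            assert meet(a, lcm(b, c)) == lcm(meet(a, b), meet(a, c))

# The divisors of the square-free number 30 form a Boolean algebra,
# mirroring the subsets of its prime factors {2, 3, 5}:
print(sorted(d for d in range(1, 31) if 30 % d == 0))  # [1, 2, 3, 5, 6, 10, 15, 30]
```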
**OR10AG1** OR10AG1: Olfactory receptor 10AG1 is a protein that in humans is encoded by the OR10AG1 gene. Olfactory receptors interact with odorant molecules in the nose, to initiate a neuronal response that triggers the perception of a smell. The olfactory receptor proteins are members of a large family of G-protein-coupled receptors (GPCR) arising from single coding-exon genes. Olfactory receptors share a 7-transmembrane domain structure with many neurotransmitter and hormone receptors and are responsible for the recognition and G protein-mediated transduction of odorant signals. The olfactory receptor gene family is the largest in the genome. The nomenclature assigned to the olfactory receptor genes and proteins for this organism is independent of other organisms.
**Opacity (optics)** Opacity (optics): Opacity is the measure of impenetrability to electromagnetic or other kinds of radiation, especially visible light. In radiative transfer, it describes the absorption and scattering of radiation in a medium, such as a plasma, dielectric, shielding material, glass, etc. An opaque object is neither transparent (allowing all light to pass through) nor translucent (allowing some light to pass through). When light strikes an interface between two substances, in general some may be reflected, some absorbed, some scattered, and the rest transmitted (also see refraction). Reflection can be diffuse, for example light reflecting off a white wall, or specular, for example light reflecting off a mirror. An opaque substance transmits no light, and therefore reflects, scatters, or absorbs all of it. Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including opacity, transparency and translucency among the involved aspects. Both mirrors and carbon black are opaque. Opacity depends on the frequency of the light being considered. For instance, some kinds of glass, while transparent in the visual range, are largely opaque to ultraviolet light. More extreme frequency-dependence is visible in the absorption lines of cold gases. Opacity can be quantified in many ways; for example, see the article mathematical descriptions of opacity. Opacity (optics): Different processes can lead to opacity, including absorption, reflection, and scattering. Etymology: Late Middle English opake, from Latin opacus 'darkened'. The current spelling (rare before the 19th century) has been influenced by the French form. Radiopacity: Radiopacity is preferentially used to describe opacity to X-rays. In modern medicine, radiodense substances are those that will not allow X-rays or similar radiation to pass. Radiographic imaging has been revolutionized by radiodense contrast media, which can be passed through the bloodstream, the gastrointestinal tract, or into the cerebral spinal fluid and utilized to highlight CT scan or X-ray images. Radiopacity is one of the key considerations in the design of various devices such as guidewires or stents that are used during radiological intervention. The radiopacity of a given endovascular device is important since it allows the device to be tracked during the interventional procedure. Quantitative definition: The words "opacity" and "opaque" are often used as colloquial terms for objects or media with the properties described above. However, there is also a specific, quantitative definition of "opacity", used in astronomy, plasma physics, and other fields, given here. In this use, "opacity" is another term for the mass attenuation coefficient (or, depending on context, the mass absorption coefficient) κν at a particular frequency ν of electromagnetic radiation.
Quantitative definition: More specifically, if a beam of light with frequency ν travels through a medium with opacity κν and mass density ρ, both constant, then the intensity is reduced with distance x according to the formula

I(x) = I0 exp(−κν ρ x)

where x is the distance the light has traveled through the medium, I(x) is the intensity of light remaining at distance x, and I0 is the initial intensity of light, at x = 0. For a given medium at a given frequency, the opacity has a numerical value that may range between 0 and infinity, with units of length²/mass. Quantitative definition: Opacity in air pollution work refers to the percentage of light blocked instead of the attenuation coefficient (aka extinction coefficient), and varies from 0% light blocked to 100% light blocked:

opacity = 100% × (1 − I/I0)

Planck and Rosseland opacities It is customary to define an average opacity, calculated using a certain weighting scheme. Planck opacity (also known as the Planck mean absorption coefficient) uses the normalized Planck black-body radiation energy density distribution, Bν(T), as the weighting function, and averages κν directly:

κPl = ∫ κν Bν(T) dν / ∫ Bν(T) dν = (π / σT⁴) ∫ κν Bν(T) dν

where σ is the Stefan–Boltzmann constant. Quantitative definition: Rosseland opacity (after Svein Rosseland), on the other hand, uses a temperature derivative of the Planck distribution, u(ν,T) = ∂Bν(T)/∂T, as the weighting function, and averages κν⁻¹:

1/κRoss = ∫ κν⁻¹ (∂Bν/∂T) dν / ∫ (∂Bν/∂T) dν

The photon mean free path is λν = (κν ρ)⁻¹. The Rosseland opacity is derived in the diffusion approximation to the radiative transport equation. It is valid whenever the radiation field is isotropic over distances comparable to or less than a radiation mean free path, such as in local thermal equilibrium. In practice, the mean opacity for Thomson electron scattering is

κes ≈ 0.2 (1 + X) cm²/g

where X is the hydrogen mass fraction. For nonrelativistic thermal bremsstrahlung, or free-free transitions, assuming solar metallicity, it is approximately

κff ≈ 0.64 × 10²³ ρ T^(−7/2) cm²/g (with ρ in g/cm³ and T in kelvins).

The Rosseland mean attenuation coefficient is the Rosseland average applied to the total extinction:

1/κ = ∫ (κν,es + κν,ff)⁻¹ (∂Bν/∂T) dν / ∫ (∂Bν/∂T) dν.
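As an illustration of the attenuation law above, here is a minimal Python sketch (the numeric values are illustrative choices, not from the article) computing the surviving intensity fraction:

```python
# I(x) = I0 * exp(-kappa * rho * x): exponential attenuation in a uniform medium.
import math

kappa = 0.34   # opacity in cm^2/g, e.g. Thomson scattering with X ~ 0.7
rho = 1.0      # mass density in g/cm^3 (illustrative value)

for x_cm in (0.0, 1.0, 2.0, 5.0, 10.0):
    fraction = math.exp(-kappa * rho * x_cm)   # I(x) / I0
    print(f"x = {x_cm:5.1f} cm   I/I0 = {fraction:.4f}")
```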
**Classical Nahuatl grammar** Classical Nahuatl grammar: The grammar of Classical Nahuatl is agglutinative, head-marking, and makes extensive use of compounding, noun incorporation and derivation. That is, it can add many different prefixes and suffixes to a root until very long words are formed. Very long verbal forms, or nouns created by incorporation and accumulation of prefixes, are common in literary works. New words can thus be easily created. Orthography used in this article: Vowel length was phonologically distinctive in Classical Nahuatl, but vowel length was rarely transcribed in manuscripts, leading to occasional difficulties in discerning whether a given vowel was long or short. In this article, long vowels are indicated with a macron above the vowel letter: <ā, ē, ī, ō>. Another feature which is rarely marked in manuscripts is the saltillo or glottal stop ([ʔ]). In this article, the saltillo is indicated with an h following a vowel. The grammarian Horacio Carochi (1645) represented the saltillo by marking diacritics on the preceding vowel: a grave accent on nonfinal vowels <à, ì, è, ò> and a circumflex on final vowels <â, î, ê, ô>. Carochi is almost alone among colonial-era grammarians in consistently representing both saltillo and vowel length in transcription, even though they are both essential to a proper understanding of Classical Nahuatl. Morphophonology: The phonological shapes of Nahuatl morphemes may be altered in particular contexts, depending on the shape of the adjacent morphemes or their position in the word. Assimilation Where a morpheme ending in a consonant is followed by a morpheme beginning in a consonant, one of the two consonants often undergoes assimilation, adopting features of the other consonant. Almost all doubled consonants in Nahuatl are produced by the assimilation of two different consonants from different morphemes. Doubled consonants within a single morpheme are rare, a notable example being the verb -itta "see", which possibly indicates a fossilized double morpheme. Morphology: The words of Nahuatl can be divided into three basic functional classes: verbs, nouns and particles. Adjectives exist, but they generally behave like nouns, and there are very few adjectives that are not derived from either verbal or nominal roots. The few adverbs that can be said to exist fall into the class of particles. Morphology: Nouns Classical Nahuatl is a non-copulative language, meaning it lacks a verb meaning 'to be.' Instead, this meaning is conveyed by simply inflecting a noun as a verb. In other words, from the perspective of an English speaker, one can describe each Classical Nahuatl noun as a specific verb meaning "to be X." Example: ti + amolnamacac 'soap seller' becomes tamolnamacac, meaning 'you are a soap seller' (see verb inflection below). Morphology: The noun is inflected for two basic contrasting categories: possessedness (non-possessed contrasts with possessed) and number (singular contrasts with plural). Nouns belong to one of two classes: animate or inanimate. Originally the grammatical distinction between these was that inanimate nouns had no plural forms, but in most modern dialects both animate and inanimate nouns are pluralizable. Nominal morphology is mostly suffixing. Some irregular formations exist. Morphology: Possessedness Non-possessed nouns take a suffix called the absolutive. This suffix takes the form -tl after vowels (ā-tl, "water") and -tli after consonants, which assimilates with a final /l/ on the root (tōch-tli, "rabbit", but cal-li, "house").
Some nouns are irregular and, for the absolutive suffix, instead take -in (mich-in, "fish"). In most derived forms, any of these suffixes would drop: tōch-cal-li, "rabbit-hole", mich-matla-tl, "fishing net". Possessed nouns do not take the absolutive suffix (see Noun inflection below), but do receive a prefix to denote the possessor. Morphology: Number The absolutive singular suffix has three basic forms: -tl/-tli (the latter assimilating to -li), -in, and, for some irregular nouns, no suffix at all. The absolutive plural suffix has three basic forms: -tin, -meh, or just a final glottal stop -h. Some plurals are also formed with reduplication of the noun's first or second syllable, with the reduplicated vowel long. The possessive singular suffix has two basic forms: -uh (on stems ending in a vowel) or -Ø (on stems ending in a consonant). The possessive plural suffix has the form -huān. Only animate nouns can take a plural form. These include most animate living beings, but also words like tepētl ("mountain"), citlālin ("star") and some other phenomena. The plural is not totally stable, and in many cases several different forms are attested. Noun inflection Possessor prefixes If a given prefix ends with a vowel (apart from the 3rd person singular), that vowel may be elided depending on the following sound. The vowel will only be elided if the word's stem begins with a "stronger" vowel. Generally, the hierarchy of vowels, from strongest to weakest, is a/e, o, i. Example: to + amolli becomes tamol, meaning 'our soap'. Some other categories can be inflected on the noun, such as the honorific, formed with the suffix -tzin. Morphology: Inalienable possession The suffix -yo (the same suffix as the abstract/collective -yō(tl)) may be added to a possessed noun to indicate that it is a part of its possessor, rather than just being owned by it. For example, both nonac and nonacayo (possessed forms of nacatl) mean "my meat", but nonac may refer to meat that one has to eat, while nonacayo refers to the flesh that makes up one's body. This is known as inalienable, integral or organic possession. Morphology: Derivational morphology -tia derives from noun X a verb with an approximate meaning of "to provide with X" or "to become X." -huia derives from noun X a verb with an approximate meaning of "to use X" or "to provide with X." -yōtl derives from a noun X a noun with an abstract meaning of "X-hood" or "X-ness." -yoh derives from a noun X a noun with a meaning of "thing full of X" or "thing with a lot of X." Verbs All verbs are marked with prefixes in order to agree with the person of the subject, and, where there is one, the object. In addition, verbs take a special suffix to mark plural subjects (only animates take plural agreement). Morphology: An example of an intransitive verb, with subject marking: niyōli 'I live,' tiyōli 'you (singular) live,' yōli 'he, she, it lives,' tiyōlih 'we live,' anyōlih 'you (plural) live,' yōlih 'they live.' Subject and object marking The person prefixes are identical for all tenses and moods (with the exception of the imperative, whose prefix is x(i)-), but the plural number suffix varies according to tense or mood. As the yōli paradigm above illustrates, the subject prefixes are ni-, ti- and Ø- (no prefix) in the singular, and ti-, an- and Ø- (each combined with the plural suffix) in the plural. Morphology: Note that the prefix ti- means 'you (singular)' with no number suffix on the verb, but ti- plus the plural suffix (in the present -h) means 'we'. The imperative prefixes can only be used in the second person; for other persons, use the optative mood.
Morphology: As mentioned previously, verbal subject prefixes can also be used with nouns to create a nominal predicate: nicihuātl 'I am a woman,' toquichtli 'you are a man,' nimēxicah 'we are Mexica.' Transitive and bitransitive verbs take a distinct set of prefixes (after the subject marking, but before the stem) to mark the object: the specific object prefixes are nēch- 'me', mitz- 'you (singular)', qui-/c- 'him, her, it', tēch- 'us', amēch- 'you (plural)', and quim-/quin- 'them'. The object must always be marked on a transitive verb. If the object is unknown or is simply 'things/people in general', the unspecified object prefixes may be used. Compare niccua 'I am eating it (i.e. something specific)' to nitlacua 'I am eating'. Morphology: Plural suffixes are never used to mark plural objects, only plural subjects. Unspecified objects are never plural. Morphology: A Classical Nahuatl verb thus has the following structure: SUBJECT PREFIX + OBJECT PREFIX + VERB STEM + SUBJECT NUMBER (example: ti-quim-itta-h, we – them – see – plural, i.e., 'we see them'). Direct arguments of the verb – that is, subject and object – are obligatorily marked on the verb. If there are both direct and indirect objects (which are not morphologically distinguished), only one may be marked on the verb. Morphology: Other inflectional categories may be optionally marked, for example direction of motion. Other inflections include the applicative and causative, both valency-changing operations; that is, they increase the number of arguments associated with a verb, transforming an intransitive verb into a transitive one, or a transitive verb into a bitransitive one. Morphology: Tense and mood inflection The different tenses and moods are formed, somewhat as in Latin or Ancient Greek, by adding the person inflections to the appropriate verbal base or stem. Base 1 is the normal or citation form of the verb, also known as the imperfective stem, with no special suffixes. Base 2, also known as the perfective stem, is usually shorter in form than base 1, often dropping a final vowel, though its formation varies. Base 3, the hypothetical stem, is normally the same as base 1, except for verbs whose stem ends in two vowels, in which case the second vowel is dropped, and the stem vowel is often lengthened in front of a suffix. Morphology: Imperfective tenses The present tense is formed from base 1. The plural subject suffix is -h. Examples: nicochi 'I am sleeping,' tlahtoah 'they are speaking,' nicchīhua 'I am making it.' The imperfect is similar in meaning to the imperfect in the Romance languages. It is formed with base 1, plus -ya or -yah in the plural. Sometimes the final vowel of the stem is lengthened. Examples: nicochiya 'I was sleeping,' tlahtoāyah 'they used to speak,' nicchīhuaya 'I was making it.' The habitual present, customary present, or quotidian tense is formed from base 1. The suffix is -ni, with the stem vowels sometimes lengthened before it. Rather than one specific event, this tense expresses the subject's tendency or propensity to repeatedly or habitually perform the same action over time (e.g. miquini 'mortal,' lit. '(one who is) prone to die'). It is frequently translated into English with a noun or noun phrase, for example: cuīcani 'one who sings, singer,' tlahcuiloāni (from ihcuiloa 'write, paint') 'scribe,' or tlahtoāni (from ihtoa 'speak'), the title for the ruler of a Mexica city. Plural formation of this form is variable. It can be in -nih or -nimeh. In some cases, the plural does not use -ni at all but instead a preterite ending, as with tlahtohqueh, the plural of tlahtoāni, or tlahcuilohqueh, the plural of tlahcuiloāni.
These preterite forms are also used to create possessive forms. Morphology: Perfective tenses The preterite or perfect tense is similar in meaning to the English simple past or present perfect. The singular often ends in -h or -c, while the plural suffix is -queh. The preterite is often accompanied by the prefix ō- (sometimes called the augment, or antecessive prefix). The function of this prefix is to mark that the action of the verb is complete at the time of speaking (or, in a subordinate clause, at the time of the action described by the main verb). The augment is frequently absent in mythic or historical narratives. Examples: ōnicoch 'I slept,' ōtlatohqueh 'they spoke,' ōnicchīuh 'I made it.' The preterite can also be used to create agentive constructions. Morphology: The pluperfect uses the augment, but with the suffix -ca in the singular and -cah in the plural. The pluperfect roughly corresponds with the English past perfect, although more precisely it indicates that a particular action or state was in effect in the past but has been undone or reversed at the time of speaking. Examples: ōnicochca 'I had slept,' ōtlatohcah 'they had spoken,' ōnicchīuhca 'I had made it.' Morphology: The vetitive or admonitive mood issues a warning that something may come to pass which the speaker does not desire, and by implication steps should be taken to avoid this (compare the English conjunction lest). The negative of this mood simply warns that a non-occurrence of the action is undesirable. If the preterite singular ends in -c, this is replaced by the glottal stop/saltillo. In the plural the ending is -(h)tin or -(h)tih. The admonitive is used in conjunction with the particles mā or nēn. Examples: mā nicoch 'be careful, lest I sleep,' mā tlatohtin 'watch out, they may speak,' mā nicchīuh 'don't let me make it.' Hypothetical tenses The future tense has a suffix -z in the singular and -zqueh in the plural. Examples of the future: nicochiz 'I will sleep,' tlahtōzqueh 'they will speak,' nicchīhuaz 'I will make it.' The imperative and optative use the plural suffix -cān. The imperative uses the special imperative subject prefixes, available only in the second person; the optative uses the normal subject prefixes (effectively it is the same mood, but outside of the second person). The imperative is used for commands, the optative for wishes or desires, both used in conjunction with particles: mā nicchīhua 'let me make it!' The conditional, irrealis, or counterfactual are all names for the same verbal mood. The suffix is -zquiya (sometimes spelled -zquia) in the singular and -zquiyah in the plural. The basic meaning is that a state or action that was intended or desired did not come to pass. It can be translated as 'would have,' 'almost,' etc. Examples: nicochizquiya 'I would have slept,' tlahtōzquiyah 'they would have spoken,' nicchīhuazquiya 'I would have made it.' Applicative The applicative construction adds an argument to the verb. The role of the added argument can be benefactive, malefactive, indirect object or similar. It is formed by the suffix -lia. Morphology: niquittilia "I see it for him" Causative The causative construction also adds an argument to the verb. This argument is an agent causing the object to undertake the action of the verb. It is formed by the suffix -tia.
Morphology: niquittatia "I make him see it / I show it to him" Unspecified Subject/Impersonal/Passive This construction, based on what Andrews calls the "nonactive" stem, is used for the passive voice of transitive verbs and for the "unspecified subject" or "impersonal" construction of both transitive and intransitive verbs. It is derived by adding to an imperfective active stem one of the simple endings -ō, -lō or -hua, or one of the combinations -o-hua, -lo-hua or -hua-lō (a free variant with -hua). Note that -(l)ō is shortened to -(l)o word-finally, according to the general phonological rule that long vowels are reduced word-finally or before a glottal stop. Morphology: The rules for which suffix is added to a given verb stem involve both phonology and transitivity. The suffix -lō is the most common, whereas -lo-hua (note the short vowel, also in -o-hua) is suffixed only to a small number of irregular verbs. In the case of the irregular compound verbs huī-tz "come," and tla-(i)tqui-tz and tla-huīca-tz, both meaning "bring something," -lo-hua is suffixed to the embedded verb, i.e. before -tz. Morphology: huītz / tlatquitz / tlahuīcatz > huīlohuatz / itquilohuatz / huīcalohuatz. For transitive verbs being made passive, the subject is discarded and the last-added object becomes the subject. Morphology: tiquincui "you (s.) take them (something animate, e.g. dogs)" > cuīloh "they are taken" tinēchincuīlia "you (s.) take them (animate) from me" > niquincuīlīlo "I am deprived of them, someone takes them from me" (note that the 3rd-person plural object prefix, contracted to -im-/-in- after -nēch-, returns to its full form -quim-/-quin- when a preceding object prefix is removed). For the impersonal or "unspecified subject" construction, meaning that "one does" or "people do" or sometimes "everyone does" (the action of the verb), the nonactive stem of an intransitive verb is used as is, since an intransitive verb cannot be passive; a transitive verb takes the nonspecific object prefixes -tē- and/or -tla- and the secondary reflexive object prefix -ne-, but cannot take specific object prefixes. Morphology: miqui "he dies" > micohua "there is dying, people are dying" cuīcayah "they (specific people) were singing" > cuīcōya "people were singing, everyone was singing, there was singing" tizahuinih "we customarily abstain from food" > titozahuanih "we customarily make ourselves abstain from food, we customarily fast" (reflexive causative, more common since it implies intentionality) > nezahualo "people customarily fast, everyone customarily fasts" anquintlacualtiah "you (p.)
feed them" > tētlacualtīlo "people feed people, people are fed" Directional affixes Deixis: -on- "away from the speaker": on- + tlahtoa "to speak" = ontlahtoa "he/she/it speaks towards there"; -huāl- "towards the speaker": huāl- + tlahtoa "to speak" = huāllahtoa "he/she/it speaks towards here". Introvert: Imperfective: -qui "comes towards the speaker in order to X": qui + itta "to see" + -qui = quittaqui "he/she/it will come here to see it"; Perfective: -co "has come towards the speaker in order to X": qui + itta "to see" + -co = quittaco "he/she/it has come here to see it". Extrovert: Imperfective: -tīuh "goes away from the speaker in order to X": qui + itta "to see" + -tīuh = quittatīuh "he/she/it will go there to see it"; Perfective: -to "has gone away from the speaker in order to X": qui + itta "to see" + -to = quittato "he/she/it has gone there to see it". Derivational A number of different suffixes exist to derive nouns from verbs: -lli, used to derive passivized nouns from verbs; -liztli, used to derive abstract nouns from verbs; -qui, used to derive agentive nouns from verbs. Verbal compounds Two verbs can be compounded with the ligature morpheme -ti-. Relational Nouns and Locatives Spatial and other relations are expressed with relational nouns. Some locative suffixes also exist. Noun Incorporation Noun incorporation is productive in Classical Nahuatl, and different kinds of material can be incorporated: body parts, instruments, and objects. Syntax: The particle in is important in Nahuatl syntax and is used as a kind of definite article and also as a subordinating particle and a deictic particle, in addition to having other functions. Non-configurationality Classical Nahuatl can be classified as a non-configurational language, allowing many different kinds of word orders, even splitting noun phrases. VSO basic word order The basic word order of Classical Nahuatl is verb-initial and often considered to be VSO, but some scholars have argued that it is VOS. However, the language being non-configurational, all word orders are allowed and are used to express different kinds of pragmatic relations, such as thematization and focus. Nouns as predicates An important feature of Classical Nahuatl is that any noun can function as a standalone predicate. For example, calli is commonly translated "house" but could also be translated "(it) is a house". Syntax: As predicates, nouns can take the verbal subject prefixes (but not tense inflection). Thus, nitēuctli means "I am a lord", with the regular first person singular subject ni- attached to the noun tēuctli "lord". Similarly tinocihuāuh means "you are my wife", with the possessive noun nocihuāuh "my wife" attached to the subject prefix ti- "you" (singular). This construction is also seen in the name Tītlācahuān, meaning "we are his slaves", a name for the god Tezcatlipoca. Number system: Classical Nahuatl has a vigesimal or base-20 number system. In the pre-Columbian Nahuatl script, the numbers 20, 400 (20²) and 8,000 (20³) were represented by a flag, a feather, and a bag, respectively. It also makes use of numeral classifiers, similar to languages such as Chinese and Japanese. Number system: Basic numbers Compound numbers Multiples of 20, 400 or 8,000 are formed by replacing cēm- or cēn- with another number. E.g.
ōmpōhualli "40" (2×20), mahtlāctzontli "4,000" (10×400), nāuhxiquipilli "32,000" (4×8,000). The numbers in between those above (11 to 14, 16 to 19, 21 to 39, and so forth) are formed by following the larger number with a smaller number which is to be added to the larger one. The smaller number is prefixed with om- or on-, or, in the case of larger units, preceded by īpan "on it" or īhuān "with it". E.g. mahtlāctli oncē "11" (10+1), caxtōlomēyi "18" (15+3), cēmpōhualmahtlāctli omōme "32" (20+10+2); cēntzontli caxtōlpōhualpan nāuhpōhualomōme "782" (1×400+15×20+4×20+2). Number system: Classifiers Depending on the objects being counted, Nahuatl may use a classifier or counter word. These include: -tetl for small, round objects (literally "rock"); -pāntli for counting rows; -tlamantli for foldable or stackable things; -ōlōtl for roundish or oblong-shaped things (literally "maize cob"). Which classifier a particular object takes is loose and somewhat arbitrary. Ordinal numbers Ordinal numbers (first, second, third, etc.) are formed by preceding the number with ic or inic.
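To make the arithmetic behind such compound numerals concrete, here is a toy Python sketch; it illustrates only the base-20 decomposition, not actual Nahuatl morphology:

```python
# Decompose a number into Nahuatl counting units: 8,000 (xiquipilli),
# 400 (tzontli), 20 (pohualli) and ones. Note that the actual numeral words
# further break the coefficients down using sub-bases of 5 and 15, so 782
# is expressed as 1x400 + 15x20 + 4x20 + 2 rather than 1x400 + 19x20 + 2.

def vigesimal(n: int):
    parts = []
    for unit in (8000, 400, 20, 1):
        q, n = divmod(n, unit)
        if q:
            parts.append((q, unit))
    return parts

print(vigesimal(782))     # [(1, 400), (19, 20), (2, 1)]
print(vigesimal(32000))   # [(4, 8000)], i.e. nāuhxiquipilli
```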
**Aluminide** Aluminide: An aluminide is a compound of aluminium with more electropositive elements. Since aluminium is near the nonmetals on the periodic table, it can bond with metals differently from the way other metals do. The properties of an aluminide are intermediate between those of a metal alloy and those of an ionic compound. Examples: Magnesium aluminide, MgAl Titanium aluminide, TiAl Iron aluminides, including Fe3Al and FeAl Nickel aluminide, Ni3Al
**Code page 950** Code page 950: Code page 950 is the code page used on Microsoft Windows for Traditional Chinese. It is Microsoft's implementation of the de facto standard Big5 character encoding. The code page is not registered with IANA, and hence, it is not a standard to communicate information over the internet, although it is usually labelled simply as big5, including by Microsoft library functions. Terminology and variants: The major difference between Windows code page 950 and "common" (non-vendor-specific) Big5 is the incorporation of a subset of the ETEN extensions to Big5 at 0xF9D6 through 0xF9FE (comprising the seven Chinese characters 碁, 銹, 裏, 墻, 恒, 粧, and 嫺, followed by 34 box drawing characters and block elements). The ranges used by some of the other ETEN extended characters are instead defined as end-user defined (private use) characters. IBM's CCSID 950 comprises single byte code page 1114 (CCSID 1114) and double byte code page 947 (CCSID 947), and, while also a Big5 variant, is somewhat different from Microsoft's code page 950, incorporating some of the ETEN extensions for lead bytes 0xA3, 0xC6, 0xC7 and 0xC8, while omitting those with lead byte 0xF9 (which Microsoft includes), mapping them instead to the Private Use Area as user-defined characters. It also includes two non-ETEN extension regions with trail bytes 0x81–A0, i.e. outside the usual Big5 trail byte range but similar to the Big5+ trail byte range: area 5 has lead bytes 0xF2–F9 and contains IBM-selected characters, while area 9 has lead bytes 0x81–8C and is a user-defined region. Microsoft updated their version of code page 950 in 2000, adding the euro sign (€) at the double-byte code 0xA3E1. IBM refers to the euro sign update of their Big-5 variant as CCSID 1370 (which includes both single-byte (0x80) and double-byte euro signs). It comprises single byte code page 1114 (CCSID 5210) and double byte code page 947 (CCSID 21427). For better compatibility with Microsoft's variant in IBM Db2, IBM also defines the pure double-byte Code page 1372 and associated variable-width CCSID 1373, which includes only the double-byte euro sign and matches Microsoft behaviour in which extension regions are included. Single byte codes: IBM additionally assigns single-byte graphical characters in the low byte range. The codes 0x00 through 0x1F and 0x7F may be used for C0 control codes instead, depending on context (compare code page 437, code page 897). As noted above, the single-byte euro sign at 0x80 is not included in IBM CCSIDs 950 or 1373, nor by Microsoft. The rest are parts of a double byte sequence. Private Use Area usage: This mapping is also used in HKSCS where a given glyph is not yet found in the Unicode revision specified.
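Python ships a "cp950" codec that follows Microsoft's mapping, so the double-byte encoding (including the euro sign added in 2000) can be checked directly. A minimal sketch, with the expected byte values noted as assumptions:

```python
# Round-trip a Big5 ideograph and the euro sign through Python's cp950 codec.
text = "中€"                        # U+4E2D plus U+20AC
encoded = text.encode("cp950")
print(encoded.hex(" "))             # expected "a4 a4 a3 e1": 中 at 0xA4A4, € at 0xA3E1
print(encoded.decode("cp950"))      # decodes back to the original string
```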
**Dimorphic root system** Dimorphic root system: A dimorphic root system is a plant root system with two distinct root forms, which are adapted to perform different functions. One of the most common manifestations is in plants with both a taproot, which grows straight down to the water table, from which it obtains water for the plant, and a system of lateral roots, which obtain nutrients from superficial soil layers near the surface. Many plants with dimorphic root systems adapt to the levels of rainfall in the surrounding area, growing many surface roots when there is heavy rainfall and relying on a taproot when rain is scarce. Because of their adaptability to water levels in the surrounding area, most plants with dimorphic root systems live in arid climates with recurring wet and dry periods.
**Ingleton's inequality** Ingleton's inequality: In mathematics, Ingleton's inequality is an inequality that is satisfied by the rank function of any representable matroid. In this sense it is a necessary condition for representability of a matroid over a finite field. Let M be a matroid and let ρ be its rank function; Ingleton's inequality states that for any subsets X1, X2, X3 and X4 in the support of M, the inequality ρ(X1)+ρ(X2)+ρ(X1∪X2∪X3)+ρ(X1∪X2∪X4)+ρ(X3∪X4) ≤ ρ(X1∪X2)+ρ(X1∪X3)+ρ(X1∪X4)+ρ(X2∪X3)+ρ(X2∪X4) is satisfied. Aubrey William Ingleton, an English mathematician, wrote an important paper in 1969 in which he surveyed the representability problem in matroids. Although the article is mainly expository, in this paper Ingleton stated and proved Ingleton's inequality, which has found interesting applications in information theory, matroid theory, and network coding. Importance of inequality: There are interesting connections between matroids, the entropy region and group theory. Some of those connections are revealed by Ingleton's inequality. Perhaps the most interesting application of Ingleton's inequality concerns the computation of network coding capacities. Linear coding solutions are constrained by the inequality, and this has an important consequence: the region of achievable rates using linear network coding can be, in some cases, strictly smaller than the region of achievable rates using general network coding. Proof: Theorem (Ingleton's inequality): Let M be a representable matroid with rank function ρ and let X1, X2, X3 and X4 be subsets of the support set of M, denoted by the symbol E(M). Then: ρ(X1)+ρ(X2)+ρ(X1∪X2∪X3)+ρ(X1∪X2∪X4)+ρ(X3∪X4) ≤ ρ(X1∪X2)+ρ(X1∪X3)+ρ(X1∪X4)+ρ(X2∪X3)+ρ(X2∪X4). To prove the inequality we have to show the following result: Proposition: Let V1, V2, V3 and V4 be subspaces of a vector space V. Then:

(1) dim(V1∩V2∩V3) ≥ dim(V1∩V2) + dim(V3) − dim(V1+V3) − dim(V2+V3) + dim(V1+V2+V3)
(2) dim(V1∩V2∩V3∩V4) ≥ dim(V1∩V2∩V3) + dim(V1∩V2∩V4) − dim(V1∩V2)
(3) dim(V1∩V2∩V3∩V4) ≥ dim(V1∩V2) + dim(V3) + dim(V4) − dim(V1+V3) − dim(V2+V3) − dim(V1+V4) − dim(V2+V4) + dim(V1+V2+V3) + dim(V1+V2+V4)
(4) dim(V1) + dim(V2) + dim(V1+V2+V3) + dim(V1+V2+V4) + dim(V3+V4) ≤ dim(V1+V2) + dim(V1+V3) + dim(V1+V4) + dim(V2+V3) + dim(V2+V4)

where Vi+Vj denotes the sum of the two subspaces. Proof (proposition): We will frequently use the standard vector space identity dim(U) + dim(W) = dim(U+W) + dim(U∩W).

1. Clearly (V1∩V2) + V3 ⊆ (V1+V3) ∩ (V2+V3), so dim((V1∩V2)+V3) ≤ dim((V1+V3)∩(V2+V3)). Expanding both sides with the standard identity, and using dim((V1+V3)+(V2+V3)) = dim(V1+V2+V3), yields (1).
2. Clearly (V1∩V2∩V3) + (V1∩V2∩V4) ⊆ V1∩V2, so dim((V1∩V2∩V3)+(V1∩V2∩V4)) ≤ dim(V1∩V2). Since (V1∩V2∩V3) ∩ (V1∩V2∩V4) = V1∩V2∩V3∩V4, the standard identity yields (2).
3. Substituting the bound (1), and the analogous bound with V4 in place of V3, into (2) yields (3).
4. From (3) and the inclusion V1∩V2∩V3∩V4 ⊆ V3∩V4 we get dim(V3∩V4) ≥ dim(V1∩V2) + dim(V3) + dim(V4) − dim(V1+V3) − dim(V2+V3) − dim(V1+V4) − dim(V2+V4) + dim(V1+V2+V3) + dim(V1+V2+V4). Rewriting dim(V1∩V2) = dim(V1) + dim(V2) − dim(V1+V2) and dim(V3∩V4) = dim(V3) + dim(V4) − dim(V3+V4) and rearranging the terms yields (4).♣

Proof (Ingleton's inequality): Suppose that M is a representable matroid and let A = [v1 v2 … vn] be a matrix such that M = M(A). For X, Y ⊆ E(M) = {1, 2, …, n}, define U = <{vi : i ∈ X}>, the span of the vectors with indices in X, and define W = <{vj : j ∈ Y}> accordingly. If U = <{u1, u2, …, um}> and W = <{w1, w2, …, wr}>, then clearly <{u1, u2, …, um, w1, w2, …, wr}> = U + W. Hence ρ(X∪Y) = dim <{vi : i ∈ X} ∪ {vj : j ∈ Y}> = dim(U + W). Finally, defining Vi = <{vr : r ∈ Xi}> for i = 1, 2, 3, 4, the theorem follows by applying item (4) of the above proposition.♣
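Since the rank function of a represented matroid is just the column rank of a matrix, the inequality can be spot-checked numerically. A minimal sketch using random integer matrices over the rationals (an illustration, not a proof):

```python
# Spot-check Ingleton's inequality for rho(X) = rank of the columns of A indexed by X.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-2, 3, size=(4, 8))          # columns v1..v8 represent the matroid

def rho(cols):
    cols = sorted(cols)
    return int(np.linalg.matrix_rank(A[:, cols])) if cols else 0

n = A.shape[1]
for _ in range(200):
    # sample four random subsets X1..X4 of the ground set {0, ..., n-1}
    X1, X2, X3, X4 = ({i for i in range(n) if rng.random() < 0.5} for _ in range(4))
    lhs = rho(X1) + rho(X2) + rho(X1 | X2 | X3) + rho(X1 | X2 | X4) + rho(X3 | X4)
    rhs = (rho(X1 | X2) + rho(X1 | X3) + rho(X1 | X4)
           + rho(X2 | X3) + rho(X2 | X4))
    assert lhs <= rhs
print("Ingleton's inequality held on all sampled subsets")
```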
**SnackWell effect** SnackWell effect: The SnackWell effect is a phenomenon whereby dieters eat more of a low-calorie cookie, such as SnackWells, than they otherwise would of a normal cookie. Also known as moral license, it describes the way people go overboard once they are given a free pass, or the tendency of people to overconsume low-fat food in the belief that it is not fattening. The term, which emerged as a reaction to dietary trends in the 1980s and 1990s, is also used for similar effects in other settings, such as energy consumption, where it is termed the "rebound effect". For example, according to a 2008 study, people with energy-efficient washing machines wash more clothes. People with energy-efficient lights leave them on longer, losing 5–12% of the expected energy savings of 80%.
**JSON streaming** JSON streaming: JSON streaming comprises communications protocols, built upon lower-level stream-oriented protocols (such as TCP), that delimit JSON objects so that individual objects can be recognized, provided the server and clients agree on the same framing convention (e.g. one implicitly coded in). This is necessary because JSON is non-concatenative: the concatenation of two JSON objects does not produce a valid JSON object. Introduction: JSON is a popular format for exchanging object data between systems. Frequently there's a need for a stream of objects to be sent over a single connection, such as a stock ticker or application log records. In these cases there's a need to identify where one JSON encoded object ends and the next begins. Technically this is known as framing. Introduction: There are four common ways to achieve this: Send the JSON objects formatted without newlines and use a newline as the delimiter. Send the JSON objects concatenated with a record separator control character as the delimiter. Send the JSON objects concatenated with no delimiters and rely on a streaming parser to extract them. Send the JSON objects prefixed with their length and rely on a streaming parser to extract them. Comparison Line-delimited JSON works very well with traditional line-oriented tools. Concatenated JSON works with pretty-printed JSON but requires more effort and complexity to parse. It doesn't work well with traditional line-oriented tools. Concatenated JSON streaming is a superset of line-delimited JSON streaming. Length-prefixed JSON works with pretty-printed JSON. It doesn't work well with traditional line-oriented tools, but may offer performance advantages over line-delimited or concatenated streaming. It can also be simpler to parse. Newline-Delimited JSON: Two terms for equivalent formats of line-delimited JSON are: newline-delimited JSON (NDJSON), formerly called line-delimited JSON (LDJSON), and JSON Lines (JSONL). Newline-Delimited JSON: Streaming makes use of the fact that the JSON format does not allow return and newline characters within primitive values (in strings those must be escaped as \r and \n, respectively) and that most JSON formatters default to not including any whitespace, including returns and newlines. These features allow the newline character or return and newline character sequence to be used as a delimiter. Newline-Delimited JSON: In this format each JSON object is written on its own line, terminated by an implicit newline character. The use of a newline as a delimiter enables this format to work very well with traditional line-oriented Unix tools. A log file, for example, might consist of one JSON record per line, which is very easy to sort by date, grep for usernames, actions, IP addresses, etc. (a minimal sketch of the format appears below). Compatibility Line-delimited JSON can be read by a parser that can handle concatenated JSON. Concatenated JSON that contains newlines within a JSON object can't be read by a line-delimited JSON parser. The terms "line-delimited JSON" and "newline-delimited JSON" are often used without clarifying whether embedded newlines are supported. In the past the NDJ specification ("newline-delimited JSON") allowed comments to be embedded if the first two characters of a given line were "//". Such comments could not be handled by standard JSON parsers. The current version of the specification ("NDJSON - Newline delimited JSON") no longer includes comments. Concatenated JSON can be converted into line-delimited JSON by a suitable JSON utility such as jq; for example, invoking jq with the -c (compact output) option and the identity filter "." reads a stream of concatenated JSON values and writes each one on its own line.
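A minimal Python sketch of the line-delimited framing described above (the field names in the records are invented for illustration): each record is serialized without embedded newlines and written on its own line, and the reader simply splits on newlines:

```python
# Write and read line-delimited JSON (NDJSON) using an in-memory stream.
import io
import json

records = [
    {"ts": "2024-01-01T12:00:00Z", "user": "alice", "action": "login"},
    {"ts": "2024-01-01T12:00:05Z", "user": "bob", "action": "logout"},
]

stream = io.StringIO()
for rec in records:
    stream.write(json.dumps(rec) + "\n")   # json.dumps emits no raw newlines

stream.seek(0)
for line in stream:                        # framing: one object per line
    print(json.loads(line))
```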
Record separator-delimited JSON: Record separator-delimited JSON streaming allows JSON text sequences to be delimited without requiring that the JSON formatter exclude whitespace. Since JSON texts cannot contain unescaped control characters, a record separator control character can be used to delimit the sequences. In addition, it is suggested that each JSON text sequence be followed by a line feed character to allow proper handling of top-level JSON values that are not self-delimiting (numbers, true, false, and null). This format is also known as JSON text sequences or MIME type application/json-seq, and is formally described in IETF RFC 7464. In such a stream, each JSON text is preceded by the record separator control character (␞) and followed by a line feed character (␊); a sketch of this framing appears after the applications list below.

Concatenated JSON: Concatenated JSON streaming allows the sender to simply write each JSON object into the stream with no delimiters. It relies on the receiver using a parser that can recognize and emit each JSON object as the terminating character is parsed. Concatenated JSON is not a new format; it is simply a name for streaming multiple JSON objects without any delimiters. The advantage of this format is that it can handle JSON objects that have been formatted with embedded newline characters, e.g., pretty-printed for human readability; a compact object and its pretty-printed equivalent are both valid inputs and produce the same output. Implementations that rely on line-based input may require a newline character after each JSON object in order for the object to be emitted by the parser in a timely manner (otherwise the line may remain in the input buffer without being passed to the parser). This is rarely recognized as an issue because terminating JSON objects with a newline character is very common.

Length-prefixed JSON: Length-prefixed or framed JSON streaming allows the sender to explicitly state the length of each message. It relies on the receiver using a parser that can recognize each length n and then read the following n bytes to parse as JSON. The advantage of this format is that it can speed up parsing, because the exact length of each message is explicitly stated rather than the parser having to search for delimiters. Length-prefixed JSON is also well suited to TCP applications, where a single "message" may be divided into arbitrary chunks, because the prefixed length tells the parser exactly how many bytes to expect before attempting to parse a JSON string. A sketch of this framing, together with a concatenated-JSON reader, appears after the applications list below.

Applications and tools:

Line-delimited JSON:
- jq can both create and read line-delimited JSON texts.
- Jackson (API) can read and write line-delimited JSON texts.
- logstash includes a json_lines codec.
- ldjson-stream, a module for Node.js.
- ld-jsonstream, a dependency-free module for Node.js.
- ArduinoJson, a C++ library that supports line-delimited JSON.
- RecordStream, a set of tools to manipulate line-delimited JSON (generate, transform, collect statistics, and format results).
- The Go standard library's encoding/json package can be used to read and write line-delimited JSON.
- RDF4J and Ontotext GraphDB have supported NDJSON for JSON-LD (called NDJSONLD) since February 2021.

Record separator-delimited JSON:
- jq can both create and read record separator-delimited JSON texts.
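To make the record-separator framing concrete, here is a minimal Python sketch of an RFC 7464-style encoder and decoder. It illustrates only the framing rule (a record separator before each JSON text, a line feed after it), not a complete implementation of the RFC.

```python
import json

RS = "\x1e"  # record separator control character (shown as ␞ above)
LF = "\n"    # line feed character (shown as ␊ above)

def encode_seq(values):
    # Each JSON text is preceded by RS and followed by LF.
    return "".join(RS + json.dumps(v) + LF for v in values)

def decode_seq(stream):
    # Split on RS; the empty chunk before the first RS is skipped.
    for chunk in stream.split(RS):
        if chunk:
            yield json.loads(chunk)  # json.loads tolerates the trailing LF

seq = encode_seq([{"some": "thing"}, 42, None])
print(list(decode_seq(seq)))  # [{'some': 'thing'}, 42, None]
```

Because the record separator cannot appear inside a JSON text, a decoder that joins the stream mid-flow can simply discard input until the next RS and resynchronize; the trailing line feed is what makes bare values such as 42 self-delimiting.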
Concatenated JSON:
- concatjson, a concatenated JSON streaming parser/serializer module for Node.js.
- Jackson (API) can read and write concatenated JSON content.
- jq, a lightweight, flexible command-line JSON processor.
- Noggit, Solr's streaming JSON parser for Java.
- YAJL (Yet Another JSON Library), a small event-driven (SAX-style) JSON parser written in ANSI C, with a small validating JSON generator.
- ArduinoJson, a C++ library that supports concatenated JSON.
- GSON's JsonStreamParser.java can read concatenated JSON.
- json-stream, a streaming JSON parser for Python.

Length-prefixed JSON:
- missive, a fast, lightweight library for encoding and decoding length-prefixed JSON messages over streams.
- Native messaging (WebExtensions Native Messaging).
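As a sketch of the remaining two framings, the following Python snippet reads concatenated JSON with the standard library's json.JSONDecoder.raw_decode, which parses one value and reports where it ended, and reads length-prefixed JSON under one common convention: a decimal character count followed immediately by the payload. The decimal-prefix convention is an assumption (implementations also use delimited or fixed-width binary lengths), and the sketch counts characters rather than bytes, which coincides only for ASCII.

```python
import json

_decoder = json.JSONDecoder()

def iter_concatenated(text):
    """Yield values from concatenated JSON (no delimiters between values)."""
    i, n = 0, len(text)
    while i < n:
        # raw_decode parses one JSON value starting at index i and returns
        # the value together with the index just past it.
        obj, end = _decoder.raw_decode(text, i)
        yield obj
        i = end
        while i < n and text[i].isspace():  # skip whitespace between values
            i += 1

def iter_length_prefixed(text):
    """Yield values from length-prefixed JSON (assumes well-formed input)."""
    i = 0
    while i < len(text):
        j = i
        while text[j].isdigit():  # read the decimal length prefix
            j += 1
        n = int(text[i:j])
        yield json.loads(text[j:j + n])
        i = j + n  # jump straight past the payload; no delimiter search

print(list(iter_concatenated('{"a":1}\n{\n  "b": 2\n}')))  # [{'a': 1}, {'b': 2}]
print(list(iter_length_prefixed('7{"a":1}18{"some":"thing\\n"}')))
```

Note how the length-prefixed reader never scans the payload for a delimiter, which is the performance advantage described above; on the other hand, a bare decimal prefix would be ambiguous if a top-level number followed it, which is one reason real protocols often delimit or fix the width of the length field.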
**Time-lapse microscopy**

Time-lapse microscopy: Time-lapse microscopy is time-lapse photography applied to microscopy. Microscope image sequences are recorded and then viewed at a greater speed to give an accelerated view of the microscopic process. Before the introduction of the video tape recorder in the 1960s, time-lapse microscopy recordings were made on photographic film. During this period, time-lapse microscopy was referred to as microcinematography. With the increasing use of video recorders, the term time-lapse video microscopy was gradually adopted. Today the word video is increasingly dropped, reflecting that a digital still camera, rather than a video recorder, is used to record the individual image frames.

Applications: Time-lapse microscopy can be used to observe any microscopic object over time. However, its main use is within cell biology to observe artificially cultured cells. Depending on the cell culture, different microscopy techniques can be applied to enhance characteristics of the cells, as most cells are transparent. To enhance observations further, cells have traditionally been stained before observation; unfortunately, the staining process kills the cells. The development of less destructive staining methods and of methods to observe unstained cells has led cell biologists increasingly to observe living cells, a practice known as live cell imaging. A few tools have been developed to identify and analyze single cells during live cell imaging. Time-lapse microscopy is the method that extends live cell imaging from a single observation in time to the observation of cellular dynamics over long periods of time. Time-lapse microscopy is primarily used in research, but it is also used clinically in IVF clinics, as studies have indicated that it can increase pregnancy rates, lower abortion rates, and help predict aneuploidy. Modern approaches are further extending time-lapse microscopy observations beyond making movies of cellular dynamics.

Applications: Traditionally, cells have been observed in a microscope and measured in a cytometer. Increasingly, this boundary is blurred as cytometric techniques are being integrated with imaging techniques for monitoring and measuring dynamic activities of cells and subcellular structures.

History: The Cheese Mites by Martin Duncan, from 1903, is one of the earliest microcinematographic films. However, the early development of scientific microcinematography took place in Paris. The first reported time-lapse microscope was assembled in the late 1890s at the Marey Institute, founded by the pioneer of chronophotography, Étienne-Jules Marey. It was, however, Jean Comandon who made the first significant scientific contributions, around 1910. Comandon was a trained microbiologist specializing in syphilis research.

History: Inspired by Victor Henri's microcinematic work on Brownian motion, Comandon used the newly invented ultramicroscope to study the movements of the syphilis bacteria. At the time, the ultramicroscope was the only microscope in which the thin, spiral-shaped bacteria were visible. Using an enormous cinema camera bolted to the fragile microscope, he demonstrated visually that the movement of the disease-causing bacteria is distinctly different from that of the non-disease-causing form. Comandon's films proved instrumental in teaching doctors how to distinguish the two forms. His extensive pioneering work inspired others to adopt microcinematography. Heinz Rosenberger built a microcinematograph in the mid-1920s.
In collaboration with Alexis Carrel, Rosenberger used the device to further develop Carrel's cell-culturing techniques. Similar work was conducted by Warren Lewis. During World War II, Carl Zeiss AG released the first phase-contrast microscope on the market. With this new microscope, cellular details could for the first time be observed without using lethal stains.

History: By setting up some of the first time-lapse experiments with chicken fibroblasts and a phase-contrast microscope, Michael Abercrombie described the basis of our current understanding of cell migration in 1953. With the broad introduction of the digital camera at the beginning of this century, time-lapse microscopy has become dramatically more accessible and is currently experiencing an unprecedented rise in scientific publications.
**Style sheet (desktop publishing)**

Style sheet (desktop publishing): A style sheet is a feature in desktop publishing programs that stores and applies formatting to text. Style sheets are a form of separation of presentation and content: they create a separate abstraction that keeps the presentation isolated from the text data. Style sheets are a common feature in most popular desktop publishing and word processing programs, including Corel Ventura, Adobe InDesign, Scribus, PageMaker, QuarkXPress, WordPerfect, and Microsoft Word, though they may be referred to using slightly different terminology.

Use: Individual styles are created by the user and may include a wide variety of commands that dictate how a selected portion of text is formatted:
- Typeface or font
- Boldfacing
- Italicizing
- Underlining
- Justification (left, right, center, justify, force justify)
- Space before and after paragraphs
- Tab stops and indentation
- Type size
- Leading
- Kerning
- Tracking
- Color
- Borders or strokes
- Superscript or subscript
- Drop caps
- Letter case
- Strikethrough
- Outline font style
- Hyphenation

Use: In most programs with style sheets, there is a window or menu listing the style sheets the user has associated with the document. For example, a newspaper may have a style sheet for its story text called "Body copy" that sets the type at 10-point Nimrod with 11-point leading and justified alignment. Most programs allow users to name their own styles; usually easy-to-remember names are used that describe what the style is for, such as "headline," "subhead," and "byline." To apply a style to a portion of text, most programs allow users to select the text with the mouse and then click on the desired style in a style panel.

Use: Some programs split style sheets into two classes: paragraph and character. Paragraph style sheets are applied to an entire paragraph, while character styles are applied to only a selected run of characters. Character styles are useful when a user needs to format only a small portion of a paragraph. For example, a newspaper may publish lists of current movies by starting with the name of a movie in a bold, sans-serif typeface; then, without starting a new paragraph, the review continues in the standard story-text format. In this case, the designer could highlight the movie title and select the appropriate character style to apply the formatting only to the title, leaving the rest of the paragraph styled independently.

Use: More advanced layout programs allow users to format more complex paragraphs with a single paragraph style. Using the movie-review example above, suppose the newspaper always places a colon after the movie title and runs ten short movie reviews as one large story. In this case, the style could be programmed to apply the bold, sans-serif typeface at the start of each new paragraph until it encounters a colon, after which the style switches to the standard story-text style. The designer could then highlight the entire collection and apply a single style that automatically formats the whole story, without having to apply separate character styles to each of the ten reviews.

Use: Some scorewriters, including MuseScore and Sibelius, implement style sheets to control the appearance and layout of sheet music.

Benefits: Style sheets help publications maintain consistency, so common elements such as story text, headlines, and bylines always appear the same.
Style sheets also save time by allowing a designer to apply a named style with a single click, rather than applying each attribute one at a time and risking an incorrect value. Finally, style sheets are useful when a publication decides to change its design, say, to make the story text slightly smaller. A user with the proper administrative access can make the change to the master style sheet and then "send" the revised style sheets to all users, so the change is automatically reflected everywhere; the sketch below pictures this mechanism.
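The mechanism can be pictured with a small data-structure sketch. The following Python snippet is a hypothetical model, not any product's actual API: named paragraph styles live in one registry, text blocks reference styles by name, and a single edit to the registry restyles every block that uses the style.

```python
# A hypothetical model of named paragraph styles; not any product's real API.
style_sheet = {
    "Body copy": {"font": "Nimrod", "size": 10, "leading": 11, "align": "justify"},
    "Headline":  {"font": "Helvetica", "size": 30, "leading": 32, "align": "left"},
}

# Content stores only a style *name*, keeping presentation separate from text.
story = [
    {"style": "Headline", "text": "Local team wins"},
    {"style": "Body copy", "text": "The match ended late on Saturday..."},
]

def render(blocks, styles):
    for block in blocks:
        s = styles[block["style"]]
        print(f'[{s["font"]} {s["size"]}/{s["leading"]} {s["align"]}] {block["text"]}')

render(story, style_sheet)

# One edit to the master style sheet restyles every block that references it.
style_sheet["Body copy"]["size"] = 9.5
render(story, style_sheet)
```

The point of the sketch is the indirection: because the text refers to "Body copy" by name, shrinking the story type touches one registry entry rather than every paragraph in the publication.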
**Federer–Morse theorem**

Federer–Morse theorem: In mathematics, the Federer–Morse theorem, introduced by Federer and Morse (1943), states that if f is a surjective continuous map from a compact metric space X to a compact metric space Y, then there is a Borel subset Z of X such that f restricted to Z is a bijection from Z to Y. Moreover, the inverse of that restriction is a Borel section of f, so the restriction is a Borel isomorphism.
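For reference, the statement can be written compactly in LaTeX; this is a restatement of the theorem above, with $\mathcal{B}(X)$ denoting the Borel subsets of $X$.

```latex
\textbf{Theorem (Federer--Morse).}
Let $X$ and $Y$ be compact metric spaces and let $f\colon X \to Y$ be a
continuous surjection. Then there exists a Borel set $Z \in \mathcal{B}(X)$
such that the restriction $f|_Z \colon Z \to Y$ is a bijection.
Consequently, its inverse $s := (f|_Z)^{-1}\colon Y \to X$ is a Borel
section of $f$, i.e.\ $f \circ s = \mathrm{id}_Y$ and $s$ is Borel measurable.
```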